FAQs

1. What are the conditions for using DTAMS?

Please see the information on the “overview” page of DTAMS. Briefly, these DTAMS resources have been available free of charge in a self-service format since January 2019. Users are requested to acknowledge (cite) our research center as the original source in any use they make of DTAMS.

2. Is training required to administer the measurement tool?

Training is not required. We recommend allotting approximately 1 hour for teachers to complete the assessment.

3. Is there any cost to use DTAMS?

No. As of January 2019 we are no longer offering our DTAMS scoring service, and there is no longer any fee associated with using DTAMS.

4. How long does it take teachers to complete one assessment?

Completion time generally ranges from 30 to 50 minutes, with most participants finishing in under 45 minutes. For the posttest, some teachers write more in response to the open-response questions, which can add about 10 minutes.

5. If I’m interested in using these assessments, what are my next steps?

Browse the DTAMS websites (part of the Center for Research in Mathematics and Science Teacher Development – CRIMSTED – at the University of Louisville) to locate the specific assessment you are interested in. You may download and use (or adapt) the assessments as detailed on the “overview” page. Scoring guides and a selection of other resources to aid in interpreting DTAMS scores are also available.

6. How is test security maintained?

As is true of many educational measures published in journals, measurement yearbooks, and other publicly available sources, the DTAMS assessments are now available to anyone via the internet. Educational researchers often do not appear to be greatly concerned about test security when using such measures. We do not currently undertake any specific procedures to maintain test security. Project directors, evaluators, and other prospective users need to decide for themselves how serious this issue is in their context and whether to use DTAMS accordingly. As noted in the “overview” tab for DTAMS, users may edit or adapt the assessments (e.g., rearrange response choices, edit some response choices, or mix items across different versions while paying attention to which subscore each item contributes to) as one way to minimize test security concerns.

7. What are your recommendations for using these assessments?

Our position is to leave it up to the users to decide how these assessments will best serve them. Below are some examples of what others have done or are doing with these assessments.
(a) These assessments are intended for diagnostic purposes, and we suggest they are best used to measure growth or to identify strengths and weaknesses of individual teachers.
(b) Some have administered all 3 science content areas as a pretest and used the results to determine which content area to focus on in upcoming professional learning offerings. CAUTION: To avoid test fatigue, we recommend not administering all 3 of these on the same day.
(c) Some have chosen to focus primarily on one or more of the knowledge-type subscores or content subcategory scores. For example, some users were most interested in pedagogy (how teachers use their knowledge to teach), others in enhancing science inquiry skills, and still others in deep, schematic knowledge. Some have used the content subcategory scores to choose an emphasis on one content area over another, e.g. "force and motion" for physical science. CAUTION: Since each of these subscores is based on fewer items than the assessment overall, conclusions drawn from subscores alone are more tentative than those drawn from the total score and should be made cautiously.
(d) Many have used these assessments in a pre-post design to look for gains. Some examined gains in subscores (either knowledge type or content subcategory) as well as overall gains; the caution in 7(c) applies here as well. CAUTION: Also be sure that administration conditions support valid interpretation of scores. For example, some users reported that when they administered a posttest at the end of a Friday, after a long and mentally tiring series of professional learning days, teachers may have been tired and/or less than fully motivated to demonstrate their full knowledge, which would undermine interpretation of the scores. A minimal sketch of a gain calculation appears below.
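For users who tabulate their own pre-post results, the following is a minimal sketch, in Python, of one way to compute per-teacher and mean gains. All names in it (teacher IDs, subscore labels, point values) are hypothetical placeholders, not actual DTAMS subscores or scales, and the scores are assumed to have already been produced with the DTAMS scoring guides.

# Minimal sketch of a pre-post gain calculation (hypothetical data).
# Subscore labels and point values below are placeholders, not actual
# DTAMS subscores; real scores would come from the scoring guides.
pretest = {
    "T01": {"total": 28, "subscore_A": 9, "subscore_B": 11},
    "T02": {"total": 33, "subscore_A": 12, "subscore_B": 12},
}
posttest = {
    "T01": {"total": 35, "subscore_A": 11, "subscore_B": 14},
    "T02": {"total": 36, "subscore_A": 12, "subscore_B": 13},
}

def gains(pre, post):
    """Per-teacher gains, keeping only teachers with both administrations."""
    return {
        tid: {score: post[tid][score] - pre[tid][score] for score in pre[tid]}
        for tid in pre if tid in post
    }

matched = gains(pretest, posttest)
for teacher, g in matched.items():
    print(teacher, g)

# Mean gain per score across matched teachers (per the caution in 7(c),
# subscore means rest on fewer items, so conclusions from them are more tentative).
labels = next(iter(matched.values()))
mean_gain = {score: sum(g[score] for g in matched.values()) / len(matched)
             for score in labels}
print("mean gains:", mean_gain)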