
NAEP Technical Documentation: Treatment of Missing Responses in NAEP

In almost all NAEP Item Response Theory (IRT) analyses, missing responses at the end of a block of items are considered not-reached items and are treated as if they had not been presented to the respondent. Occasionally, an extended constructed-response item is the last item in a block. Because these items demand considerably more effort from the student, nonresponse to an extended constructed-response item at the end of a block is considered an intentional omission (and scored in the lowest category) unless the student also did not respond to the item immediately preceding it; in that case, the extended constructed-response item is considered not reached and is treated as if it had not been presented to the student. In the national main and state writing assessments, each separately timed block contains a single constructed-response item. In the writing assessment, when a student does not respond to the item, or when the student provides an off-task response, the response is likewise treated as if the item had not been administered.
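The classification rules above can be sketched in code. The function below is an illustrative assumption, not NAEP's actual implementation; the item-type codes (`"mc"`, `"cr"`, `"ecr"`) and label names are hypothetical.

```python
MISSING = None  # sentinel for a missing response

def classify_missing(responses, item_types):
    """Label each response in a block as 'answered', 'omitted', or 'not_reached'.

    responses  -- list of raw responses (MISSING where no response was given)
    item_types -- parallel list of item types: 'mc' (multiple choice),
                  'cr' (constructed response), or 'ecr' (extended CR)
    """
    n = len(responses)
    # Index of the last item the student actually answered (-1 if none).
    last_answered = max(
        (i for i, r in enumerate(responses) if r is not MISSING), default=-1
    )
    labels = []
    for i, r in enumerate(responses):
        if r is not MISSING:
            labels.append("answered")
        elif i <= last_answered:
            # Missing before the last observed response: intentional omission.
            labels.append("omitted")
        elif (item_types[i] == "ecr" and i == n - 1
              and i > 0 and responses[i - 1] is not MISSING):
            # An extended constructed-response item at the end of the block
            # counts as omitted unless the preceding item was also missing.
            labels.append("omitted")
        else:
            # Trailing missing responses are otherwise 'not reached'.
            labels.append("not_reached")
    return labels
```

For example, a block ending in an unanswered extended constructed-response item is labeled `omitted` when the item just before it was answered, but `not_reached` when that item was also skipped.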

Missing responses to items that precede the last observed response in a block are considered intentional omissions. If the omitted item is a multiple-choice item, the missing response is treated as fractionally correct, with the value set to the reciprocal of the number of response alternatives. If the omitted item is not a multiple-choice item, the missing response is scored in the lowest category.
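A minimal sketch of this scoring convention, assuming the lowest score category is 0; the function name and arguments are illustrative, not part of NAEP's documented software:

```python
def score_omitted(item_type, n_alternatives=None):
    """Score assigned to an intentionally omitted item.

    A multiple-choice omission is treated as fractionally correct at
    1 / (number of response alternatives); any other omission is placed
    in the lowest score category (assumed here to be 0).
    """
    if item_type == "mc":
        return 1.0 / n_alternatives
    return 0

# e.g., an omitted four-option multiple-choice item scores 1/4 = 0.25
```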

These conventions are discussed by Mislevy and Wu (1988). With regard to not-reached items, Mislevy and Wu found that ignoring them introduces slight biases into item parameter estimation when speed is correlated with ability. With regard to omissions, they found that the method described above provides consistent limited-information maximum likelihood estimates of item and ability parameters, under the assumption that respondents omit only when they can do no better than respond at random.


Last updated 14 July 2008 (DR)
