The primary goal of NAEP data analysis is to summarize the performance of groups of students. The analysis consists of several steps. Initial activities include calculating simple counts and percentages for contextual variables, as well as classical test statistics. These initial activities serve three purposes. First, they verify the accuracy of the data used in the analysis. Second, they provide the first indication of aspects of the data and analysis that require special consideration and attention. Finally, the initial item analysis provides starting values for the scaling process. Some of these activities are conducted without student weights or with preliminary student weights, but final student weights are used whenever possible.
After the initial activities are completed, NAEP score scales are created via
Item Response Theory (IRT), and
scale score distributions are estimated for groups of students. When the score scales are created, parameters describing the item response characteristics are estimated. In years in which state assessments take place, the same score scales are used for both national and state assessment results. Because NAEP is not designed to report individual test scores, it produces estimates of scale score distributions for groups of students. The resulting scale score distributions describing student performance are transformed to a NAEP reporting scale, and summary statistics of the scale scores are estimated. Statistical tests are used to draw inferences from comparisons of results across groups of students or across assessment years. Finally, NAEP scale score distributions are described via National Assessment Governing Board achievement levels and/or item mapping procedures. Subjects for which the Governing Board has established achievement levels include civics, science, technology and engineering literacy (TEL), and U.S. history.
Separate analysis plans are developed for each NAEP assessment; for context, see the overview of NAEP assessment designs.
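The IRT scaling and the transformation to a reporting scale described above can be sketched as follows. This is a minimal illustration only: the item parameters, the 3PL functional form shown here, and the transformation constants are assumptions for the example, and NAEP's operational scaling is far more involved.

```python
# Illustrative sketch of an IRT item response function and a linear
# transformation to a reporting scale (made-up parameters; not NAEP's
# operational values or procedures).
import math

def p_correct_3pl(theta: float, a: float, b: float, c: float) -> float:
    """Probability of a correct response under a three-parameter logistic
    (3PL) model: guessing c, discrimination a, difficulty b."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def to_reporting_scale(theta: float,
                       slope: float = 50.0,
                       intercept: float = 250.0) -> float:
    """Linear transformation from the theta metric to a reporting metric
    (slope and intercept here are hypothetical)."""
    return slope * theta + intercept

# At theta == b, the 3PL probability is halfway between c and 1.
print(round(p_correct_3pl(0.0, a=1.0, b=0.0, c=0.2), 2))  # → 0.6
print(to_reporting_scale(0.5))  # → 275.0
```

In this sketch, estimating the item parameters (a, b, c) corresponds to the scale-creation step, and the linear transformation corresponds to placing results on the NAEP reporting scale before group summary statistics are computed.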