
Note 4: National Assessment of Educational Progress (NAEP) (2009)

The National Assessment of Educational Progress (NAEP), governed by the National Assessment Governing Board (NAGB), is administered regularly in a number of academic subjects. Since its creation in 1969, NAEP has had two major goals: to assess student performance reflecting current educational and assessment practices and to measure change in student performance reliably over time. To address these goals, NAEP conducts a main assessment and a long-term trend assessment. The two assessments are administered to separate samples of students at separate times, use separate instruments, and measure different educational content. Thus, results from the two assessments should not be directly compared.


Indicators 12 and 13 are based on the main NAEP. Begun in 1990, the main NAEP periodically assesses students' performance in several subjects in grades 4, 8, and 12, following the assessment framework developed by NAGB and using the latest advances in assessment methodology. NAGB develops the frameworks from standards developed within the field, through a consensus process involving educators, subject-matter experts, and other interested citizens. Each round of the main NAEP includes a student assessment and background questionnaires (for the student, teacher, and school) to provide information on instructional experiences and the school environment at each grade.

Through 1988, NAEP reported only on the academic achievement of the nation as a whole and subgroups within the population. Because the national samples were not designed to support the reporting of accurate and representative state-level results, Congress passed legislation in 1988 authorizing a voluntary Trial State Assessment (TSA). Separate representative samples of students were selected for each state or jurisdiction that agreed to participate in state NAEP. TSAs were conducted in 1990, 1992, and 1994 and were evaluated thoroughly. Beginning with the 1996 assessment, the authorizing statute no longer considered the state component to be a "trial" assessment.

A significant change to state NAEP occurred in 2001 with the reauthorization of the Elementary and Secondary Education Act, also referred to as the "No Child Left Behind" legislation. This legislation requires states that receive Title I funding to participate in state NAEP in reading and mathematics at grades 4 and 8 every two years. State participation in other state NAEP subjects, including science and writing, remains voluntary.

The assessments given in the states are exactly the same as those given nationally. The assessments follow the subject area frameworks developed by NAGB and use the latest advances in assessment methodology. State NAEP assesses at grades 4 and 8, but not at grade 12. The assessments allow states to monitor their own progress over time in the selected subject areas. States can then compare the knowledge and skills of their students with those of students in other participating states and across the nation.

The ability of the assessments to measure change in student performance over time is sometimes limited by changes in the NAEP framework. While shorter-term trends can be measured in most of the NAEP subjects, data from different assessments are not always comparable. (In cases where the framework of a given assessment changes, linking studies are generally conducted to ensure comparability over time.) However, recent main NAEP assessment instruments for science and reading have typically been kept stable for shorter periods, allowing for comparisons across time. For example, from 1990 to 2005, in general, assessment instruments in the same subject areas were developed using the same framework, shared a common set of questions, and used comparable procedures to sample and assess student populations. In 2005, NAGB revised the grade 12 mathematics framework to reflect changes in high school mathematics standards and coursework. As a result, even though many questions are repeated from previous assessments, the 2005 mathematics results cannot be directly compared with those from previous years.

NAGB called for the development of a new mathematics framework for the 2005 assessment. The revisions made to the mathematics framework for the 2005 assessment were intended to reflect recent curricular emphases and to specify the objectives for each grade level more clearly. The revised mathematics framework focuses on two dimensions: mathematical content and cognitive demand. By considering these two dimensions for each item in the assessment, the framework ensures that NAEP assesses an appropriate balance of content along with a variety of ways of knowing and doing mathematics. For grades 4 and 8, comparisons over time can be made between assessments administered before and after the implementation of the 2005 framework. In grade 12, with the implementation of the 2005 framework, the assessment included more questions on algebra, data analysis, and probability to reflect changes in high school mathematics standards and coursework. Additionally, the measurement and geometry content areas were merged. Grade 12 results could not be placed on the old NAEP scale and could not be directly compared with previous years. The reporting scale for grade 12 mathematics was changed from 0-500 to 0-300. For more information regarding the 2005 framework revisions, see:

The main NAEP results are reported in The Condition of Education in terms of both average scale scores and achievement levels. The achievement levels define what students who are performing at the Basic, Proficient, and Advanced levels of achievement should know and be able to do. NAGB establishes new achievement levels whenever a new main NAEP framework is adopted. As provided by law, NCES, upon review of congressionally mandated evaluations of NAEP, has determined that achievement levels are to be used on a trial basis and should be interpreted with caution. NAEP achievement levels have been widely used by national and state officials. The policy definitions of the achievement levels that apply across all grades and subject areas are as follows: Basic denotes partial mastery of prerequisite knowledge and skills that are fundamental for proficient work at each grade; Proficient represents solid academic performance, demonstrating competency over challenging subject matter; and Advanced signifies superior performance.

In indicators 12 and 13, the percentages of students at or above Basic and at or above Proficient are reported. The percentage of students at or above Proficient includes students at the Proficient and Advanced achievement levels. Similarly, the percentage of students at or above Basic includes students at the Basic, Proficient, and Advanced achievement levels.
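The cumulative percentages described above are simple sums over the discrete achievement levels. A minimal sketch in Python, using illustrative figures rather than actual NAEP data:

```python
# Hypothetical achievement-level distribution: percentage of students
# scoring at each discrete level (values are illustrative, not real NAEP results).
levels = {
    "below_basic": 31.0,
    "basic": 33.0,
    "proficient": 30.0,
    "advanced": 6.0,
}

# "At or above" percentages are cumulative sums of the levels at or above the cutoff.
at_or_above_basic = levels["basic"] + levels["proficient"] + levels["advanced"]
at_or_above_proficient = levels["proficient"] + levels["advanced"]

print(at_or_above_basic)       # 69.0 under these illustrative figures
print(at_or_above_proficient)  # 36.0 under these illustrative figures
```

Note that the four level percentages sum to 100, so "at or above Basic" is also 100 minus the Below Basic percentage.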

Unlike estimates from other sample surveys presented in this report, NAEP estimates that are potentially unstable (large standard error compared with the estimate) are not flagged as potentially unreliable. This practice for NAEP estimates is consistent with the current output from the NAEP online data analysis tool. The reader should always consult the appropriate standard errors when interpreting these findings. For additional information on NAEP, including technical aspects of scoring and assessment validity and more specific information on achievement levels, see
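One common way to operationalize "large standard error compared with the estimate" is the coefficient of variation, the ratio of the standard error to the estimate. The sketch below illustrates such a flagging rule; the 30 percent threshold is an assumption chosen for illustration, not a rule NAEP or NCES applies:

```python
def is_potentially_unstable(estimate, standard_error, cv_threshold=0.30):
    """Flag an estimate whose standard error is large relative to the
    estimate itself, i.e., whose coefficient of variation exceeds the
    (assumed, illustrative) threshold."""
    if estimate == 0:
        return True  # relative precision is undefined for a zero estimate
    return (standard_error / abs(estimate)) > cv_threshold

# A small percentage with a sizable standard error is flagged...
print(is_potentially_unstable(4.0, 1.5))    # CV = 0.375 -> True
# ...while a large scale score with a small standard error is not.
print(is_potentially_unstable(250.0, 1.2))  # CV = 0.0048 -> False
```

Because NAEP output does not apply such a flag, readers consulting the published standard errors can apply a check like this themselves when comparing estimates.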

Until 1996, the main NAEP assessments excluded certain subgroups of students identified as "special needs students," including students with disabilities and students with limited English proficiency. For the 1996 and 2000 mathematics assessments and the 1998 and 2000 reading assessments, the main NAEP included a separate assessment with provisions for accommodating these students (e.g., extended time, small-group testing, or having mathematics questions read aloud). Thus, for these years, there are results for both the unaccommodated assessment and the accommodated assessment. For the 2002, 2003, and 2005 reading assessments and the 2003 and 2005 mathematics assessments, the main NAEP did not include a separate unaccommodated assessment; only a single accommodated assessment was administered. The switch to a single accommodated assessment instrument was made after it was determined that accommodations in NAEP did not have any significant effect on student scores. Indicators 12 and 13 present NAEP results with and without accommodations.

Long-Term Trend NAEP

The long-term trend NAEP has measured student performance since the early 1970s. Originally, the long-term trend NAEP was designed, like the main NAEP, to measure student performance in mathematics, reading, science, and writing, but recent efforts have focused primarily on reading and mathematics. Indicator 14 reports findings from the long-term trend reading and mathematics assessments. Since the early 1970s, the long-term trend NAEP has used the same instruments to provide a means of comparing performance over time, but the instruments do not necessarily reflect current teaching standards or curricula. Results have been reported for students at ages 9, 13, and 17 in mathematics, reading, and science, and for students at grades 4, 8, and 12 in writing. Future assessments are scheduled to be conducted in reading and mathematics. Results from the long-term trend NAEP are presented as mean scale scores because, unlike the main NAEP, the long-term trend NAEP does not define achievement levels.

2004 Bridge Study

Several changes were made to the long-term trend assessment in 2004 to align it with best current assessment practices and with policies applicable to the NAEP main assessments. According to the new policy of NAGB, reading and mathematics are to be assessed by both the long-term trend instruments and the main NAEP instruments, but science and writing will be assessed only in main NAEP. As a result, changes were needed to remove the sets, or blocks, of questions for science and writing, which had been intermixed with the reading and mathematics blocks in the long-term trend assessment instruments.

The changes provided an opportunity to bring other aspects of the assessment up to date. Considerable progress in testing theory has been made since the late 1960s, when these assessments were first designed, and the 2004 administration provided an opportunity to bring these improvements to the long-term trend assessments. In addition, since 1996, main NAEP assessments have been providing accommodations to allow more students with disabilities and students who were not fluent in English to participate. Traditionally, the long-term trend assessments had not provided such accommodations. However, in 2004, it was possible to provide accommodations and assess a greater proportion of students.

As a result of these changes, two assessments were given in 2004—a modified assessment that contained many changes from previous assessments, and a bridge assessment that was used to link the modified assessment to the 1999 assessment so the trend line could be continued. The modified assessment included the following changes:

In 2004, students were randomly assigned to take either the bridge assessment or the modified assessment. The bridge assessment replicated the instrument given in 1999 and used the same administration procedures. The modified assessment included the new items and modifications listed above. The modified assessment will provide the basis of comparison for all future assessments, and the bridge assessment will link its results back to the results of the past 30 years. Comparing the results of the modified and bridge assessments demonstrates that the link between the two successfully continues the trend line.

Indicator 14 features data from the long-term trend reading and mathematics assessments. For more information on the long-term trend NAEP, see