
NAEP Technical Documentation: NAEP Scoring



Three types of cognitive items are scored for NAEP. Multiple-choice item responses are captured by high-speed scanners during student booklet processing. Short constructed-response items (typically those with two or three valid score points) and extended constructed-response items (typically those with four or more valid score points) are scored by trained scoring personnel. Unless otherwise noted, the term "scoring" in this section refers to constructed-response items.

Scoring a large number of short and extended constructed-response items with a high level of accuracy and reliability within a limited time frame is essential to the success of NAEP. To ensure reliable and efficient scoring of constructed-response items, NAEP takes the following steps:

  • develops focused and explicit scoring guides that match the criteria delineated in the assessment frameworks;

  • recruits qualified and experienced scorers, trains them, and verifies their ability to score particular questions through qualifying tests;

  • employs an image-processing and scoring system that routes images of student responses directly to the scorers so they can focus on scoring rather than paper routing;

  • monitors scorer consistency through ongoing reliability checks, including second scoring;

  • assesses the quality of scorer decision-making through frequent monitoring by NAEP assessment experts; and

  • documents all training, scoring, and quality control procedures in the technical reports.

The table below presents a general overview of recent NAEP scoring activities.

Processing and scoring totals, national and state assessments, by subject area and year: Various years, 2000–2014

Year | Subject area | Grade | Number of booklets scored | Number of constructed responses scored | Number of individual cognitive items | Number of team leaders | Number of scorers
2014 | U.S. History | 8 | 11,279 | 108,552 | 68 | 4 | 35
2014 | TEL | 8 | 21,579 | 269,867 | 98 | 6 | 44
2010 | U.S. History | 4, 8, 12 | 30,987 | 387,625 | 167 | 23 | 153
2006 | U.S. History | 4, 8, 12 | 38,400 | 458,172 | 132 | 21 | 65
2001 | U.S. History | 4, 8, 12 | 32,700 | 399,182 | 47 | 9 | 81

NOTE: Number of constructed responses scored includes second scores. Data for 2011 mathematics, reading, and science represent national and state assessments; data for 2011 writing represent the national assessment only. In 2014, the first national assessment of Technology and Engineering Literacy (TEL) was administered to eighth-graders on computer. The 2011 writing assessment was also computer-based. For TEL and 2011 writing, "Number of booklets scored" denotes the number of computer-based assessments scored. In 2010 and 2014, data for civics, geography, and U.S. history were combined into one social sciences assessment; trainers and teams scored a mixture of items from all three subject areas. Team leaders refers to the number of scoring supervisors.
SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP), various years, 2000–2014 assessments.


The table below presents a general overview of recent NAEP long-term trend scoring activities.

Processing and scoring totals, long-term trend assessments, by subject area and age: 2004, 2008, and 2012

Year | Subject area | Age | Number of booklets scored | Number of constructed responses scored | Number of individual cognitive items
2012 | Mathematics long-term trend | 9, 13, 17 | 26,210 | 422,192 | 181
2012 | Reading long-term trend | 9, 13, 17 | 26,352 | 47,241 | 19
2008 | Mathematics long-term trend | 9, 13, 17 | 28,465 | 452,994 | 179
2008 | Reading long-term trend | 9, 13, 17 | 26,621 | 51,743 | 19
2004 | Mathematics long-term trend | 9, 13, 17 | 40,300 | 1,082,923 | 219
2004 | Reading long-term trend | 9, 13, 17 | 41,200 | 131,496 | 34

NOTE: Number of constructed responses scored includes second scores.
SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP), Mathematics and Reading, 2004, 2008, and 2012 Long-Term Trend Assessments.


As new NAEP items are created, tested, and refined, test development staff write scoring guides that use a range of actual student responses, captured by the materials-processing staff, as specific examples. The scoring and test development staffs then create training materials that match the assessment framework criteria. Continuous documentation ensures that in future assessments the scoring staff train on and score each item the same way it was originally scored. This repeatability allows reporting on trends in student performance over time.

NAEP Scoring Staff

Scorers score student responses. Scoring supervisors provide logistical support to the trainers and help monitor team activities. Trainers are responsible for training both scorers and supervisors on specific content and for assuring that team scoring performance meets expectations. Content leads for each subject area (reading, science, etc.) oversee the trainers and provide support as needed.

Scorers must hold at least a baccalaureate degree from a four-year college or university; an advanced degree and prior scoring or teaching experience are preferred. In some subjects, scorers must also complete a placement test that identifies those with the appropriate content knowledge. Scoring teams are trained so that each student response is scored consistently. Following training, for all extended constructed-response items and for some short constructed-response items with particularly complex scoring guides, each scorer is given a pre-scored qualification set of student responses to score. Qualification standards for each item vary according to the number of score levels for the item. Individual scorer results are retained for all qualification sets.
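As an illustration, a qualification check of this kind can be sketched in a few lines. The thresholds below, keyed to an item's number of score points, are hypothetical placeholders, not NAEP's actual standards:

```python
import random

# Hypothetical qualification thresholds keyed by an item's number of valid
# score points; NAEP's actual standards vary by item and are not shown here.
QUALIFICATION_THRESHOLDS = {2: 0.90, 3: 0.85, 4: 0.80, 5: 0.80}

def exact_agreement(assigned, key):
    """Fraction of qualification responses where the scorer's score
    matches the pre-assigned score."""
    return sum(a == k for a, k in zip(assigned, key)) / len(key)

def qualifies(assigned, key, score_points):
    """True if the scorer meets the (assumed) threshold for an item
    with the given number of score points."""
    return exact_agreement(assigned, key) >= QUALIFICATION_THRESHOLDS[score_points]

# Example: a 4-point extended constructed-response item with a
# 20-response pre-scored qualification set.
rng = random.Random(0)
key = [rng.randint(1, 4) for _ in range(20)]  # pre-assigned scores
scorer = [s if rng.random() < 0.9 else rng.randint(1, 4) for s in key]
print(f"agreement = {exact_agreement(scorer, key):.2f}, "
      f"qualifies = {qualifies(scorer, key, 4)}")
```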

Scoring supervisors and trainers are selected based on several factors, including previous experience, educational and professional background, a demonstrated understanding of the scoring criteria, and strong interpersonal communication and organizational skills.

NAEP scoring teams usually consist of 10-12 scorers who are led by a scoring supervisor and a trainer. Prior to the scoring effort, all personnel are intensively trained. The trainers who train the scorers, the supervisors who oversee a group of scorers, and the scorers themselves are all given both general scoring training and item-specific content training.

NAEP Scoring System

The NAEP electronic scoring system uses secure network communications to transmit scanned images of paper student responses, or electronic text files of computer-based student responses, to the trained scorers and to receive back the scores they assign. Responses from paper booklets are scanned from the original test booklets; the actual booklets can be accessed and referenced if needed. The scorer sees each student response in isolation on a computer screen and assigns a score; the scorer cannot access any of the student's other responses, either to that item or to other items the student answered. As each response is scored, another student response is shown for scoring, until all responses for an item have been scored.
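This item-by-item routing can be sketched as a simple in-memory queue. The class name and item identifier below are illustrative, and the real NAEP system is a secure, distributed image-processing application rather than a single loop:

```python
from collections import deque

class ItemQueue:
    """Serves the responses for a single item, one at a time and in isolation."""

    def __init__(self, item_id, responses):
        self.item_id = item_id
        self.pending = deque(responses)  # (response_id, image) pairs awaiting scores
        self.scores = {}                 # response_id -> assigned score

    def next_response(self):
        # The scorer receives only this response, never the student's other
        # responses to this item or to any other item.
        return self.pending.popleft() if self.pending else None

    def record_score(self, response_id, score):
        self.scores[response_id] = score

# Route every response for one (hypothetical) item to a scorer.
queue = ItemQueue("item-042", [("r1", "<image>"), ("r2", "<image>")])
while (response := queue.next_response()) is not None:
    response_id, image = response
    queue.record_score(response_id, 2)  # score assigned by the trained scorer
print(queue.scores)                     # {'r1': 2, 'r2': 2}
```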

During scoring, the NAEP electronic scoring system provides documentation of numerous scoring metrics. Reports on item and scoring performance can be retrieved as needed. In addition, custom reports of daily activities are sent out nightly to development, scoring, and analysis staff to monitor NAEP scoring quality and progress.

All assessments are scored item by item so that scorers train on one item and one scoring guide at a time. This method is efficient only with electronic presentation of student responses.

NAEP Scoring Procedures

During the scoring of a particular item, the system randomly recirculates a percentage of scored responses to be rescored by a second scorer in order to check the consistency of current-year scoring: 5 percent of responses are second-scored for large samples, and 25 percent for smaller samples. The comparison of first and second scores yields the within-year interrater agreement.
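A sketch of this procedure follows, assuming a simple sample-size cutoff to distinguish large from small samples; the cutoff value is an assumption, not a published NAEP figure:

```python
import random

def select_for_second_scoring(response_ids, large_cutoff=5000, seed=None):
    """Randomly choose responses to recirculate for second scoring:
    5 percent for large samples, 25 percent for smaller ones.
    large_cutoff is an assumed parameter, not a published NAEP value."""
    rng = random.Random(seed)
    rate = 0.05 if len(response_ids) >= large_cutoff else 0.25
    return rng.sample(response_ids, round(len(response_ids) * rate))

def within_year_agreement(first_scores, second_scores):
    """Percent exact agreement on the double-scored subset; both
    arguments map response_id -> score."""
    double_scored = first_scores.keys() & second_scores.keys()
    exact = sum(1 for r in double_scored if first_scores[r] == second_scores[r])
    return 100.0 * exact / len(double_scored)

ids = [f"r{i}" for i in range(40_000)]
second_sample = select_for_second_scoring(ids, seed=1)  # 2,000 ids (5 percent)
```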

In addition, NAEP trend scoring is used to check the consistency of scoring over time (i.e., cross-year interrater agreement). During trend scoring, the NAEP electronic scoring system presents current scorers with a pool of responses scored in a prior assessment. Comparing the current scores with those assigned in the prior assessment makes it possible to generate reports that evaluate scoring consistency over time for a specific NAEP item.
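Under the same assumptions, cross-year agreement amounts to comparing the two sets of scores on the trend pool; the review threshold below is illustrative only:

```python
def cross_year_agreement(prior_scores, current_scores):
    """Percent exact agreement between the scores assigned in a prior
    assessment and current scores on the same pool of trend responses."""
    pool = prior_scores.keys() & current_scores.keys()
    exact = sum(1 for r in pool if prior_scores[r] == current_scores[r])
    return 100.0 * exact / len(pool)

prior = {"r1": 2, "r2": 0, "r3": 3, "r4": 1}    # scores from the prior assessment
current = {"r1": 2, "r2": 1, "r3": 3, "r4": 1}  # scores from current scorers
agreement = cross_year_agreement(prior, current)
if agreement < 80.0:  # assumed review threshold, not a NAEP standard
    print(f"flag item for review: {agreement:.1f}% cross-year agreement")
```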

Backreading of current-year responses ensures frequent monitoring of scorer decision-making by supervisory staff. Backreading allows the supervisor to review responses (with scores assigned) already scored by each scorer and to confirm that each scorer is applying the scoring guide correctly. About 5 percent of each scorer's output is monitored through backreading.

During training and scoring, any changes to existing documentation are captured by scoring staff, shared across scoring teams, and incorporated into the history of the NAEP item; this history is reviewed prior to the next scoring effort.

Last updated 30 August 2021 (ML)