The NAEP reading assessment results present a broad view of how well America's students are reading—one of the most important skills that young people can acquire and develop throughout their lives.
The National Assessment Governing Board (NAGB) oversees the development of NAEP frameworks that describe the specific knowledge and skills to be assessed in each subject. Frameworks incorporate ideas and input from subject area experts, school administrators, policymakers, teachers, parents, and others. The NAEP Reading Framework describes the assessment content and how students' responses are evaluated.
The Board created the framework through a development process involving reading teachers and researchers, measurement experts, policymakers, and members of the general public; it describes the goals of the reading assessment and the kinds of exercises the assessment ought to feature. A committee of reading and measurement experts then developed the assessment exercises and scoring criteria to capture those goals. This framework shaped the 2009 and 2011 reading assessments, and the NAEP Reading Committee, guided by the framework, was instrumental in developing new material for the 2011 assessment.
The framework describes the types of texts and questions to be included in the assessment, as well as how the questions should be designed and scored. It specifies the use of both literary and informational texts.
In addition, all reading questions are aligned to cognitive targets, the kinds of thinking that underlie reading comprehension. The framework specifies that the assessment questions measure three cognitive targets for both literary and informational texts: Locate and Recall, Integrate and Interpret, and Critique and Evaluate.
The framework also calls for a systematic assessment of vocabulary.
Each of the above categories—types of texts and cognitive targets—should occupy a certain proportion of the assessment, as specified in the framework. See target and actual distribution of questions by type.
The assessment consisted of both multiple-choice and constructed-response questions. Multiple-choice questions were designed to test students' understanding of the individual texts, as well as their ability to integrate and synthesize ideas across the texts. Constructed-response questions asked students to write answers drawing on the texts they read. Each student read approximately two passages and responded to questions about what he or she read.
NAEP also gives questionnaires to teachers, students, and schools that are part of the NAEP sample. Responses to these questionnaires provide information about school policies affecting reading instruction, as well as information about schools' resources.
Nationally representative samples of 213,100 fourth-graders and 168,200 eighth-graders participated in the 2011 National Assessment of Educational Progress (NAEP) in reading. See details about participation rates and about sample size and target population.
The NAEP program does not, and is not designed to, report on the performance of individual students. Instead, groups of the student population from representative national samples are assessed. For example, NAEP reports results for male and female students, Black students and White students, and students in different regions of the country. Students are selected using a complex sampling design.
NAEP assesses representative samples of students rather than the entire population of students. The sample selection process utilizes a probability sample design in which each school and each student has a known probability of being selected (the probabilities are proportionate to the estimated number of students in the grade assessed). Samples are selected according to a multistage design, with students drawn from within sampled public and private schools nationwide. Read details of assessment sample design in the technical documentation and see a diagram of sample selection for NAEP state assessments.
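The key idea in this design is that a school's chance of selection is proportional to its estimated enrollment in the assessed grade. As a rough illustration only (not NAEP's actual procedure, and with invented school names and enrollments), probability-proportional-to-size selection can be sketched like this:

```python
import random

# Hypothetical sampling frame: school name -> estimated enrollment
# in the assessed grade. All numbers are invented for illustration.
schools = {"School A": 120, "School B": 60, "School C": 20}

def pps_sample(frame, n_draws, seed=0):
    """Draw schools with probability proportional to enrollment.

    A school with 120 students is six times as likely to be drawn on
    any given draw as a school with 20 students. Real designs draw
    without replacement and within strata; this sketch draws with
    replacement for simplicity.
    """
    rng = random.Random(seed)
    names = list(frame)
    sizes = [frame[name] for name in names]
    return rng.choices(names, weights=sizes, k=n_draws)

sample = pps_sample(schools, n_draws=2)
```

Because each school's selection probability is known, analysts can later weight results to represent the full population, as described below.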
The Common Core of Data (CCD) file, a comprehensive list of operating public schools in each jurisdiction that is compiled each school year by NCES, served as the sampling frame for the selection of public schools in each state/jurisdiction. The sample of students in districts participating in the Trial Urban District Assessment (TUDA) represents an augmentation of the sample of students selected as part of the state samples. Samples at more local geographic levels are nested within the broader samples: the TUDA samples are included as part of the corresponding state samples, just as the state samples are included as part of the national sample.
The Private School Survey (PSS), a survey of all U.S. private schools carried out biennially by the Census Bureau under contract to NCES, served as the sampling frame for private schools. While state and district results are based on samples of public schools only, the national results are based on the combined samples of public and private schools.
Because each school that participated in the assessment, and each student assessed, represents only a portion of the larger population of interest, the results are weighted to support appropriate inferences from the student samples to the respective populations from which they are drawn. Sampling weights are adjusted for the disproportionate representation of some groups in the selected sample. This includes oversampling of schools with high concentrations of students from certain racial/ethnic groups and the lower sampling rates of students who attend very small schools. See more about weighted school and student percentages for this assessment on the website of The Nation's Report Card.
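The arithmetic behind a base sampling weight is simple: each sampled student stands in for the inverse of his or her probability of selection. As a hypothetical worked example (the probabilities below are invented, and NAEP's actual weights include further adjustments for nonresponse and oversampling):

```python
# Hypothetical base weight calculation. A student's overall selection
# probability is the product of the school's selection probability and
# the student's selection probability within that school.
p_school = 0.10    # invented: school had a 10% chance of selection
p_student = 0.25   # invented: student had a 25% chance within the school

# Each sampled student represents 1 / (overall probability) students.
base_weight = 1 / (p_school * p_student)  # 1 / 0.025 = 40.0
```

In this example, each sampled student's responses would count for 40 students in population estimates; the adjustments mentioned above then modify these base weights.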
Learn more about NAEP, the nation's only ongoing assessment of what students know and can do in various subject areas.
Explore the most recent NAEP results in any subject on The Nation's Report Card website.