Overview of the Assessment
Reporting the Assessment—Scale Scores and NAEP Achievement Levels
Description of Science Performance by Item Maps for Each Grade
Results are Estimates
NAEP Reporting Groups
Cautions in Interpretations
NAEP assesses science performance by administering assessments to samples of students who are representative of the nation's students. The content of the NAEP science assessment is determined by a framework incorporating expert perspectives about science knowledge and its measurement. Read more about what the assessment measures, how it was developed, who took the assessment, and how the assessment was administered. This page describes elements of the main NAEP science assessment, and does not apply to the long-term trend science assessment (which was discontinued in 1999). Read more about the difference between the main and long-term trend NAEP assessments.
In 2011, NAEP conducted a special administration of the science assessment at grade 8 in an effort to link the NAEP scale to the Trends in International Mathematics and Science Study (TIMSS) so that states could compare the performance of their students with that of students in other countries. Because of this special administration, the 2019 assessment results at grade 8 are compared to results from the 2015, 2011, and 2009 assessments. The results from the 2011 NAEP-TIMSS Linking Study were presented in a separate report showing how the performance of eighth-grade students in states and selected districts compared to international benchmarks.
The NAEP Science Framework describes the types of questions to be included in the 2019 assessment and how they should be designed and scored. The National Assessment Governing Board oversees the development of NAEP frameworks that describe the specific knowledge and skills to be assessed in each subject. The 2019 assessment was developed using the same frameworks used in 2009 and 2015, allowing the results from the three assessment years to be compared.
In 2009, a new science framework was introduced. Because of resulting changes to the assessment, 2009 marked the start of a new trend line; therefore, beginning with the 2009 assessment, performance results cannot be compared to those from previous assessment years. Find out how the 2009 framework differs from the previous framework.
The 2009 science assessment, like other NAEP assessments since 1996, permitted test accommodations for students with disabilities (SD) and for English learners (EL). Read more about NAEP's policy of inclusion.
Differences between groups of students are discussed only if they have been determined to be statistically significant.
The results of student performance on the NAEP science assessment are presented on this website in two ways: as average scores on the NAEP science scale and as the percentages of students attaining NAEP science achievement levels. The average scale scores represent how students performed on the assessment. The NAEP achievement levels represent how that performance measured up against defined expectations for achievement. Thus, the average scale scores represent what students know and can do, while the NAEP achievement-level results indicate the degree to which student performance meets expectations of what they should know and be able to do.
Average science scale score results are based on the NAEP science scale, which ranges from 0 to 300.
In 2009, the first year of the new science framework, an overall science scale was developed at each grade. The scale at each grade ranges from 0 to 300 with a mean of 150 and a standard deviation of 35. Although the score ranges are identical, the scales were derived independently at each grade; therefore, scales cannot be compared across grades. In 2009, 2011, 2015, and 2019, the overall science scale was derived from an analysis of all science items (representing the three fields of science: Physical Science, Life Science, and Earth and Space Sciences) together.
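The scale properties above amount to a linear transformation. As a minimal sketch (not NAEP's operational procedure, which uses item response theory and plausible values), a standardized ability estimate with mean 0 and standard deviation 1 can be mapped onto a reporting scale with mean 150 and standard deviation 35, bounded to the 0–300 range:

```python
# Sketch: mapping a standardized score (mean 0, SD 1) onto a reporting
# scale with mean 150 and SD 35, clipped to the 0-300 range. The function
# name and arguments are illustrative, not part of any NAEP tool.

def to_reporting_scale(theta, mean=150.0, sd=35.0, lo=0.0, hi=300.0):
    """Linearly rescale a standardized score onto the reporting scale."""
    score = mean + sd * theta
    return max(lo, min(hi, score))

print(to_reporting_scale(0.0))   # average performance -> 150.0
print(to_reporting_scale(1.0))   # one SD above average -> 185.0
```

Because the transformation is set separately at each grade, a score of 185 at grade 4 and a score of 185 at grade 8 are not comparable, as noted above.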
Average scores for each of the three science content areas specified in the framework are also available and are reported on the 0 to 300 scale. Because subscales are set separately for each content area, comparisons cannot be made from one area to another.
NAEP achievement-level results are presented in terms of science achievement levels as adopted by the National Assessment Governing Board, and are intended to measure how well students' actual achievement matches the achievement desired of them. For each grade tested, the Governing Board has adopted three achievement levels: NAEP Basic, NAEP Proficient, and NAEP Advanced. For reporting purposes, the achievement-level cut scores are placed on the science scales, resulting in four ranges: below NAEP Basic, NAEP Basic, NAEP Proficient, and NAEP Advanced.
The Governing Board established its achievement levels in 1996 based upon the science content framework and standard-setting process involving a cross section of educators and interested citizens from across the nation who were asked to judge what students should know and be able to do relative to the content reflected in the NAEP science framework. The achievement levels and cut scores were revised in 2009 to reflect the new science framework. Explore the new NAEP achievement-level descriptions for science. As provided by law, NCES has determined that the achievement levels are to be considered developmental and should be interpreted and used with caution. However, both NCES and the Governing Board believe these performance standards are useful for understanding trends in student achievement.
The performance of fourth-, eighth-, and twelfth-graders can be illustrated by maps that position question descriptions along the NAEP science scale. The descriptions used on these maps focus on the science skill or knowledge needed to answer the question. For multiple-choice questions, the description indicates the skill or knowledge demonstrated by selection of the correct option; for constructed-response questions, the description takes into account the skill or knowledge specified by the different levels of scoring criteria for that question.
Approximately 25 to 30 science questions per grade are placed on the item map. Explore the item maps for science.
Item maps illustrate the knowledge and skills demonstrated by students performing at different points on the NAEP science scale. To provide additional context, the cut scores for the three NAEP achievement levels are marked on the item maps. The map location for each question represents the scale score at which students had a specified probability of answering the question successfully: 65 percent for a constructed-response question, 74 percent for a four-option multiple-choice question, and 72 percent for a five-option multiple-choice question. For constructed-response questions, responses may be completely or partially correct; therefore, a question can map to several points on the scale.
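The mapping logic can be sketched under a hypothetical two-parameter logistic (2PL) item model: the item's map location is the scale score at which the model-implied probability of a correct response equals the response-probability criterion. The parameter values below (discrimination a, difficulty b) are invented for illustration; NAEP's operational item calibration differs in detail.

```python
import math

# Sketch: locating an item on the scale at its response-probability (RP)
# criterion, assuming a hypothetical 2PL item characteristic curve.

def p_correct(score, a, b):
    """2PL probability of a correct response at a given scale score."""
    return 1.0 / (1.0 + math.exp(-a * (score - b)))

def map_location(a, b, rp):
    """Scale score at which p_correct equals the RP criterion (e.g., .74)."""
    return b + math.log(rp / (1.0 - rp)) / a

# A four-option multiple-choice item maps at the score where P = .74:
loc = map_location(a=0.05, b=150.0, rp=0.74)
print(round(loc, 1))                              # map location on the scale
print(round(p_correct(loc, a=0.05, b=150.0), 2))  # -> 0.74
```

Because the RP criterion differs by item type (65, 72, or 74 percent), two items of identical difficulty but different formats can map to slightly different scale points.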
The average scores and percentages presented on this website are estimates because they are based on representative samples of students rather than on the entire population of students. Moreover, the collection of subject-area questions used at each grade level is but a sample of the many questions that could have been asked. As such, NAEP results are subject to a measure of uncertainty, reflected in the standard error of the estimates. The standard errors for the estimated scale scores and percentages in the figures and tables presented on this website are available through the NAEP Data Explorer.
Results are provided for groups of students defined by shared characteristics—race/ethnicity, gender, eligibility for free/reduced-price school lunch, highest level of parental education, type of school, charter school, type of school location, region of the country, status as students with disabilities, and status as students identified as English learners. Based on participation rate criteria, results are reported for subpopulations only when sufficient numbers of students and adequate school representation are present. The minimum requirement is at least 62 students in a particular group from at least five
primary sampling units (PSUs). However, the data for all students, regardless of whether their group was reported separately, were included in computing overall results. Explanations of the reporting groups are presented below.
Prior to 2011, student race/ethnicity was obtained from school records and reported for the six mutually exclusive categories shown below:
Students who identified with more than one of the other five categories were classified as “other” and were included as part of the "unclassified" category along with students who had a background other than the ones listed or whose race/ethnicity could not be determined.
In compliance with new standards from the U.S. Office of Management and Budget for collecting and reporting data on race/ethnicity, additional information was collected in 2011 so that results could be reported separately for Asian students, Native Hawaiian/Other Pacific Islander students, and students identifying with two or more races. Beginning in 2011, all of the students participating in NAEP were identified by school reports as one of the seven racial/ethnic categories listed below:
Students identified as Hispanic were classified as Hispanic in 2011 even if they were also identified with another racial/ethnic group. Students who identified with two or more of the other racial/ethnic groups (e.g., White and Black) would have been classified as “other” and reported as part of the "unclassified" category prior to 2011, and from 2011 on classified as “Two or More Races."
When comparing the results for racial/ethnic groups from 2011, 2015, and 2019 to earlier assessment years, the data for Asian and Native Hawaiian/Other Pacific Islander students were combined into a single Asian/Pacific Islander category. Information based on student self-reported race/ethnicity will continue to be reported in the NAEP Data Explorer.
Results are reported separately for males and females.
As part of the Department of Agriculture's National School Lunch Program (NSLP), schools can receive cash subsidies and donated commodities in return for offering free or reduced-price lunches to eligible children. Based on available school records, students were classified as either currently eligible for free/reduced-price school lunch or not eligible. Eligibility for free and reduced-price lunches is determined by students' family income in relation to the federally established poverty level. Students whose family income is at or below 130 percent of the poverty level qualify to receive free lunch, and students whose family income is between 130 percent and 185 percent of the poverty level qualify to receive reduced-price lunch. For the period July 1, 2018 through June 30, 2019, for a family of four, 130 percent of the poverty level was $32,630 and 185 percent was $46,435 in most states. The classification applies only to the school year when the assessment was administered (i.e., the 2018–2019 school year) and is not based on eligibility in previous years. If school records were not available, the student was classified as "Information not available." If the school did not participate in the program, all students in that school were classified as "Information not available." Because of the improved quality of the data on students' eligibility for NSLP, the percentage of students for whom information was not available has decreased compared to the percentages reported prior to the 2003 assessment. As a result of the passage of the Healthy, Hunger-Free Kids Act of 2010, schools can use a new universal meal service option, the "Community Eligibility Provision" (CEP). Through CEP, eligible schools can provide meal service to all students at no charge, regardless of economic status and without the need to collect eligibility data through household applications.
CEP became available nationwide in the 2014–2015 school year; as a result, the percentage of students in many states categorized as eligible for NSLP may have increased in comparison to 2013 due to this provision. Because students' eligibility for NSLP may be underreported at grade 12, the results are not included in the 2019 report. Therefore, readers should interpret NSLP trend results with caution. See the proportion of students in each category at
grade 4 and
grade 8 in the NAEP Data Explorer.
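The eligibility rules above reduce to a simple income comparison. As an illustration using the 2018–19 thresholds cited above for a family of four in most states ($32,630 at 130 percent of the poverty level and $46,435 at 185 percent), the classification could be sketched as follows; the function and constant names are hypothetical, not an official API.

```python
# Sketch: NSLP eligibility classification from annual family income,
# using the 2018-19 thresholds for a family of four in most states.

FREE_THRESHOLD = 32_630      # 130% of the poverty level
REDUCED_THRESHOLD = 46_435   # 185% of the poverty level

def nslp_category(family_income):
    """Classify a student's NSLP eligibility from family income."""
    if family_income <= FREE_THRESHOLD:
        return "free lunch"
    if family_income <= REDUCED_THRESHOLD:
        return "reduced-price lunch"
    return "not eligible"

print(nslp_category(30_000))  # -> free lunch
print(nslp_category(40_000))  # -> reduced-price lunch
print(nslp_category(50_000))  # -> not eligible
```

In practice, NAEP does not compute eligibility from income; it relies on school records, which is why the "Information not available" category exists.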
Parents' highest level of education is defined by the highest level reported by eighth-graders and twelfth-graders for either parent. Fourth-graders' replies to this question were not reported because their responses in previous studies were highly variable, and a large percentage of them chose the "I don't know" option. Parental educational attainment is one component used to measure students' socioeconomic status (SES).
The national results are based on a representative sample of students in both public schools and nonpublic schools. Nonpublic schools include private schools, Bureau of Indian Affairs schools, and Department of Defense schools. Private schools include Catholic, Conservative Christian, Lutheran, and other private schools. Results are reported for private schools overall, as well as disaggregated by Catholic and other private schools. The school participation rates for private schools overall in 2015 did not meet the 70 percent criterion for reporting at all three grades, so their results are not presented in this report. The results for Catholic schools, however, met the criterion and are presented in the report. The state results are based on public school students only.
A pilot study of America's charter schools and their students was conducted as part of the 2003 NAEP assessments in reading and mathematics at grade 4. Results are available for charter schools starting in 2003 at grade 4, 2005 at grade 8, and 2009 at grade 12. Results for this variable are reported for public school students.
NAEP results are reported for four mutually exclusive categories of school location: city, suburb, town, and rural. The categories are based on standard definitions established by the Federal Office of Management and Budget using population and geographic information from the U.S. Census Bureau. Schools are assigned to these categories in the NCES Common Core of Data based on their physical address.
The classification system was revised for 2007 and 2009. The new locale codes are based on an address's proximity to an urbanized area (a densely settled core with densely settled surrounding areas). This is a change from the original system based on metropolitan statistical areas. To distinguish the two systems, the new system is referred to as "urban-centric locale codes." The urban-centric locale code system classifies territory into four major types: city, suburban, town, and rural. Each type has three subcategories. For city and suburb, these are gradations of size—large, midsize, and small. Towns and rural areas are further distinguished by their distance from an urbanized area. They can be characterized as fringe, distant, or remote.
Prior to 2003, NAEP results were reported for four NAEP-defined regions of the nation: Northeast, Southeast, Central, and West. As of 2003, to align NAEP with other federal data collections, NAEP analysis and reports have used the U.S. Census Bureau's definition of "region." The four regions defined by the U.S. Census Bureau are Northeast, South, Midwest, and West. The Central region used by NAEP before 2003 contained the same states as the Midwest region defined by the U.S. Census. The former Southeast region consisted of the states in the Census-defined South minus Delaware, the District of Columbia, Maryland, Oklahoma, Texas, and the section of Virginia in the District of Columbia metropolitan area. The former West region consisted of Oklahoma, Texas, and the states in the Census-defined West. The former Northeast region consisted of the states in the Census-defined Northeast plus Delaware, the District of Columbia, Maryland, and the section of Virginia in the District of Columbia metropolitan area. Therefore, trend data by region are not provided for the 2005 science assessment. The table below shows how states are subdivided into these Census regions. All 50 states and the District of Columbia are listed. Other jurisdictions, including the Department of Defense Educational Activity schools, are not assigned to any region.
SOURCE: U.S. Department of Commerce Economics and Statistics Administration.
Results are reported for students who were identified by school records as having a disability. A student with a disability may need specially designed instruction to meet his or her learning goals. A student with a disability will usually have an Individualized Education Program (IEP), which guides his or her special education instruction. Students with disabilities are often referred to as special education students and may be classified by their school as learning disabled (LD) or emotionally disturbed (ED). The goal of NAEP is that students who are capable of participating meaningfully in the assessment are assessed, but some students with disabilities selected by NAEP may not be able to participate, even with the accommodations provided. Beginning in 2009, NAEP disaggregated students with disabilities from students who were identified under section 504 of the Rehabilitation Act of 1973. The results for SD are based on students who were assessed and cannot be generalized to the total population of such students.
Results are reported for students who were identified by school records as being English learners. (Note that English learners were previously referred to as limited English proficient, or LEP.)
NAEP has established policies and procedures to maximize the inclusion of all students in the assessment. Every effort is made to ensure that all selected students who are capable of participating meaningfully in the assessment are assessed. While some students with disabilities (SD) and/or English learner (EL) students can be assessed without any special procedures, others require accommodations to participate in NAEP. Still other SD and/or EL students selected by NAEP may not be able to participate. Local school authorities determine whether SD/EL students require accommodations or shall be excluded because they cannot be assessed. The percentage of SD and/or EL students who are excluded from NAEP assessments varies from one jurisdiction to another and within a jurisdiction over time. Read more about the potential effects of exclusion rates on assessment results.
See additional information about the percentages of students with disabilities and English learners identified, excluded, and assessed at the national and state level.
See the types of accommodations permitted for students with disabilities and/or English learners at the national and state levels.
Exclusion rates for other subjects, as well as rates of use of specific accommodations, are available.
The differences between scale scores and between percentages that are discussed in the results take into account the standard errors associated with the estimates. Comparisons are based on statistical tests that consider both the magnitude of the difference between the group average scores or percentages and the standard errors of those statistics. Throughout the results, differences between scores or between percentages are discussed only when they are significant from a statistical perspective.
All differences reported are significant at the 0.05 level with appropriate adjustments for multiple comparisons. The term "significant" is not intended to imply a judgment about the absolute magnitude or the educational relevance of the differences. It is intended to identify statistically dependable population differences to help inform dialogue among policymakers, educators, and the public.
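The kind of test described above can be sketched as a comparison of two group average scale scores using their standard errors. This minimal example assumes approximate normality of the estimates and independent groups, and the scores and standard errors are invented for illustration; NAEP's operational procedures also adjust for multiple comparisons and account for dependent samples where needed.

```python
import math

# Sketch: two-sided significance test for the difference between two
# independent group estimates, each with its own standard error.

def two_sided_p(score1, se1, score2, se2):
    """Two-sided p-value for the difference between two estimates."""
    se_diff = math.sqrt(se1**2 + se2**2)          # SE of the difference
    z = (score1 - score2) / se_diff               # standardized difference
    return math.erfc(abs(z) / math.sqrt(2))       # normal two-sided p-value

# Hypothetical group averages of 154 (SE 0.9) and 150 (SE 1.1):
p = two_sided_p(154.0, 0.9, 150.0, 1.1)
print(p < 0.05)  # the 4-point difference is significant at the .05 level
```

Note that the standard error of the difference combines both groups' uncertainty, which is why a difference that looks large can still fail to reach significance for small groups with large standard errors.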
Users of this website are cautioned against interpreting NAEP results as implying causal relations. Inferences related to student group performance or to the effectiveness of public and nonpublic schools, for example, should take into consideration the many socioeconomic and educational factors that may also have an impact on performance.
The NAEP science scale makes it possible to examine relationships between students' performance and various background factors measured by NAEP. However, a relationship that exists between achievement and another variable does not reveal its underlying cause, which may be influenced by a number of other variables. Similarly, the assessments do not reflect the influence of unmeasured variables. The results are most useful when they are considered in combination with other knowledge about the student population and the educational system, such as trends in instruction, changes in the school-age population, and societal demands and expectations.
A caution is also warranted for some small population group estimates. At times in the results pages, smaller population groups show very large increases or decreases across years in average scores. However, such score gains must often be interpreted with extreme caution. For one thing, the effects of exclusion-rate changes may be more marked for small groups than they are for the whole population. Also, the standard errors around the score estimates for small groups are often quite large, which in turn means the standard error around the gain is also large.