Annual Reports and Information Staff (Annual Reports)

Reader’s Guide

The Condition of Education contains indicators on the state of education in the United States, from prekindergarten through postsecondary education, as well as labor force outcomes and international comparisons. Readers can browse the Condition of Education online and download PDFs of individual indicators.

To increase usability for a wider audience, the more succinct Report on the Condition of Education, which highlights and synthesizes key findings from the Condition of Education indicator system, is available in PDF format. Beyond the core topics represented in the summary report, the online indicator system also includes sections focused on School Crime and Safety and on the condition of education by geographic locale, called Education Across America. School crime and safety topics are also highlighted separately in the Report on the Indicators of School Crime and Safety, which is available in PDF format.

Analyses throughout the Condition of Education are generally supported by tables in the Digest of Education Statistics, which are available on the Reference Tables page and hyperlinked in each indicator. For more information about data sources, see the Data Sources and Estimates section below.

Data Sources and Estimates

The data in these indicators were obtained from many different sources—which collect information from respondents throughout the education system, including students and teachers, state education agencies, elementary and secondary schools, and colleges and universities—using surveys and compilations of administrative records. Users should be cautious when comparing data from different sources. Differences in aspects such as procedures, timing, question phrasing, and interviewer training can affect the comparability of results across data sources. Unless otherwise noted, data are for the 50 states and the District of Columbia.

Most indicators in the Condition of Education summarize data from surveys conducted by the National Center for Education Statistics (NCES) or by the U.S. Census Bureau with support from NCES. Brief descriptions of the major NCES surveys used in these indicators can be found in the Guide to Sources. More detailed descriptions can be obtained on the NCES website under “Surveys and Programs.”

The Guide to Sources also includes information on non-NCES sources used to develop indicators, such as the Census Bureau’s Current Population Survey (CPS).

Data for the Condition of Education indicators are obtained from two types of surveys: universe surveys and sample surveys. In universe surveys, information is collected from every member of the population. For example, in a survey regarding expenditures of public elementary and secondary schools, data would be obtained from each school district in the United States. When data from an entire population are available, estimates of the total population or a subpopulation are made by simply summing the units in the population or subpopulation. As a result, there is no sampling error, and observed differences are reported as true.

Since universe surveys are often expensive and time consuming, many surveys collect data from a sample of the population of interest (sample surveys). For example, the National Assessment of Educational Progress (NAEP) assesses a representative sample of students rather than the entire population of students. When a sample survey is used, statistical uncertainty is introduced because the data come from only a portion of the entire population. This statistical uncertainty must be considered when reporting estimates and making comparisons. For more information, please see the section on standard errors below.

All data collections are also subject to nonsampling error, including potential errors in design, reporting, and processing as well as error due to nonresponse. To the extent possible, these nonsampling errors are kept to a minimum by methods built into the study design, data collection procedures, and data processing. In general, however, the effects of nonsampling error are more difficult to gauge than are those produced by sampling variability.

Various types of statistics derived from universe and sample surveys are reported in the Condition of Education. Many indicators report the size of a population or subpopulation, and the size of a subpopulation is often expressed as a percentage of the total population. In addition, the average (or mean) value of some characteristic of the population or subpopulation may be reported. The average is obtained by summing the values for all members of the population and dividing the sum by the size of the population. An example is the annual average salaries of full-time instructional faculty at degree-granting postsecondary institutions (the sum of wages divided by the number of faculty members). Another measure that is sometimes used is the median. The median is the midpoint value of the distribution of a characteristic, meaning that 50 percent of the population is estimated to fall at or above this level and 50 percent of the population is estimated to fall at or below this level. An example is the median annual earnings of young adults who are full-time, full-year workers (a median value of $30,000 would imply that 50 percent of full-time full-year young adult workers earn $30,000 or less and the other 50 percent earn $30,000 or more).
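As a concrete illustration of the mean and the median described above, the short Python sketch below uses invented salary figures (not NCES data):

```python
import statistics

# Hypothetical annual salaries for a small population (illustrative only)
salaries = [52_000, 61_000, 58_000, 95_000, 49_000]

# Mean: the sum of the values divided by the number of members
mean_salary = sum(salaries) / len(salaries)

# Median: the midpoint of the sorted distribution; half the values fall
# at or below it and half at or above it
median_salary = statistics.median(salaries)

print(mean_salary)    # 63000.0
print(median_salary)  # 58000
```

Note that the single large salary (95,000) pulls the mean above the median, which is why the two measures can tell different stories about the same distribution.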

Standard Errors

Using estimates calculated from data based on a sample of the population requires consideration of several factors before the estimates become meaningful. When using data from a sample, some amount of error will always be present in estimations of characteristics of the total population or subpopulation because the data are available from only a portion of the total population. Consequently, data from samples can provide only an approximation of the true or actual value. This uncertainty is often represented as the margin of error of an estimate—that is, the range of values around the estimate expected to contain the true or actual value—which depends on several factors, such as the amount of variation in the responses, the size and representativeness of the sample, and the size of the subgroup for which the estimate is computed. The magnitude of these factors is measured by what statisticians call the standard error of an estimate. A larger standard error typically indicates that the estimate is less precise, while a smaller standard error typically indicates that the estimate is more precise. To estimate the margin of error, the standard error is scaled based on the desired level of confidence in the estimate. Throughout the Condition of Education, margins of error are produced based on a 95 percent level of confidence.
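The relationship between a standard error and a 95 percent margin of error can be sketched as follows; the estimate and standard error here are hypothetical, and 1.96 is the critical value from the standard normal distribution for a 95 percent level of confidence:

```python
# Hypothetical sample estimate (e.g., a percentage) and its standard error
estimate = 42.0
standard_error = 1.3

# At a 95 percent level of confidence, the standard error is scaled by
# the normal critical value 1.96 to obtain the margin of error
Z_95 = 1.96
margin_of_error = Z_95 * standard_error  # ≈ 2.548

# The range of values around the estimate expected to contain the
# true or actual population value
lower = estimate - margin_of_error  # ≈ 39.45
upper = estimate + margin_of_error  # ≈ 44.55
print(lower, upper)
```

A smaller standard error shrinks this interval, reflecting a more precise estimate.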

When data from sample surveys are reported, the standard error is calculated for each estimate. The standard errors for all estimated totals, means, medians, or percentages are reported in the reference tables.

In order to caution the reader when interpreting findings that may be imprecise in the indicators, estimates from sample surveys are flagged with a “!” when the standard error is between 30 and 50 percent of the magnitude of an estimate, and estimates are suppressed and replaced with a “‡” when the standard error is 50 percent of the estimate or greater.
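A minimal sketch of this flagging rule, expressed in terms of the ratio of the standard error to the estimate (the coefficient of variation); the function name and structure are illustrative, not NCES production code:

```python
def flag_estimate(estimate: float, standard_error: float) -> str:
    """Return the reliability flag for a sample estimate, following the
    rule described in the text."""
    if estimate == 0:
        return "‡"  # no coefficient of variation can be computed
    cv = standard_error / abs(estimate)  # coefficient of variation
    if cv >= 0.50:
        return "‡"  # reporting standards not met: estimate is suppressed
    if cv >= 0.30:
        return "!"  # interpret data with caution
    return ""       # no flag needed

print(flag_estimate(10.0, 1.0))  # ""  (CV = 10 percent)
print(flag_estimate(10.0, 3.5))  # "!" (CV = 35 percent)
print(flag_estimate(10.0, 6.0))  # "‡" (CV = 60 percent)
```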

Data Analysis and Interpretation

When estimates are from a sample, caution is warranted when drawing conclusions about whether one estimate is different in comparison to another; whether a time series of estimates is increasing, decreasing, or staying the same; or whether two variables are associated. Although one estimate may appear to be larger than another, a statistical test may find that the apparent difference between them is not “statistically significant” due to the uncertainty around the estimates. In this case, the estimates are described as having no measurable difference.

Whether differences in means or percentages are statistically significant can be determined using the standard errors of the estimates and their associated margins of error. In the indicators in the Condition of Education and other NCES reports, a difference between two estimates is considered statistically significant when it is greater than their combined margin of error, based on a statistical significance level of .05.
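For two independent estimates, such a comparison can be sketched as follows; the figures are hypothetical, the function name is illustrative, and the combined standard error is the square root of the sum of the squared standard errors:

```python
import math

def measurably_different(est1, se1, est2, se2, z=1.96):
    """Return True if two independent estimates differ by more than
    their combined margin of error at a 95 percent level of confidence."""
    combined_se = math.sqrt(se1**2 + se2**2)
    return abs(est1 - est2) > z * combined_se

# Hypothetical percentages with their standard errors
print(measurably_different(54.0, 1.2, 50.0, 1.1))  # True: the gap exceeds the margin
print(measurably_different(52.0, 1.2, 50.0, 1.1))  # False: no measurable difference
```

In the second call, the 2-point gap falls inside the roughly 3.2-point combined margin of error, so the estimates would be described as having no measurable difference.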

For all indicators that report estimates based on samples, differences between estimates (including trends over time) are stated only when they are statistically significant, based on a 95 percent level of confidence. To determine whether the difference between two estimates is statistically significant, most indicators use two-tailed t tests at the .05 level, with an adjustment if the samples being compared are dependent. The analyses are not adjusted for multiple comparisons, with the exception of indicators that use NAEP data. Analyses in NAEP indicators are typically conducted using the NAEP Data Explorer, which makes adjustments for comparisons involving a variable with more than two categories. The NAEP Data Explorer makes such adjustments using the Benjamini-Hochberg False Discovery Rate. In indicator text, differences that meet these thresholds of statistical significance are often referred to with the terms “higher” and “lower.” When the variables to be tested are postulated to form a trend over time, the relationship may be tested using linear regression or ANOVA trend analyses instead of a series of t tests. In indicator text, statistically significant trends over time are often referred to as “increases” or “decreases.” Indicators that use other methods of statistical comparison include a separate technical notes section. For more information on data analysis, see the NCES Statistical Standards, Standard 5-1.
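As background on the adjustment mentioned above, the Benjamini-Hochberg procedure sorts the p values from a set of comparisons, finds the largest rank i for which p(i) ≤ (i/m)·α, and rejects all hypotheses up to that rank. The sketch below is a generic implementation of the procedure with invented p values, not the NAEP Data Explorer's code:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a list of booleans: True where the hypothesis is rejected
    after controlling the false discovery rate at level alpha."""
    m = len(p_values)
    # Sort the p values while remembering their original positions
    order = sorted(range(m), key=lambda i: p_values[i])
    # Largest 1-based rank k with p_(k) <= (k / m) * alpha
    cutoff_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            cutoff_rank = rank
    # Reject every hypothesis whose p value ranks at or below the cutoff
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= cutoff_rank:
            reject[idx] = True
    return reject

# Hypothetical p values from five pairwise comparisons
print(benjamini_hochberg([0.001, 0.30, 0.04, 0.012, 0.25]))
# [True, False, False, True, False]
```

Note that 0.04 is below the unadjusted .05 level but is not rejected here, which is the multiple-comparison correction at work.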

Multivariate analyses, such as ordinary least squares (OLS) regression models, provide information on whether the relationship between an independent variable and an outcome measure (such as group differences in the outcome measure) persists after taking into account other variables (such as student, family, and school characteristics). For indicators that include a regression analysis, multiple categorical or continuous independent variables are entered simultaneously. A significant regression coefficient indicates an association between the dependent (outcome) variable and the independent variable, after controlling for other independent variables included in the regression analysis.
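The mechanics of fitting such a model can be sketched generically. The function below solves the ordinary least squares normal equations for a small dataset with invented values; it illustrates OLS with multiple independent variables in general and is not the estimation code used for the indicators:

```python
def ols(X, y):
    """Fit y = b0 + b1*x1 + b2*x2 + ... by solving the normal
    equations (X'X)b = X'y. X is a list of predictor-value rows;
    an intercept column of ones is added automatically."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    # Build X'X and X'y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back-substitution
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j] for j in range(i + 1, k))) / xtx[i][i]
    return beta

# Invented data generated exactly from y = 2 + 3*x1 - 1*x2, so the
# fitted coefficients recover those values
X = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
y = [2, 5, 1, 4, 5]
print(ols(X, y))  # approximately [2.0, 3.0, -1.0]
```

The fitted coefficient on x1 (about 3.0) is the association between x1 and y after controlling for x2, which is the interpretation described in the paragraph above.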

Data presented in the indicators typically do not investigate more complex hypotheses or support causal inferences. We encourage readers who are interested in more complex questions and in-depth analyses to explore other NCES resources, including publications, online data tools, and public- and restricted-use datasets.

A number of considerations influence the ultimate selection of the data years to feature in the indicators. To make analyses as timely as possible, the latest year of available data (at the time of the development of the indicator) is shown. The choice of comparison years is sometimes based on the need to show the earliest available survey year, as in the case of the NAEP and the international assessment surveys. In the case of surveys with long time frames, such as surveys measuring enrollment, indicators in the Condition of Education generally provide the trend for the most recent decade of data. In the figures and tables of the indicators, intervening years are selected in increments in order to show the general trend. The narrative for the indicators typically compares the most current year’s data with those from the initial year of the presented trend. Where applicable, the narrative may also note years in which the data begin to diverge from previous trends.

Rounding

All calculations in the indicators are based on unrounded estimates. Therefore, the reader may find that a calculation cited in the text or figure, such as a difference or a percentage change, may not be identical to the calculation obtained by using the rounded values shown in the accompanying tables. Although values reported in the reference tables are generally rounded to one decimal place (e.g., 76.5 percent), values reported in each indicator are generally rounded to whole numbers (with any value of 0.50 or above rounded to the next highest whole number). Due to rounding, cumulative percentages may sometimes equal 99 or 101 percent rather than 100 percent. While the data labels on the figures have been rounded to whole numbers, the graphical presentation of these data is based on the unrounded estimates.
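The effect described above can be sketched with hypothetical figures: a difference computed from unrounded estimates need not match the difference computed from the rounded values a reader sees:

```python
# Hypothetical unrounded estimates, as used in all calculations
a, b = 76.47, 74.83

# Values as displayed: reference tables round to one decimal place,
# while indicator text rounds to whole numbers (0.50 rounds up)
table_a, table_b = round(a, 1), round(b, 1)  # 76.5 and 74.8
text_a, text_b = int(a + 0.5), int(b + 0.5)  # 76 and 75

# The difference cited in an indicator is based on unrounded values...
print(round(a - b, 1))              # 1.6
# ...which need not equal differences taken from the displayed values
print(round(table_a - table_b, 1))  # 1.7
print(text_a - text_b)              # 1
```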

Race and Ethnicity

The Office of Management and Budget (OMB) is responsible for the standards that govern the categories used to collect and present federal data on race and ethnicity. The OMB revised the guidelines on racial/ethnic categories used by the federal government in October 1997, with a January 2003 deadline for implementation.1 The revised standards require a minimum of these five categories for data on race: American Indian or Alaska Native; Asian; Black or African American; Native Hawaiian or Other Pacific Islander; and White. The standards also require the collection of data on ethnicity categories: at a minimum, Hispanic or Latino and Not Hispanic or Latino. It is important to note that Hispanic origin is an ethnicity rather than a race, and, therefore, persons of Hispanic origin may be of any race. Origin can be viewed as the heritage, nationality group, lineage, or country of birth of the person or the person’s parents or ancestors before their arrival in the United States. The race categories American Indian or Alaska Native; Asian; Black or African American; Native Hawaiian or Other Pacific Islander; and White, as presented in these indicators, exclude persons of Hispanic origin unless noted otherwise.

The categories are defined as follows:

American Indian or Alaska Native: A person having origins in any of the original peoples of North and South America (including Central America) and maintaining tribal affiliation or community attachment.

Asian: A person having origins in any of the original peoples of the Far East, Southeast Asia, or the Indian subcontinent, including Cambodia, China, India, Japan, Korea, Malaysia, Pakistan, the Philippine Islands, Thailand, and Vietnam.

Black or African American: A person having origins in any of the black racial groups of Africa.

Native Hawaiian or Other Pacific Islander: A person having origins in any of the original peoples of Hawaii, Guam, Samoa, or other Pacific Islands.

White: A person having origins in any of the original peoples of Europe, the Middle East, or North Africa.

Hispanic or Latino: A person of Mexican, Puerto Rican, Cuban, South or Central American, or other Spanish culture or origin, regardless of race.

Within these indicators, some of the category labels have been shortened in the text, tables, and figures for ease of reference. American Indian or Alaska Native is denoted as American Indian/Alaska Native (except when separate estimates are available for American Indians alone or Alaska Natives alone); Black or African American is shortened to Black; Native Hawaiian or Other Pacific Islander is shortened to Pacific Islander; and Hispanic or Latino is shortened to Hispanic.

The indicators in the Condition of Education draw from a number of different data sources. Many are federal surveys that collect data using the OMB standards for racial/ethnic classification described above; however, some sources have not fully adopted the standards, and some indicators include data collected prior to the adoption of the standards. Data for Asian and Pacific Islander persons are reported as one category in indicators for which the data were not collected separately for the two groups.

Some of the surveys from which data are presented in these indicators give respondents the option of selecting an “other” race category, a “Two or more races” or “multiracial” category, or both. Where possible, indicators present data on the “Two or more races” category; in some cases, however, this category may not be separately shown because the information was not collected or because of other data issues. In general, the “other” category is not separately shown. In some surveys, respondents are not given the option to select more than one race; in these surveys, respondents of Two or more races must select a single race category. Any comparisons between data from surveys that offer the option to select more than one race and surveys that do not should take into account the potential for bias if members of one racial group are more likely than members of other racial groups to identify themselves as “Two or more races.”2 For postsecondary data, U.S. nonresident students are counted separately and are therefore not included in any racial/ethnic category.

Limitations of the Data

The relatively small sizes of the American Indian/Alaska Native and Pacific Islander populations pose many measurement difficulties when conducting statistical analyses. Even in larger surveys, the numbers of American Indian/Alaska Native and Pacific Islander respondents included in a sample are often small, reducing the reliability of results. Survey data for these two populations therefore often have somewhat higher standard errors than data for other racial/ethnic groups. Because of these large standard errors, differences that seem substantial are often not statistically significant and, therefore, are not cited in the text.

Data on American Indian/Alaska Native persons are often subject to inconsistencies in how respondents identify their race/ethnicity. According to research on the collection of race/ethnicity data conducted by the Bureau of Labor Statistics in 1995, American Indian or Alaska Native is the least stable self-identification. The racial/ethnic categories presented to a respondent, and the way in which the question is asked, can influence the response, especially for individuals who consider themselves to be of mixed race or ethnicity.

As mentioned above, data for Asian and Pacific Islander persons are reported as one category in indicators for which the data were not collected separately for the two groups. This combined category can sometimes mask significant differences between subgroups. For example, prior to 2011, NAEP collected data that did not allow for separate reporting of estimates for Asian and Pacific Islander students. Information from the Digest of Education Statistics 2022 (table 101.20), based on the Census Bureau’s Current Population Reports, indicates that 96 percent of all Asian/Pacific Islander 5- to 24-year-olds are Asian. The combined Asian/Pacific Islander category is therefore more representative of those who are Asian than of those who are Pacific Islander.

Symbols

In accordance with the NCES Statistical Standards, many tables in this volume use special symbols to alert the reader to various statistical notes. These symbols and their meanings are as follows:

— Not available.
† Not applicable.
# Rounds to zero.
! Interpret data with caution. The coefficient of variation (CV) for this estimate is between 30 and 50 percent.
‡ Reporting standards not met. Either there are too few cases for a reliable estimate or the coefficient of variation (CV) for this estimate is 50 percent or greater.
* p < .05 significance level.


1 In March 2024, the OMB announced revisions to Statistical Policy Directive No. 15: Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity (SPD 15). The updates include directives to use a combined race/ethnicity question; the addition of a new “Middle Eastern and North African” minimum reporting category; and a requirement to collect detailed race/ethnicity responses. As federal agencies begin to implement the updated standards, future editions of the Condition of Education will reflect the updated data on race and ethnicity, once available.

2 See Parker, J.D., Schenker, N., Ingram, D.D., Weed, J.A., Heck, K.E., and Madans, J.H. (2004). Bridging Between Two Standards for Collecting Information on Race and Ethnicity: An Application to Census 2000 and Vital Rates. Public Health Reports, 119(2): 192–205.