Annual Reports and Information Staff (Annual Reports)

Guide to Sources

National Center for Education Statistics (NCES)

The Baccalaureate and Beyond Longitudinal Study (B&B) is based on the National Postsecondary Student Aid Study (NPSAS) and provides information concerning education and work experience after completing a bachelor’s degree. A special emphasis of B&B is on those entering the teaching profession. B&B provides cross-sectional information 1 year after bachelor’s degree completion (comparable to the information that was provided in the Recent College Graduates study), while at the same time providing longitudinal data concerning entry into and progress through graduate-level education and the workforce, income, and debt repayment. While follow-ups involving high school cohorts or even college-entry cohorts may capture some of this information, these cohorts have limited numbers who actually complete a bachelor’s degree and continue their graduate education, which limits complex analyses for subgroups. Also, these cohorts are not representative of all bachelor’s degree recipients.

The first B&B followed NPSAS baccalaureate degree completers for a 10-year period after completion, beginning with NPSAS:93. About 11,000 students who completed their degrees in the 1992–93 academic year were included in the first B&B cohort (B&B:93). The first follow-up of this cohort (B&B:93/94) occurred 1 year later. In addition to collecting student data, B&B:93/94 collected postsecondary transcripts covering the undergraduate period, which provided complete information on progress and persistence at the undergraduate level. The second follow-up of this cohort (B&B:93/97) took place in spring 1997 and gathered information on employment history, family formation, and enrollment in graduate programs. The third follow-up (B&B:93/03) occurred in 2003 and provided information concerning graduate study and long-term employment experiences after degree completion.

The second B&B cohort (B&B:2000), which was associated with NPSAS:2000, included 11,700 students who completed their degrees in the 1999–2000 academic year. The first and only follow-up survey of this cohort was conducted in 2001 (B&B:2000/01) and focused on time to degree completion, participation in postbaccalaureate education and employment, and the activities of newly qualified teachers.

The third B&B cohort (B&B:08), which is associated with NPSAS:08, included 18,000 students who completed their degrees in the 2007–08 academic year. The first follow-up took place in 2009 (B&B:08/09), and the second follow-up took place in 2012 (B&B:08/12). The report Baccalaureate and Beyond: A First Look at the Employment Experiences and Lives of College Graduates, 4 Years On (B&B:08/12) (NCES 2014-141) presents findings based on data from the second follow-up. It examines bachelor’s degree recipients’ labor market experiences and enrollment in additional postsecondary degree programs through the 4th year after graduation. In addition, 2008/12 Baccalaureate and Beyond Longitudinal Study (B&B:08/12) Data File Documentation (NCES 2015-141) describes the universe, methods, and data collection procedures used in the second follow-up. A third and final follow-up (B&B:08/18) to the third B&B cohort was conducted in 2018 and early 2019. The report Baccalaureate and Beyond (B&B:08/18): First Look at the 2018 Employment and Educational Experiences of 2007–08 College Graduates (NCES 2021-241) presents findings based on data from the third follow-up. It explores bachelor’s degree recipients’ labor market experiences, financial debt and repayment, and postbaccalaureate enrollment through the 10th year after graduation.

The B&B:16 cohort included 19,500 students who completed their bachelor's degrees during the 2015–16 academic year at any Title IV eligible postsecondary institution in the United States that was eligible for inclusion in NPSAS:16. Students in the sample for the first follow-up (B&B:16/17) were identified through a process involving the selection of the NPSAS:16 sample of institutions, the selection of students within these institutions, and the identification of students within the institutions who met the criteria for inclusion in the B&B:16 cohort. The report Baccalaureate and Beyond (B&B:16/17): A First Look at the Employment and Educational Experiences of College Graduates, 1 Year Later (NCES 2019-241) provides information on the methodology of the survey, as well as discussion regarding some of the topics covered by the survey items, such as undergraduate time to degree and student loan borrowing, postbaccalaureate enrollment, employment outcomes, and steps toward a teaching career.

The second follow-up in the study (B&B:16/20) was conducted 4 years after the respondents earned their bachelor’s degrees. The B&B:16/20 sample was a subset of the B&B:16/17 sample: B&B:16/17 sample members who were deemed not to have completed the requirements for their bachelor’s degree in the 2015–16 academic year were ineligible and therefore excluded from the B&B:16/20 sample. The report Baccalaureate and Beyond (B&B:16/20): A First Look at the 2020 Employment and Education Experiences of 2015–16 College Graduates (NCES 2022-241) provides additional information about B&B:16/20.

Further information on B&B may be obtained from

Aurora D’Amico
Tracy Hunt-White
Longitudinal Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
aurora.damico@ed.gov
tracy.hunt-white@ed.gov
https://nces.ed.gov/surveys/b&b


The Beginning Postsecondary Students Longitudinal Study (BPS) provides information on persistence, progress, and attainment for 6 years after initial time of entry into postsecondary education. BPS includes traditional and nontraditional (e.g., older) students and is representative of all beginning students in postsecondary education in a given year. Initially, these individuals are surveyed in the National Postsecondary Student Aid Study (NPSAS) during the year in which they first begin their postsecondary education. These same students are surveyed again 2 and 5 years later through the BPS. By starting with a cohort that has already entered postsecondary education and following it for 6 years, the BPS can determine the extent to which students who start postsecondary education at various ages differ in their progress, persistence, and attainment, as well as their entry into the workforce. The first BPS was conducted in 1989–90, with follow-ups in 1992 (BPS:90/92) and 1994 (BPS:90/94). The second BPS was conducted in 1995–96, with follow-ups in 1998 (BPS:96/98) and 2001 (BPS:96/01). The third BPS was conducted in 2003–04, with follow-ups in 2006 (BPS:04/06) and 2009 (BPS:04/09).

The fourth BPS was conducted in 2012, with follow-ups in 2014 (BPS:12/14) and 2017 (BPS:12/17). In the base year, 1,690 institutions were sampled, all of which were confirmed eligible to participate. In addition, 128,120 students were sampled, of whom 123,600 were eligible to participate in the NPSAS:12 study. In the first follow-up (BPS:12/14), of the 35,540 NPSAS:12 sample students eligible for BPS, 24,770 responded, for an unweighted student response rate of 70 percent and a weighted response rate of 68 percent.
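The distinction between unweighted and weighted response rates mentioned above can be illustrated with a small, entirely hypothetical sketch. The weights and counts below are invented for illustration and are not BPS figures; actual NCES calculations use the survey base weights.

```python
# Hypothetical illustration of unweighted vs. weighted response rates.
# Each sample member carries a base weight (the number of population
# members they represent); the two rates can differ when weights vary.

def response_rates(base_weights, responded):
    """Return (unweighted, weighted) response rates as percentages."""
    n_eligible = len(base_weights)
    n_responded = sum(responded)
    unweighted = 100 * n_responded / n_eligible
    weighted_total = sum(base_weights)
    weighted_resp = sum(w for w, r in zip(base_weights, responded) if r)
    weighted = 100 * weighted_resp / weighted_total
    return unweighted, weighted

# Toy cohort: 5 eligible sample members with unequal base weights.
weights = [100.0, 250.0, 50.0, 400.0, 200.0]
responded = [True, True, False, True, False]
unw, wtd = response_rates(weights, responded)
# unweighted = 3/5 = 60 percent; weighted = 750/1,000 = 75 percent
```

Because the nonrespondents in this toy example carry relatively small weights, the weighted rate exceeds the unweighted rate; in BPS:12/14 the reverse pattern held (70 percent unweighted versus 68 percent weighted).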

Further information on BPS may be obtained from

Aurora D’Amico
David Richards
Longitudinal Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
aurora.damico@ed.gov
david.richards@ed.gov
https://nces.ed.gov/surveys/bps


The Common Core of Data (CCD) is NCES’s primary database on public elementary and secondary education in the United States. It is a comprehensive, annual, national statistical database of all public elementary and secondary schools and school districts containing data designed to be comparable across all states. This database can be used to select samples for other NCES surveys and provide basic information and descriptive statistics on public elementary and secondary schools and schooling in general.

The CCD collects statistical information annually from approximately 99,000 public elementary and secondary schools and approximately 19,000 public school districts (including supervisory unions and regional education service agencies) in the 50 states, the District of Columbia, the Department of Defense Education Activity (DoDEA), the Bureau of Indian Education (BIE), Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. Three categories of information are collected in the CCD survey: general descriptive information on schools and school districts, data on students and staff, and fiscal data. The general school and district descriptive information includes name, address, and phone number; the data on students and staff include selected demographic characteristics; and the fiscal data pertain to revenues and current expenditures.

The EDFacts data collection system is the primary collection tool for the CCD. NCES works collaboratively with the U.S. Department of Education’s Performance Information Management Service to develop the CCD collection procedures and data definitions. Coordinators from state education agencies (SEAs) submit the CCD data at different levels (school, agency, and state) to the EDFacts collection system. Prior to submitting CCD files to EDFacts, SEAs must collect and compile information from their respective local education agencies (LEAs) through established administrative records systems within their state or jurisdiction.

Once SEAs have completed their submissions, the CCD survey staff analyzes and verifies the data for quality assurance. Even though the CCD is a universe collection and thus not subject to sampling errors, nonsampling errors can occur. The two potential sources of nonsampling errors are nonresponse and inaccurate reporting. NCES attempts to minimize nonsampling errors through the use of annual training of SEA coordinators, extensive quality reviews, and survey editing procedures.

The NCES Education Demographic and Geographic Estimates (EDGE) program develops annually updated point locations (latitude and longitude) for public elementary and secondary schools included in the CCD database. The estimated location of schools and agency administrative offices is primarily derived from the physical address reported in the CCD directory files. The NCES EDGE program collaborates with the U.S. Census Bureau’s EDGE Branch to develop point locations for schools reported in the annual CCD directory file. For more information about NCES school point data, please see https://nces.ed.gov/programs/edge/Geographic/SchoolLocations.

The CCD survey consists of five components: the Public Elementary/Secondary School Universe Survey, the Local Education Agency (School District) Universe Survey, the State Nonfiscal Survey of Public Elementary/Secondary Education, the National Public Education Financial Survey (NPEFS), and the School District Finance Survey (F-33).

Further information on the nonfiscal CCD data may be obtained from

Chen-Su Chen
Elementary and Secondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
chen-su.chen@ed.gov
https://nces.ed.gov/ccd

Further information on the fiscal CCD data may be obtained from

Stephen Cornman
Elementary and Secondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
stephen.cornman@ed.gov
https://nces.ed.gov/ccd

The Public Elementary/Secondary School Universe Survey includes all U.S. public schools providing education services to prekindergarten, kindergarten, grades 1–13, and ungraded students.

The Public Elementary/Secondary School Universe Survey includes data for variables such as NCES school ID number, state school ID number, name of the school, name of the agency that operates the school, mailing address, physical location address, phone number, school type, operational status, county number, county name, full-time-equivalent (FTE) classroom teacher count, low/high grade span offered, school level, students eligible for free lunch, students eligible for reduced-price lunch, total students eligible for free and reduced-price lunch, and student totals and detail (by grade, by race/ethnicity, and by sex). The survey also contains flags indicating whether a school is Title I targeted assistance eligible, Title I schoolwide eligible, a magnet school, a charter school, a shared-time school, or a BIE school, as well as which grades are offered at the school.


The coverage of the Local Education Agency Universe Survey includes all school districts and administrative units providing education services to prekindergarten, kindergarten, grades 1–13, and ungraded students.

The Local Education Agency Universe Survey includes the following variables: NCES agency ID number, state agency ID number, agency name, phone number, mailing address, physical location address, agency type code, supervisory union number, American National Standards Institute (ANSI) state and county code, county name, core based statistical area (CBSA), metropolitan/micropolitan code, metropolitan status code, locale code, congressional district, operational status code, BIE agency status, low/high grade span offered, agency charter status, number of schools, number of full-time-equivalent teachers, number of ungraded students, number of PK–13 students, instructional staff fields, support staff fields, and LEA charter status.


The State Nonfiscal Survey of Public Elementary/Secondary Education provides state-level, aggregate information about students and staff in public elementary and secondary education. This survey covers public school student membership by grade, race/ethnicity, and state or jurisdiction and covers number of staff in public schools by category and state or jurisdiction. Beginning with the 2006–07 school year, the number of diploma recipients and other high school completers were no longer included in the State Nonfiscal Survey of Public Elementary/Secondary Education File.


The purpose of the National Public Education Financial Survey (NPEFS) is to provide state-level aggregate data on revenues and expenditures for public elementary and secondary education. The data collected are useful to (1) chief officers of state education agencies; (2) policymakers in the executive and legislative branches of federal and state governments; (3) education policy and public policy researchers; (4) the press; and (5) citizens interested in information about education finance.

Data for NPEFS are collected from SEAs in the 50 states, the District of Columbia, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. The data file is organized by state or jurisdiction and contains revenue data by funding source; expenditure data by function (the activity being supported by the expenditure) and object (the category of expenditure); average daily attendance data; and total student membership data from the CCD State Nonfiscal Survey of Public Elementary/Secondary Education.


The purpose of the School District Finance Survey (F-33) is to provide finance data for all LEAs that provide free public elementary and secondary education in the United States. National and state totals are not included (national- and state-level figures are presented, however, in the National Public Education Financial Survey).

NCES partners with the U.S. Census Bureau in the collection of school district finance data. The Census Bureau distributes Census Form F-33, Annual Survey of School System Finances, to all SEAs, and representatives from the SEAs collect and edit data from their LEAs and submit data to the Census Bureau. The Census Bureau then produces two data files: one for distribution and reporting by NCES and the other for distribution and reporting by the Census Bureau. The files include variables for revenues by source, expenditures by function and object, indebtedness, assets, and student membership counts, as well as identification variables.

The coverage of the F-33 survey is different from the coverage of the NPEFS survey, as NPEFS includes special state-run and federal-run school districts that are not included in the F-33. In addition, variance in data availability between the two surveys may occur in cases where some data are available at the state level but not at the district level, and this might result in state-aggregated district totals from F-33 differing from the state totals in NPEFS. When states submit NPEFS and F-33 data in their own financial accounting formats instead of the NCES-requested format, variance in the state procedures may result in variance in the data. In these instances, Census Bureau analysts design and implement a crosswalk system to conform state-formatted data to the format for variables in the F-33. Also, differences between the two surveys in the reporting of expenditures for similar data items can occur when there are differences in the methodology that the state respondents use to crosswalk their NPEFS or F-33 data. Finally, the imputation and editing processes and procedures of the two surveys can vary. For further detail on imputations and data editing in the F-33 and NPEFS surveys, please see the FY 20 NCES F-33 (Cornman et al. 2022 [NCES 2022-304]) and NPEFS (Cornman et al. 2022 [NCES 2022-302, at https://nces.ed.gov/ccd/pdf/2022302_FY20_NPEFS_Documentation.pdf]) survey documentation.
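The crosswalk process described above can be sketched in miniature: state accounting codes are mapped onto common survey variables and aggregated. Everything in this sketch is hypothetical (the state codes, the amounts, and the mapping itself); actual crosswalks are designed case by case by Census Bureau analysts, and the F-33 variable names shown are used only as illustrative labels.

```python
# Minimal sketch of a state-to-F-33 crosswalk: state-specific
# accounting codes are mapped to a common survey variable and summed.
# All codes, amounts, and mappings here are hypothetical.

STATE_TO_F33 = {
    "1100": "TCURINST",  # instruction salaries -> instruction spending
    "1200": "TCURINST",  # instruction benefits -> instruction spending
    "2300": "TCURSSVC",  # general administration -> support services
}

def crosswalk(state_records):
    """Aggregate state-format (code, amount) records into common variables."""
    totals = {}
    for code, amount in state_records:
        f33_var = STATE_TO_F33.get(code)
        if f33_var is None:
            continue  # unmapped codes would be flagged for analyst review
        totals[f33_var] = totals.get(f33_var, 0) + amount
    return totals

records = [("1100", 5_000_000), ("1200", 1_200_000), ("2300", 800_000)]
# crosswalk(records) -> {"TCURINST": 6200000, "TCURSSVC": 800000}
```

Differences in how two states map the same expenditure category (for instance, whether benefits are folded into instruction) are exactly the source of the cross-survey variance the paragraph above describes.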


The Early Childhood Longitudinal Study, Kindergarten Class of 2010–11 (ECLS-K:2011) provides detailed information on the school achievement and experiences of students throughout their elementary school years. The students who participated in the ECLS-K:2011 were followed longitudinally from the kindergarten year (the 2010–11 school year) through the spring of 2016, when most of them were expected to be in 5th grade. This sample of students is designed to be nationally representative of all students who were enrolled in kindergarten or who were of kindergarten age and being educated in an ungraded classroom or school in the United States in the 2010–11 school year, including those in public and private schools, those who attended full-day and part-day programs, those who were in kindergarten for the first time, and those who were kindergarten repeaters. Students who attended early learning centers or institutions that offered education only through kindergarten are included in the study sample and represented in the cohort if those institutions were included in NCES’s Common Core of Data or Private School Survey universe collections.

The ECLS-K:2011 placed emphasis on measuring students’ experiences within multiple contexts and development in multiple domains. The design of the study included the collection of information from the students, their parents/guardians, their teachers, and their schools. Information was also collected from their before- and after-school care providers in the kindergarten year.

A nationally representative sample of approximately 18,170 children from about 1,310 schools participated in the base-year administration of the ECLS-K:2011 in the 2010–11 school year. The sample included children from different racial/ethnic and socioeconomic backgrounds. Asian/Pacific Islander students were oversampled to ensure that the sample included enough students of this race/ethnicity to make accurate estimates for the group as a whole. Nine data collections were conducted: fall and spring of the children’s kindergarten year (the base year), fall 2011 and spring 2012 (the 1st-grade year), fall 2012 and spring 2013 (the 2nd-grade year), spring 2014 (the 3rd-grade year), spring 2015 (the 4th-grade year), and spring 2016 (the 5th-grade year). Although the study refers to later rounds of data collection by the grade the majority of children were expected to be in (that is, the modal grade for children who were in kindergarten in the 2010–11 school year), children are included in subsequent data collections regardless of their grade level.

A total of approximately 780 of the 1,310 originally sampled schools participated during the base year of the study. This translates to a weighted unit response rate (weighted by the base weight) of 63 percent for the base year. In the base year, the weighted child assessment unit response rate was 87 percent for the fall data collection and 85 percent for the spring collection, and the weighted parent unit response rate was 74 percent for the fall collection and 67 percent for the spring collection.

Fall and spring data collections were conducted in the 2011–12 school year, when the majority of the children were in the 1st grade. The fall collection was conducted within a 33 percent subsample of the full base-year sample, and the spring collection was conducted within the full base-year sample. The weighted child assessment unit response rate was 89 percent for the fall data collection and 88 percent for the spring collection, and the weighted parent unit response rate was 87 percent for the fall data collection and 76 percent for the spring data collection.

In the 2012–13 data collection (when the majority of the children were in the 2nd grade) the weighted child assessment unit response rate was 84.0 percent in the fall and 83.4 percent in the spring. In the 2014 spring data collection (when the majority of the children were in the 3rd grade), the weighted child assessment unit response rate was 79.9 percent. In the 2015 spring data collection (when the majority of the children were in the 4th grade), the weighted child assessment unit response rate was 77.3 percent; in the 2016 spring data collection (when the majority of the children were in the 5th grade), the weighted child assessment unit response rate was 72.4 percent.

Further information on ECLS-K:2011 may be obtained from

Jill McCarroll
Longitudinal Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
ecls@ed.gov
https://nces.ed.gov/ecls/kindergarten2011.asp


The NCES Education Demographic and Geographic Estimates (EDGE) program designs and develops information resources to help understand the social and spatial context of education in the United States. It uses data from the U.S. Census Bureau’s American Community Survey to create custom indicators of social, economic, and housing conditions for school-age children and their parents. It also uses spatial data collected by NCES and the Census Bureau to create geographic locale indicators, school point locations, school district boundaries, and other types of data to support spatial analysis.

Further information regarding the EDGE program may be obtained from

Douglas Geverdt
Elementary and Secondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
douglas.geverdt@ed.gov
https://nces.ed.gov/programs/edge/


The Fast Response Survey System (FRSS) was established in 1975 to collect issue-oriented data quickly, with a minimal burden on respondents. The FRSS, whose surveys collect and report data on key education issues at the elementary and secondary levels, was designed to meet the data needs of U.S. Department of Education analysts, planners, and decisionmakers when information could not be collected quickly through NCES’s large recurring surveys. Findings from FRSS surveys have been included in congressional reports, testimony to congressional subcommittees, NCES reports, and other Department of Education reports. The findings are also often used by state and local education officials.

Data collected through FRSS surveys are representative at the national level, drawing from a sample that is appropriate for each study. The FRSS collects data from state education agencies and national samples of other educational organizations and participants, including local education agencies, public and private elementary and secondary schools, elementary and secondary school teachers and principals, and public libraries and school libraries. To ensure a minimal burden on respondents, the surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Sample sizes are relatively small (usually about 1,000 to 1,500 respondents per survey) so that data collection can be completed quickly.

The FRSS survey “School Safety and Discipline: 2013–14” (FRSS 106, 2014) collected information on specific safety and discipline plans and practices, training for classroom teachers and aides related to school safety and discipline issues, security personnel, frequency of specific discipline problems, and number of incidents of various offenses.

The sample for the survey was selected from the 2011–12 Common Core of Data (CCD) Public School Universe file. Approximately 1,600 regular public elementary schools, middle schools, high schools, and combined schools in the 50 states and the District of Columbia were selected for the study. (For the purposes of the study, “regular” schools included charter schools.)

In February 2014, questionnaires and cover letters were mailed to the principal of each sampled school. The letter requested that the questionnaire be completed by the person most knowledgeable about discipline issues at the school, and respondents were offered the option of completing the survey either on paper or online. Telephone follow-up for survey nonresponse and data clarification was initiated in March 2014 and completed in July 2014. About 1,350 schools completed the survey. The unweighted survey response rate was 86 percent, and the weighted response rate using the initial base weights was 85 percent. The survey weights were adjusted for questionnaire/unit nonresponse, and the data were then weighted to yield national estimates that represent all eligible regular public schools in the United States.
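The nonresponse weight adjustment mentioned above can be sketched in simplified form: within each adjustment class, respondents' base weights are inflated so that respondents also represent the eligible nonrespondents in their class. The classes, weights, and school types below are hypothetical; actual FRSS adjustments use more refined weighting classes and procedures.

```python
# Simplified sketch of a class-based nonresponse weighting adjustment.
# Within each class, respondent weights are scaled up by the ratio of
# total eligible weight to respondent weight; nonrespondents get zero.
from collections import defaultdict

def adjust_for_nonresponse(units):
    """units: list of (adjustment_class, base_weight, responded) tuples."""
    class_total = defaultdict(float)  # sum of base weights, all eligibles
    class_resp = defaultdict(float)   # sum of base weights, respondents
    for cls, w, responded in units:
        class_total[cls] += w
        if responded:
            class_resp[cls] += w
    adjusted = []
    for cls, w, responded in units:
        if responded:
            adjusted.append(w * class_total[cls] / class_resp[cls])
        else:
            adjusted.append(0.0)      # nonrespondents carry no weight
    return adjusted

units = [("elementary", 10.0, True), ("elementary", 10.0, False),
         ("secondary", 20.0, True), ("secondary", 20.0, True)]
# The elementary respondent's weight doubles (10 -> 20) to cover the
# elementary nonrespondent; secondary weights are unchanged.
```

The adjusted weights preserve each class's total weight, which is what allows the final estimates to represent all eligible regular public schools rather than only the responding ones.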

One of the goals of the survey was to allow comparisons to the School Survey on Crime and Safety (SSOCS) data. Consistent with the approach used on SSOCS, respondents were asked to report for the current 2013–14 school year to date. Information about violent incidents that occurred in the school between the time the survey was completed and the end of the school year is not included in the survey data. The report Public School Safety and Discipline: 2013–14 (NCES 2015-051) presents selected findings from the survey.

Further information on this FRSS survey may be obtained from

Chris Chapman
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
chris.chapman@ed.gov
https://nces.ed.gov/surveys/frss


The High School Longitudinal Study of 2009 (HSLS:09) is a nationally representative, longitudinal study of approximately 21,000 9th-grade students in 944 schools who were followed through their secondary and postsecondary years. The study focuses on understanding students’ trajectories from the beginning of high school into postsecondary education, the workforce, and beyond. The HSLS:09 questionnaire is focused on, but not limited to, information on science, technology, engineering, and mathematics (STEM) education and careers. It is designed to provide data on mathematics and science education, the changing high school environment, and postsecondary education. This study features a new student assessment in algebra skills, reasoning, and problem solving and includes surveys of students, their parents, math and science teachers, and school administrators, as well as a new survey of school counselors.

The HSLS:09 base year took place in the 2009–10 school year, with a randomly selected sample of fall-term 9th-graders in more than 900 public and private high schools that had both a 9th and an 11th grade. Students took a mathematics assessment and survey online. Students’ parents, principals, mathematics and science teachers, and the schools’ lead counselors completed surveys by phone or online.

The HSLS:09 student questionnaire includes interest and motivation items for measuring key factors predicting choice of postsecondary paths, including majors and eventual careers. This study explores the roles of different factors in the development of a student’s commitment to attend college and then take the steps necessary to succeed in college (the right courses, courses in specific sequences, etc.). Questionnaires in this study have asked more questions of students and parents regarding reasons for selecting specific colleges (e.g., academic programs, financial aid and access prices, and campus environment).

The first follow-up of HSLS:09 occurred in the spring of 2012, when most sample members were in the 11th grade. A between-round postsecondary status update survey took place in the spring of students’ expected graduation year (2013). It asked respondents questions about high school completion status; college applications, acceptances, and rejections; and their postsecondary plans and choices. In the fall of 2013 and the spring of 2014, high school transcripts were collected and coded.

A full second follow-up took place in 2016, when most sample members were 3 years beyond high school graduation.

Further information on HSLS:09 may be obtained from

Elise Christopher
Longitudinal Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
hsls09@ed.gov
https://nces.ed.gov/surveys/hsls09/


High school transcript studies have been conducted since 1982 in conjunction with major NCES data collections. The studies collect information that is contained in a student’s high school record—courses taken while attending secondary school, information on credits earned, when specific courses were taken, and final grades.

A high school transcript study was conducted in 2004 as part of the Education Longitudinal Study of 2002 (ELS:2002/2004). A total of 1,550 schools participated in the request for transcripts, for an unweighted participation rate of approximately 79 percent. Transcript information was received on 14,920 members of the student sample (not just graduates), for an unweighted response rate of 91 percent.

Similar studies were conducted on the coursetaking patterns of 1982, 1987, 1990, 1992, 1994, 1998, 2000, 2005, 2009, and 2019 high school graduates. The 1982 data are based on approximately 12,000 transcripts collected by the High School and Beyond Longitudinal Study (HS&B). The 1987 data are based on approximately 25,000 transcripts from 430 schools obtained as part of the 1987 NAEP High School Transcript Study, a scope comparable to that of the NAEP transcript studies conducted in 1990, 1994, 1998, and 2000. The 1992 data are based on approximately 15,000 transcripts collected by the National Education Longitudinal Study of 1988 (NELS:88/92). The 2005 data, from the 2005 NAEP High School Transcript Study, come from a sample of over 26,000 transcripts from 640 public schools and 80 private schools. The 2009 data are from the 2009 NAEP High School Transcript Study, which collected transcripts from a nationally representative sample of 37,700 high school graduates from about 610 public schools and 130 private schools. The 2019 data are from the 2019 NAEP High School Transcript Study, which collected and analyzed transcripts from a nationally representative sample of 47,000 high school graduates from about 1,400 schools (both public and nonpublic). Most of the high school transcripts collected in the 2019 NAEP High School Transcript Study were those of students who participated in the 2019 NAEP 12th-grade mathematics and science assessments, and this facilitated analyses of the relationships between high school coursetaking patterns and graduates’ achievement based on their performance on the assessments.

Because the 1982 HS&B transcript study identified students with disabilities using a different method than the NAEP transcript studies conducted after 1982, and in order to make the statistical summaries as comparable as possible, all counts and percentages in this report are restricted to students whose records indicate that they did not participate in a special education program. This restriction lowers the number of 1990 graduates represented in the tables to 20,870.

Further information on NAEP high school transcript studies may be obtained from

Linda Hamilton
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
linda.hamilton@ed.gov
https://nces.ed.gov/surveys/hst/


The Integrated Postsecondary Education Data System (IPEDS) consists of 12 interrelated survey components that provide information on postsecondary institutions and academic libraries at these institutions, student enrollment, student financial aid, programs of study offered, retention and graduation rates, degrees and certificates conferred, and the human and financial resources involved in the provision of institutionally based postsecondary education. Prior to 2000, the IPEDS survey had the following subject-matter components: Institutional Characteristics; Total Institutional Activity (these data were moved to the Institutional Characteristics component in 1990–91, then to the Fall Enrollment component in 2000–01); Fall Enrollment; Fall Staff; Salaries, Tenure, and Fringe Benefits of Full-Time Faculty; Completions; Finance; Academic Libraries (in 2000, the Academic Libraries component separated from the IPEDS collection); and Graduation Rates. Since 2000, IPEDS survey components occurring in a particular collection year have been organized into three seasonal collection periods: fall, winter, and spring. The Institutional Characteristics and Completions components first took place during the fall 2000 collection. The Employees by Assigned Position (EAP); Salaries, Tenure, and Fringe Benefits of Full-Time Faculty; and Fall Staff components first took place during the winter 2001–02 collection. The Fall Enrollment, Student Financial Aid, Finance, and Graduation Rates components first took place during the spring 2001 collection. In the winter 2005–06 data collection, the EAP; Fall Staff; and Salaries, Tenure, and Fringe Benefits of Full-Time Faculty components were merged into the Human Resources component. During the 2007–08 collection year, the Fall Enrollment component was broken into two components: 12-month Enrollment (taking place in the fall collection) and Fall Enrollment (taking place in the spring collection). 

In the 2011–12 IPEDS data collection year, the Student Financial Aid component was moved to the winter data collection to aid in the timing of the net price of attendance calculations displayed on the College Navigator (https://nces.ed.gov/collegenavigator/). In the 2012–13 IPEDS data collection year, the Human Resources component was moved from the winter data collection to the spring data collection, and in the 2013–14 data collection year, the Graduation Rates and Graduation Rates 200 Percent components were moved from the spring data collection to the winter data collection. In the 2014–15 data collection year, a new component (Admissions) was added to IPEDS and a former IPEDS component (Academic Libraries) was reintegrated into IPEDS. The Admissions component, created out of admissions data contained in the fall data collection’s Institutional Characteristics component, was made a part of the winter data collection. The Academic Libraries component, after having been conducted as a survey independent of IPEDS between 2000 and 2012, was reintegrated into IPEDS as part of the spring data collection. Finally, in the 2015–16 data collection year, the Outcome Measures survey component was added to IPEDS.

Beginning in 2008–09, the first-professional degree category was combined with the doctor’s degree category. However, some degrees formerly identified as first-professional that take more than 2 full-time-equivalent academic years to complete, such as those in Theology (M.Div., M.H.L./Rav), are included in the master’s degree category. Doctor’s degrees were broken out into three distinct categories: research/scholarship, professional practice, and other doctor’s degrees.

The collection of race/ethnicity data also changed in 2008–09. IPEDS now collects a count of students who identify as Hispanic and counts of non-Hispanic students who identify with each race category. The “Asian” race category is now separate from the “Native Hawaiian or Other Pacific Islander” category, and a new category of “Two or more races” has been added.

The degree-granting institutions portion of IPEDS is a census of colleges that award associate’s or higher degrees and are eligible to participate in Title IV federal financial aid programs. Prior to 1993, data from technical and vocational institutions were collected through a sample survey. Beginning in 1993, data have been gathered in a census of all postsecondary institutions, and beginning in 1996, the survey was restricted to institutions participating in Title IV programs.

The classification of institutions offering college and university education changed as of 1996. Prior to 1996, institutions that either had courses leading to an associate’s or higher degree or that had courses accepted for credit toward those degrees were considered higher education institutions. Higher education institutions were accredited by an agency or association that was recognized by the U.S. Department of Education or were recognized directly by the Secretary of Education. The newer standard includes institutions that award associate’s or higher degrees and that are eligible to participate in Title IV federal financial aid programs. For an institution to be eligible to participate in Title IV financial aid programs, it must be accredited by an agency or association that was recognized by the U.S. Department of Education or be recognized directly by the Secretary of Education. Tables that contain any data according to this standard are titled “degree-granting” institutions. Time-series tables may contain data from both series, and they are noted accordingly. The impact of this change on data collected in 1996 was not large. For example, tables on faculty salaries and benefits were affected only to a small extent. Also, degrees awarded at the bachelor’s level or higher were not heavily affected. The largest impact was on private 2-year college enrollment. In contrast, most of the data on public 4-year colleges were affected to a minimal extent. The impact on enrollment in public 2-year colleges was noticeable in certain states, such as Arizona, Arkansas, Georgia, Louisiana, and Washington, but was relatively small at the national level. Overall, total enrollment for all institutions was about one-half of 1 percent higher in 1996 for degree-granting institutions than for higher education institutions.

Prior to the establishment of IPEDS in 1986, the Higher Education General Information Survey (HEGIS) acquired and maintained statistical data on the characteristics and operations of higher education institutions. Implemented in 1966, HEGIS was an annual universe survey of institutions accredited at the college level by an agency recognized by the Secretary of the U.S. Department of Education. These institutions were listed in NCES’s Education Directory, Colleges and Universities.

HEGIS surveys collected information on institutional characteristics, faculty salaries, finances, libraries, fall enrollment, student residence and migration, and earned degrees. Since these surveys, like IPEDS, were distributed to all higher education institutions, the data presented are not subject to sampling error. However, they are subject to nonsampling error, the sources of which varied with the survey instrument.

The NCES Taskforce for IPEDS Redesign recognized that there were issues related to the consistency of data definitions as well as the accuracy, reliability, and validity of other quality measures within and across surveys. The IPEDS redesign in 2000 provided institution-specific web-based data forms. While the new system shortened data processing time and provided better data consistency, it did not address the accuracy of the data provided by institutions.

Beginning in 2003–04 with the Prior Year Data Revision System, prior-year data have been available to institutions entering current data. This allows institutions to make changes to their prior-year entries either by adjusting the data or by providing missing data. These revisions allow the evaluation of the data’s accuracy by looking at the changes made.

NCES conducted a study (NCES 2005-175) of the 2002–03 data that were revised in 2003–04 to determine the accuracy of the imputations, track the institutions that submitted revised data, and analyze the revised data they submitted. When institutions made changes to their data, NCES accepted that the revised data were the most accurate, correct, and “true” data. The data were analyzed for the number and type of institutions making changes, the type of changes, the magnitude of the changes, and the impact on published data.

Because NCES imputes for missing data, imputation procedures were also addressed by the Redesign Taskforce. For the 2003–04 assessment, differences between revised values and values that were imputed in the original files were computed (i.e., revised value minus imputed value). These differences were then used to assess the effectiveness of the imputation procedures, with the size of the differences indicating their accuracy. To assess the overall impact of changes on aggregate IPEDS estimates, published tables for each component were reconstructed using the revised 2002–03 data. These reconstructed tables were then compared to the published tables to determine the magnitude and direction of aggregate bias. This analysis revealed that, generally, differences between originally published estimates and revised estimates were small.
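
The comparison just described (revised value minus originally imputed value, followed by a check on aggregate bias) can be sketched as below. The institution identifiers and dollar-free counts are hypothetical illustrations, not figures from the actual study.

```python
# Sketch of the imputation-accuracy check described above.
# Institution IDs and values are hypothetical.

# Originally imputed values vs. the revised values later supplied by institutions.
imputed = {"inst_a": 1200, "inst_b": 340, "inst_c": 5075}
revised = {"inst_a": 1150, "inst_b": 340, "inst_c": 5100}

# Per-institution difference: revised value minus imputed value.
differences = {k: revised[k] - imputed[k] for k in imputed}

# Aggregate bias: how much a published total shifts when the
# revised data replace the originally imputed data.
aggregate_bias = sum(revised.values()) - sum(imputed.values())

print(differences)     # {'inst_a': -50, 'inst_b': 0, 'inst_c': 25}
print(aggregate_bias)  # -25
```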

Since the 2000–01 data collection year, IPEDS data collections have been web based. Data have been provided by “keyholders,” institutional representatives appointed by campus chief executives, who are responsible for ensuring that survey data submitted by the institution are correct and complete. Because Title IV institutions are the primary focus of IPEDS and because these institutions are required to respond to IPEDS, response rates for Title IV institutions have been high (data on specific components are cited below). More details on the accuracy and reliability of IPEDS data can be found in the Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175).

Further information on IPEDS may be obtained from

Samuel Barbett
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
samuel.barbett@ed.gov
https://nces.ed.gov/ipeds/

The 12-month period during which data are collected is July 1 through June 30. Unduplicated headcount enrollment data are collected by gender, attendance status (full-time, part-time), race/ethnicity, first-time (entering), transfer-in (non-first-time entering), continuing/returning, and degree/certificate-seeking statuses for undergraduate students and by race/ethnicity and gender for graduate students. The 12-month Enrollment component also collects total enrollment in distance education courses. Instructional activity is collected as total credit and/or clock hours attempted at the undergraduate, graduate, and doctor’s professional levels, and these data are used to calculate a full-time-equivalent (FTE) enrollment. FTE enrollment is useful for gauging the size of the educational enterprise at the institution. Prior to the 2007–08 IPEDS data collection, the data collected in the 12-month Enrollment component were part of the Fall Enrollment component, which is conducted during the spring data collection period. However, to improve the timeliness of the data, a separate 12-month Enrollment survey component was developed in 2007. These data are now collected in the fall for the previous academic year.
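
The FTE calculation from total instructional activity can be sketched as follows. The divisors shown (30 undergraduate and 24 graduate semester credit hours per full-time year) are illustrative assumptions for a semester-calendar institution, not figures given in the text above.

```python
# Illustrative full-time-equivalent (FTE) enrollment calculation from
# total credit hours attempted. Divisors are assumed values for a
# semester-calendar institution.

def fte_enrollment(ug_credit_hours, gr_credit_hours,
                   ug_divisor=30, gr_divisor=24):
    """Convert total credit hours attempted into an FTE enrollment count."""
    return ug_credit_hours / ug_divisor + gr_credit_hours / gr_divisor

# e.g., 90,000 undergraduate and 12,000 graduate credit hours attempted:
print(fte_enrollment(90_000, 12_000))  # 3500.0
```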

The response rate for the 12-month Enrollment component of the fall 2020 data collection was nearly 100 percent. Data from 6 of the 6,055 Title IV institutions that were expected to respond to this component were imputed due to unit nonresponse.
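
The figures above imply a simple unit response rate calculation, sketched here using the counts from the preceding paragraph:

```python
# Unit response rate implied by the fall 2020 12-month Enrollment figures:
# 6 of 6,055 expected Title IV institutions had their data imputed.
expected = 6_055
nonrespondents = 6
response_rate = (expected - nonrespondents) / expected
print(round(response_rate * 100, 2))  # 99.9
```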

Further information on the IPEDS 12-month Enrollment component may be obtained from

Tara Lawley
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
tara.lawley@ed.gov
https://nces.ed.gov/ipeds/


The Completions component collects data on the number of students who complete a postsecondary education program (completers) and the number of postsecondary awards earned (completions). This component was part of the HEGIS series throughout its existence. However, the degree classification taxonomy was revised in 1970–71, 1982–83, 1986–87, 1991–92, 2002–03, 2009–10, and 2020–21. Collection of degree data has been maintained through IPEDS.

The nonresponse rate does not appear to be a significant source of nonsampling error for this component. The response rate over the years has been high; for the fall 2021 Completions component, the response rate rounded to 100 percent. Data from 3 of the 5,975 Title IV institutions that were expected to respond to this component were imputed due to unit nonresponse.

Further information on the IPEDS Completions component may be obtained from

Michelle Coon
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
michelle.coon@ed.gov
https://nces.ed.gov/ipeds/


This survey collects the basic information necessary to classify institutions, including control, level, and types of programs offered, as well as information on tuition, fees, and room and board charges. Beginning in 2000, the survey collected institutional pricing data from institutions with first-time, full-time, degree/certificate-seeking undergraduate students. The survey also collects data on tuition and fees for undergraduate students, graduate students, and students enrolled in select doctoral programs. Unduplicated full-year enrollment counts and instructional activity are now collected in the 12-month Enrollment survey.

In the fall 2021 data collection, the response rate for Title IV entities on the Institutional Characteristics component was nearly 100 percent. Data from 1 of the 6,045 Title IV entities that were expected to respond to this component were imputed due to unit nonresponse.

Further information on the IPEDS Institutional Characteristics component may be obtained from

Stacey Peterson
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
stacey.peterson@ed.gov
https://nces.ed.gov/ipeds/


This component was part of the spring data collection from IPEDS data collection years 2000–01 to 2010–11, but it moved to the winter data collection starting with the 2011–12 IPEDS data collection year. This move assists with the timing of the net price of attendance calculations displayed on College Navigator (https://nces.ed.gov/collegenavigator/).

Financial aid data are collected for undergraduate students. Data are collected regarding federal grants, state and local government grants, institutional grants, and loans. State, local, and institutional grants include grants, scholarships, waivers, and some stipends. The collected data include the number of students receiving each type of financial assistance and the average amount of aid received by type of aid. Beginning in 2008–09, the student financial aid data collected include greater detail on the types of aid offered. Beginning with data collected for 2020–21, data for all undergraduates are disaggregated by degree/certificate-seeking and non-degree/certificate-seeking status.

In the winter 2021–22 data collection, the Student Financial Aid component collected data about financial aid awarded to undergraduate students, with particular emphasis on full-time, first-time degree/certificate-seeking undergraduate students awarded financial aid for the 2020–21 academic year. Student counts and aid award amounts were collected to calculate the net price of attendance for two subsets of full-time, first-time degree/certificate-seeking undergraduate students: those awarded any grant or scholarship aid from the federal, state, or local government, or the institution, and those awarded Title IV aid. In addition, the component collected data on undergraduate and graduate students receiving select veterans’ and military tuition assistance benefits.
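
The net price calculation these data support can be sketched as follows: total cost of attendance minus average grant and scholarship aid for the relevant subset of full-time, first-time students. The cost components and dollar figures below are hypothetical illustrations.

```python
# Sketch of a net price of attendance calculation. All dollar amounts
# are hypothetical; the cost components shown are assumed, not quoted
# from the text above.

cost_of_attendance = {
    "tuition_and_fees": 9_800,
    "books_and_supplies": 1_200,
    "room_and_board": 11_000,
    "other_expenses": 2_500,
}
# Average federal, state/local government, and institutional grant and
# scholarship aid awarded to the student subset.
average_grant_scholarship_aid = 7_600

net_price = sum(cost_of_attendance.values()) - average_grant_scholarship_aid
print(net_price)  # 16900
```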

The response rate for the Student Financial Aid component in winter 2021–22 was nearly 100 percent. Of the 5,897 Title IV institutions that were expected to respond, responses were missing for 8 institutions, and these missing data were imputed.

Further information on the IPEDS Student Financial Aid component may be obtained from

Stacey Peterson
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
stacey.peterson@ed.gov
https://nces.ed.gov/ipeds/


In IPEDS data collection years 2012–13 and earlier, the Graduation Rates and Graduation Rates 200 Percent components were collected during the spring collection. In the IPEDS 2013–14 data collection year, however, the Graduation Rates and Graduation Rates 200 Percent collections were moved to the winter data collection.

The 2020–21 Graduation Rates component collected counts of full-time, first-time degree/certificate-seeking undergraduate students beginning their postsecondary education in the specified cohort year and their completion status, as of 150 percent of normal program completion time, at the same institution where the students started. If 150 percent of normal program completion time extended beyond August 31, 2020, the counts as of that date were collected. Four-year institutions used 2014 as the cohort year, while less-than-4-year institutions used 2017 as the cohort year. Four-year institutions also reported on full-time, first-time bachelor’s degree-seeking undergraduate students.

Starting with the 2016–17 Graduation Rates component, two new subcohort groups—students who received Pell Grants and students who received a subsidized Direct loan and did not receive Pell Grants—were added.

Of the 5,429 institutions that were expected to respond to the Graduation Rates component, responses were missing for 4 institutions, and these missing data were imputed.

The 2020–21 Graduation Rates 200 Percent component was designed to combine information reported in a prior collection via the Graduation Rates component with current information about the same cohort of students. From previously collected data, the following counts were obtained: the number of students entering the institution as full-time, first-time degree/certificate-seeking students in a cohort year; the number of students in this cohort completing within 100 and 150 percent of normal program completion time; and the number of cohort exclusions (such as students who left for military service). Then the number of additional cohort exclusions and additional program completers between 151 and 200 percent of normal program completion time was collected. Four-year institutions reported on bachelor’s or equivalent degree-seeking students and used cohort year 2012 as the reference period, while less-than-4-year institutions reported on all students in the cohort and used cohort year 2016 as the reference period.
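
The "percent of normal program completion time" framing used by both Graduation Rates components can be sketched as below. For a 4-year bachelor's program, 150 percent of normal time is 6 years and 200 percent is 8 years; the 2012 cohort year mirrors the one given in the text.

```python
# Sketch of completion-time windows for the Graduation Rates components.
# For a program with a given normal completion time, compute the calendar
# years by which 100, 150, and 200 percent of that time elapse.

def completion_windows(normal_years, cohort_year):
    """Return the calendar year by which each percent of normal time elapses."""
    return {pct: cohort_year + int(normal_years * pct / 100)
            for pct in (100, 150, 200)}

# 4-year institutions, 2012 cohort (Graduation Rates 200 Percent):
print(completion_windows(4, 2012))  # {100: 2016, 150: 2018, 200: 2020}
```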

Of the 5,042 institutions that were expected to respond to the Graduation Rates 200 Percent component, responses were missing for 3 institutions, and these missing data were imputed.

Further information on the IPEDS Graduation Rates and Graduation Rates 200 Percent components may be obtained from

Andrew Mary
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
andrew.mary@ed.gov
https://nces.ed.gov/ipeds/


In the 2014–15 survey year, an Admissions component was added to the winter data collection. This component was created out of the admissions data that had previously been a part of the fall Institutional Characteristics component. Situating these data in a new component in the winter collection enables all institutions to report data for the most recent fall period.

The Admissions component collects information about the selection process for entering first-time degree/certificate-seeking undergraduate students only from institutions that do not have an open admissions policy for entering first-time students. (The Institutional Characteristics Header component asks institutions whether they have an open admissions policy; those that have an open admissions policy do not respond to the Admissions component.) Data obtained from institutions include admissions considerations (e.g., secondary school records, admission test scores), the number of first-time degree/certificate-seeking undergraduate students who applied, the number admitted, and the number enrolled. Data collected for the IPEDS winter 2021–22 Admissions component relate to individuals applying to be admitted during the fall of the 2021–22 academic year (the fall 2021 reporting period).

Of the 1,965 Title IV institutions that were expected to respond to the Admissions component, responses were missing for 1 institution, and these missing data were imputed.

Further information on the IPEDS Admissions component may be obtained from

Michelle Coon
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
michelle.coon@ed.gov
https://nces.ed.gov/ipeds/


First administered in the winter 2015–16 data collection, the Outcome Measures component is designed to provide measures of student success for traditional college students, as well as for nontraditional college students, including those who are part-time and non-first-time (i.e., “transfer-in”).

In the winter 2015–16 data collection, the Outcome Measures component collected data from 2- and 4-year degree-granting institutions on the award and enrollment status for these four cohorts of degree/certificate-seeking undergraduates:

  • First-time, full-time entering students;
  • First-time, part-time entering students; 
  • Non-first-time (or “transfer-in”), full-time entering students; and
  • Non-first-time, part-time entering students.

Since the 2017–18 collection, two new subcohort groups—students who received Pell Grants and students who did not receive Pell Grants—have been added to each of the four main cohorts in the Outcome Measures component, resulting in a total of eight undergraduate subcohorts.

Prior to the 2017–18 collection, cohorts in this component were based on a fall term for academic reporters and a full year for program reporters. Since the 2017–18 collection, all institutions have reported on a full-year cohort.

The cohorts that were a part of the winter 2021–22 data collection consisted of all entering degree/certificate-seeking undergraduate students who began their studies between July 1, 2013, and June 30, 2014. Student completion status was collected as of August 31 at 4 years, 6 years, and 8 years after students entered the institution (e.g., 4-year completion status was measured on August 31, 2017). For students within the cohorts who did not receive a degree or certificate, the Outcome Measures component collected the enrollment status as of 8 years after they entered the reporting institution (August 31, 2021).
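
The status-measurement schedule described above can be sketched as follows: for a cohort entering between July 1, 2013, and June 30, 2014, completion status is measured as of August 31 at 4, 6, and 8 years after entry.

```python
# Sketch of the Outcome Measures status-measurement dates described above.
from datetime import date

cohort_start_year = 2013  # cohort entered July 1, 2013 - June 30, 2014

# Completion status measured as of August 31 at 4, 6, and 8 years after entry.
status_dates = {y: date(cohort_start_year + y, 8, 31) for y in (4, 6, 8)}

print(status_dates[4])  # 2017-08-31
print(status_dates[8])  # 2021-08-31
```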

The response rate for the Outcome Measures component of the winter 2021–22 collection was nearly 100 percent. Of the 3,629 institutions that were expected to respond, 2 responses were missing, and these data were imputed.

Further information on the IPEDS Outcome Measures component may be obtained from

Tara Lawley
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
tara.lawley@ed.gov
https://nces.ed.gov/ipeds/


This survey has been part of the HEGIS and IPEDS series since 1966. Response rates have been relatively high, generally exceeding 85 percent. Beginning in 2000, with web-based data collection, higher response rates were attained. In the spring 2022 data collection, in which the Fall Enrollment component covered student enrollment in fall 2021, the response rate was greater than 99 percent. Of the 5,967 institutions that were expected to respond, 10 institutions did not respond, and these data were imputed.

Beginning with the fall 1986 survey and the introduction of IPEDS (see above), a redesign of the survey resulted in the collection of data by race/ethnicity, gender, level of study (i.e., undergraduate and graduate), and attendance status (i.e., full-time and part-time). Other aspects of the survey include allowing (in alternating years) for the collection of age and residence data. The Fall Enrollment component also collects data on first-time retention rates, student-to-faculty ratios, and student enrollment in distance education courses. Finally, in even-numbered years, 4-year institutions provide enrollment data by level of study, race/ethnicity, and gender for nine selected fields of study or Classification of Instructional Programs (CIP) codes. (The CIP is a taxonomic coding scheme that contains titles and descriptions of primarily postsecondary instructional programs.)

Beginning in 2000, the survey collected instructional activity and unduplicated headcount data, which are needed to compute a standardized, full-time-equivalent (FTE) enrollment statistic for the entire academic year. As of 2007–08, the timeliness of the instructional activity data has been improved by collecting these data in the fall as part of the 12-month Enrollment component instead of in the spring as part of the Fall Enrollment component.

Further information on the IPEDS Fall Enrollment component may be obtained from

Tara Lawley
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
tara.lawley@ed.gov
https://nces.ed.gov/ipeds/


This survey was part of the HEGIS series and has been continued under IPEDS. Substantial changes were made in the financial survey instruments in fiscal year (FY) 1976, FY 1982, FY 1987, FY 1997, and FY 2002. The FY 1976 survey instrument contained numerous revisions to earlier survey forms, which made direct comparisons of line items very difficult. Beginning in FY 1982, Pell Grant data were collected in the categories of federal restricted grant and contract revenues and restricted scholarship and fellowship expenditures. The introduction of IPEDS in the FY 1987 survey included several important changes to the survey instrument and data processing procedures. Beginning in FY 1997, data for private institutions were collected using new financial concepts consistent with Financial Accounting Standards Board (FASB) reporting standards, which provide a more comprehensive view of college finance activities. The data for public institutions continued to be collected using the older survey form. As a result, the data for public and private institutions were no longer comparable and were no longer presented together in analysis tables. In FY 2001, public institutions had the option of either continuing to report using Governmental Accounting Standards Board (GASB) standards or using the new FASB reporting standards. Beginning in FY 2002, public institutions could use the original GASB standards, the FASB standards, or the new GASB Statement 35 standards (GASB35). Beginning in FY 2004, public institutions could no longer submit survey forms based on the original GASB standards. Beginning in FY 2008, public institutions could submit their GASB survey forms using a revised structure that was modified for better comparability with the IPEDS FASB finance forms, or they could use the structure of the prior forms used from FY 2004 to FY 2007.

Similarly, in FY 2008, private nonprofit institutions and public institutions using the FASB form were given the opportunity to report using forms that had been modified to improve comparability with the GASB forms, or they could use forms with a structure consistent with that of prior years. In FY 2010, use of the forms with the older structure was discontinued, and all institutions used either the GASB or FASB forms that had been modified for comparability. Also in FY 2010, a new series of forms was introduced for non-degree-granting institutions, with versions for for-profit, FASB, and GASB reporting institutions. From FY 2000 through FY 2013, private for-profit institutions used a version of the FASB form with much less detail than the FASB form used by private nonprofit institutions. As of FY 2014, however, private for-profit institutions have been required to report the same level of detail as private nonprofit institutions.

Possible sources of nonsampling error in the financial statistics include nonresponse, imputation, and misclassification. The unweighted response rate was about 85 to 90 percent for most of the years these data appeared in NCES reports; in more recent years, however, response rates have been much higher because Title IV institutions are required to respond. Since 2002, the IPEDS data collection has been a full-scale web-based collection, which has improved the quality and timeliness of the data. For example, online data entry forms are tailored to each institution based on characteristics such as institutional control, level of institution, and calendar system, and institutions are able to submit their data online; both features have improved response.

The response rate for the FY 2022 Finance component was greater than 99 percent: Of the 6,037 institutions and administrative offices that were expected to respond, 14 did not respond, and these missing data were imputed.

Further information on the IPEDS Finance component may be obtained from

Aida Ali Akreyi
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
aida.ali-akreyi@ed.gov
https://nces.ed.gov/ipeds/


The Human Resources component was part of the IPEDS winter data collection from data collection years 2000–01 to 2011–12. For the 2012–13 data collection year, the Human Resources component was moved to the spring 2013 data collection in order to give institutions more time to prepare their survey responses.

IPEDS Collection Years, 2012–13 to Present

In 2012–13, new occupational categories replaced the primary function/occupational activity categories previously used in the IPEDS Human Resources component. This change was required in order to align the IPEDS Human Resources categories with the 2010 Standard Occupational Classification (SOC) system. In tandem with the change in 2012–13 from using primary function/occupational activity categories to using the new occupational categories, the sections making up the IPEDS Human Resources component (which previously had been Employees by Assigned Position, Fall Staff, and Salaries) were changed to Full-Time Instructional Staff, Full-Time Noninstructional Staff, Salaries, Part-Time Staff, and New Hires.

The webpages “Archived Changes—Changes to IPEDS Data Collections, 2012–13” (https://nces.ed.gov/ipeds/report-your-data/archived-changes/2012-13) and “2012–13 IPEDS Human Resources (HR) Occupational Categories Compared with 2011–12 IPEDS HR Primary Function/Occupational Activity Categories” (https://nces.ed.gov/ipeds/resource/download/IPEDS_HR_2012-13_compared_to_IPEDS_HR_2011-12.pdf) provide information on the redesign of the IPEDS Human Resources component initiated in the 2012–13 data collection year.

In 2018, an update to the Standard Occupational Classification (SOC) system was released. As a consequence, revisions were made to the occupational categories in the Human Resources component in the IPEDS spring 2019 data collection. These revisions are described on the webpage “Resources for Implementing Changes to the IPEDS Human Resources (HR) Survey Component Due to Updated 2018 Standard Occupational Classification (SOC) System” (https://nces.ed.gov/ipeds/report-your-data/taxonomies-standard-occupational-classification-soc-codes).

In the IPEDS spring 2022 data collection, the response rate for the Human Resources component was greater than 99 percent. Of the 6,041 institutions and administrative offices that were expected to respond, 9 institutions did not respond, and these missing data were imputed.

IPEDS Collection Years Prior to 2012–13

In collection years before 2001–02, IPEDS conducted a Fall Staff survey and a Salaries survey; in the 2001–02 collection year, the Employees by Assigned Position (EAP) survey was added to IPEDS. In the 2005–06 collection year, these three surveys became sections of the IPEDS “Human Resources” component.

Data gathered by the EAP section categorized all employees by full- or part-time status, faculty status, and primary function/occupational activity. Institutions with M.D. or D.O. programs were required to report their medical school employees separately.

The main functions/occupational activities of the EAP section were primarily instruction; instruction combined with research and/or public service; primarily research; primarily public service; executive/administrative/managerial; other professionals (support/service); graduate assistants; technical and paraprofessionals; clerical and secretarial; skilled crafts; and service/maintenance.

All full-time instructional faculty classified in the EAP full-time non-medical school part as either (1) primarily instruction or (2) instruction combined with research and/or public service were included in the Salaries section, unless they were exempt (i.e., unless they contributed their services, were employed on an ad hoc or occasional basis, or worked strictly in hospitals associated with medical schools).

The Fall Staff section categorized all staff on the institution’s payroll as of November 1 of the collection year by employment status (full time or part time), primary function/occupational activity, gender, and race/ethnicity. Title IV institutions and administrative offices were only required to respond to the Fall Staff section in odd-numbered reporting years, so they were not required to respond during the 2008–09 Human Resources data collection.

The Salaries section collected data for full-time instructional faculty (except those in medical schools in the EAP section, described above) on the institution’s payroll as of November 1 of the collection year by contract length/teaching period, gender, and academic rank. The reporting of data by faculty status in the Salaries section was required from 4-year degree-granting institutions and above only. Salary outlays and fringe benefits were also collected for full-time instructional staff on 9/10- and 11/12-month contracts/teaching periods. This section was applicable to degree-granting institutions unless exempt (i.e., unless they met one of the following exclusions: all instructional faculty were part time, all contributed their services, all were in the military, or all taught preclinical or clinical medicine).

Between 1966–67 and 1985–86, this survey differed from other HEGIS surveys in that imputations were not made for nonrespondents. Thus, the salary averages presented in this report may differ from the results of a complete enumeration of all colleges and universities. Beginning with the surveys for 1987–88, the IPEDS data tabulation procedures included imputations for survey nonrespondents. Imputation methods for the 2010–11 Salaries survey section are discussed in Employees in Postsecondary Institutions, Fall 2010, and Salaries of Full-Time Instructional Staff, 2010–11 (https://nces.ed.gov/pubs2012/2012276.pdf).

Further information on the Human Resources component may be obtained from

Samuel Barbett
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
samuel.barbett@ed.gov
https://nces.ed.gov/ipeds/

The National Assessment of Educational Progress (NAEP) is a series of cross-sectional studies initially implemented in 1969 to assess the educational achievement of U.S. students and monitor changes in those achievements. In the main national NAEP, a nationally representative sample of students is assessed at grades 4, 8, and 12 in various academic subjects. The assessment is based on frameworks developed by the National Assessment Governing Board (NAGB). It includes both multiple-choice items and constructed-response items (those requiring written answers). Results are reported in two ways: by average score and by achievement level. Average scores are reported for the nation, for participating states and jurisdictions, and for subgroups of the population. Percentages of students performing at or above three achievement levels (Basic, Proficient, and Advanced) are also reported for these groups.

Main NAEP Assessments

From 1990 until 2001, main NAEP was conducted for states and other jurisdictions that chose to participate. In 2002, under the provisions of the No Child Left Behind Act of 2001, all states began to participate in main NAEP, and an aggregate of all state samples replaced the separate national sample. (School district-level assessments—under the Trial Urban District Assessment [TUDA] program—also began in 2002.)

Results are available for the mathematics assessments administered in 1990, 1992, 1996, 2000, 2003, 2005, 2007, 2009, 2011, 2013, 2015, 2017, 2019, and 2022. In 2005, NAGB called for the development of a new mathematics framework. The revisions made to the mathematics framework for the 2005 assessment were intended to reflect recent curricular emphases and better assess the specific objectives for students at each grade level.

The revised mathematics framework focuses on two dimensions: mathematical content and cognitive demand. By considering these two dimensions for each item in the assessment, the framework ensures that NAEP assesses an appropriate balance of content, as well as a variety of ways of knowing and doing mathematics.

Since the 2005 changes to the mathematics framework were minimal for grades 4 and 8, comparisons over time can be made between assessments conducted before and after the framework’s implementation for these grades. The changes that the 2005 framework made to the grade 12 assessment, however, were too drastic to allow grade 12 results from before and after implementation to be directly compared. These changes included adding more questions on algebra, data analysis, and probability to reflect changes in high school mathematics standards and coursework; merging the measurement and geometry content areas; and changing the reporting scale from 0–500 to 0–300. For more information regarding the 2005 mathematics framework revisions, see https://nces.ed.gov/nationsreportcard/mathematics/frameworkcomparison.asp.

Results are available for the reading assessments administered in 1992, 1994, 1998, 2000, 2002, 2003, 2005, 2007, 2009, 2011, 2013, 2015, 2017, 2019, and 2022. In 2009, a new framework was developed for the 4th-, 8th-, and 12th-grade NAEP reading assessments.

Both a content alignment study and a reading trend, or bridge, study were conducted to determine whether the new reading assessment was comparable to the prior assessment. Overall, the results of the special analyses suggested that the assessments were similar in terms of their item and scale characteristics and the results they produced for important demographic groups of students. Thus, it was determined that the results of the 2009 reading assessment could still be compared to those from earlier assessment years, thereby maintaining the trend lines first established in 1992. For more information regarding the 2009 reading framework revisions, see https://nces.ed.gov/nationsreportcard/reading/whatmeasure.asp.

In spring 2013, NAEP released results from the NAEP 2012 economics assessment in The Nation’s Report Card: Economics 2012 (NCES 2013-453). First administered in 2006, the NAEP economics assessment measures 12th-graders’ understanding of a wide range of topics in three main content areas: market economy, national economy, and international economy. The 2012 assessment is based on a nationally representative sample of nearly 11,000 students in the 12th grade.

In The Nation’s Report Card: A First Look—2013 Mathematics and Reading (NCES 2014-451), NAEP released the results of the 2013 mathematics and reading assessments. Results can also be accessed using the interactive graphics and downloadable data available at the online Nation’s Report Card website (https://nationsreportcard.gov/reading_math_2013/).

The Nation’s Report Card: A First Look—2013 Mathematics and Reading Trial Urban District Assessment (NCES 2014-466) provides the results of the 2013 mathematics and reading TUDA, which measured the reading and mathematics progress of 4th- and 8th-graders from 21 urban school districts. Results from the 2013 mathematics and reading TUDA can also be accessed using the interactive graphics and downloadable data available at the online TUDA website (https://nationsreportcard.gov/reading_math_tuda_2013/).

The online interactive report The Nation’s Report Card: 2014 U.S. History, Geography, and Civics at Grade 8 (NCES 2015-112) provides grade 8 results for the 2014 NAEP U.S. history, geography, and civics assessments. Trend results for previous assessment years in these three subjects, as well as information on school and student participation rates and sample tasks and student responses, are also presented.

In 2014, the first administration of the NAEP Technology and Engineering Literacy (TEL) Assessment asked 8th-graders to respond to questions aimed at assessing their knowledge and skill in understanding technological principles, solving technology and engineering-related problems, and using technology to communicate and collaborate. The online report The Nation’s Report Card: Technology and Engineering Literacy (NCES 2016-119) presents national results for 8th-graders on the TEL assessment.

The Nation’s Report Card: 2015 Mathematics and Reading Assessments (NCES 2015-136) is an online interactive report that presents national and state results for 4th- and 8th-graders on the NAEP 2015 mathematics and reading assessments. The report also presents TUDA results in mathematics and reading for 4th- and 8th-graders. The online interactive report The Nation’s Report Card: 2015 Mathematics and Reading at Grade 12 (NCES 2016-108) presents grade 12 results from the NAEP 2015 mathematics and reading assessments.

Results from the 2015 NAEP science assessment are presented in the online report The Nation’s Report Card: 2015 Science at Grades 4, 8, and 12 (NCES 2016-162). The assessment measures the knowledge of 4th-, 8th-, and 12th-graders in the content areas of physical science, life science, and Earth and space sciences, as well as their understanding of four science practices (identifying science principles, using science principles, using scientific inquiry, and using technological design). National results are reported for grades 4, 8, and 12, and results from 46 participating states and one jurisdiction are reported for grades 4 and 8. Since a new NAEP science framework was introduced in 2009, results from the 2015 science assessment can be compared to results from the 2009 and 2011 science assessments, but cannot be compared to the science assessments conducted prior to 2009.

As a consequence of NAEP’s transition from paper-based assessments to technology-based assessments, data were needed regarding students’ access to and familiarity with technology, at home and at school. The Computer Access and Familiarity Study (CAFS) was designed to fulfill this need. CAFS was conducted as part of the main administration of the 2015 NAEP. A subset of the grade 4, 8, and 12 students who took the main NAEP were chosen to take the additional CAFS questionnaire. The main 2015 NAEP was administered in a paper-and-pencil format to some students and a digital-based format to others, and CAFS participants were given questionnaires in the same format as their NAEP questionnaires.

The online Highlights report 2017 NAEP Mathematics and Reading Assessments: Highlighted Results at Grades 4 and 8 for the Nation, States, and Districts (NCES 2018-037) presents an overview of results from the NAEP 2017 mathematics and reading reports. Highlighted results include key findings for the nation, states/jurisdictions, and 27 districts that participated in the Trial Urban District Assessment (TUDA) in mathematics and reading at grades 4 and 8.

Results from the NAEP 2018 TEL Assessment are contained in the online report The Nation's Report Card: Highlighted Results for the 2018 Technology and Engineering Literacy (TEL) Assessment at Grade 8 (NCES 2019-068). The digitally based assessment (participants took the assessment via laptop) was taken by approximately 15,400 eighth-graders from about 600 schools across the nation. Results were reported in terms of average scale scores (on a 0 to 300 scale) and in relation to the NAEP achievement levels NAEP Basic, NAEP Proficient, and NAEP Advanced.

The online reports 2019 NAEP Reading Assessment: Highlighted Results at Grades 4 and 8 for the Nation, States, and Districts and 2019 NAEP Mathematics Assessment: Highlighted Results at Grades 4 and 8 for the Nation, States, and Districts (NCES 2020-012) present overviews of results from the NAEP 2019 reading and mathematics reports. Highlighted results include key findings for the nation, states/jurisdictions, and 27 districts that participated in the Trial Urban District Assessment (TUDA) in mathematics and reading at grades 4 and 8.

Online highlights presenting overviews of grade 12 results from the NAEP 2019 mathematics report and the NAEP 2019 reading report can be found in 2019 NAEP Mathematics and Reading Assessments: Highlighted Results at Grade 12 for the Nation (NCES 2020-090).

The online report 2019 NAEP Science Assessment: Highlighted Results at Grades 4, 8, and 12 for the Nation (NCES 2021-045) presents an overview of results from the NAEP 2019 science report. Highlighted results include key findings for nationally representative samples of 4th-, 8th-, and 12th-grade students, presented in terms of average scale scores and as percentages of students performing at the three NAEP achievement levels.

The NAEP 2019 12th-grade mathematics and science assessment scores were linked to high school graduate transcript data collected in the 2019 NAEP High School Transcript Study. (Please see the “High School Transcript Studies” section, above, for additional information on the 2019 NAEP High School Transcript Study.) Results from the 2022 mathematics and reading assessments are presented in 2022 NAEP Mathematics Assessment: Highlighted Results at Grades 4 and 8 for the Nation, States, and Districts (NCES 2022-124) and 2022 NAEP Reading Assessment: Highlighted Results at Grades 4 and 8 for the Nation, States, and Districts (NCES 2022-126). These reports include national, state, and district results on the performance of fourth- and eighth-grade students. Results are provided in terms of average scores and as percentages of students performing at or above the three NAEP achievement levels: NAEP Basic, NAEP Proficient, and NAEP Advanced. Results are not only reported as overall scores; they are also reported by race/ethnicity, gender, type of school, and other demographic groups.

NAEP Long-Term Trend Assessments

In addition to the main assessments, NAEP conducts long-term trend assessments, which make it possible to track the educational progress in reading and mathematics of 9-, 13-, and 17-year-olds since the early 1970s. The long-term trend reading assessment measures students’ reading comprehension skills using an array of passages that vary by text type and length. The assessment was designed to measure students’ ability to locate specific information in the text provided; make inferences across a passage to provide an explanation; and identify the main idea in the text.

The NAEP long-term trend assessment in mathematics measures knowledge of mathematical facts; ability to carry out computations using paper and pencil; knowledge of basic formulas, such as those applied in geometric settings; and ability to apply mathematics to skills of daily life, such as those involving time and money.

The Nation’s Report Card: Trends in Academic Progress 2012 (NCES 2013-456) provides the results of 12 long-term trend reading assessments dating back to 1971 and 11 long-term trend mathematics assessments dating back to 1973.

The online report 2020 Long-Term Trend Reading and Mathematics Assessment Results at Age 9 and Age 13 (NCES 2021-077) presents the results of the NAEP long-term trend assessments in reading and mathematics administered during the 2019–20 school year to 9- and 13-year-old students. It provides trend results in terms of average scale scores, selected percentiles, and five performance levels.

Further information on NAEP may be obtained from

Emmanuel Sikali
Reporting and Dissemination Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
emmanuel.sikali@ed.gov
https://nces.ed.gov/nationsreportcard

The National Household Education Surveys Program (NHES) is a data collection system that is designed to address a wide range of education-related issues. Surveys have been conducted in 1991, 1993, 1995, 1996, 1999, 2001, 2003, 2005, 2007, 2012, 2016, and 2019. NHES targets specific populations for detailed data collection. It is intended to provide more detailed data on the topics and populations of interest than are collected through supplements to other household surveys.

The 2007 and earlier administrations of NHES used a random-digit-dial sample of landline phones and computer-assisted telephone interviewing to conduct interviews. However, due to declining response rates for all telephone surveys and the increase in households that only or mostly use a cell phone instead of a landline, the data collection method was changed to an address-based sample survey for NHES:2012. Because of this change in survey mode, readers should use caution when comparing NHES:2012 estimates to those of prior NHES administrations.

The topics addressed by NHES:1991 were early childhood education and adult education. About 60,000 households were screened for NHES:1991. In the Early Childhood Education Survey, about 14,000 parents/guardians of 3- to 8-year-olds completed interviews about their children’s early educational experiences. Included in this component were participation in nonparental care/education; care arrangements and school; and family, household, and child characteristics. In the NHES:1991 Adult Education Survey, about 9,800 people 16 years of age and over, identified as having participated in an adult education activity in the previous 12 months, were questioned about their activities. Data were collected on programs and up to four courses, including the subject matter, duration, sponsorship, purpose, and cost. Information on the household and the adult’s background and current employment was also collected.

In NHES:1993, nearly 64,000 households were screened. Approximately 11,000 parents of 3- to 7-year-olds completed interviews for the School Readiness Survey. Topics included the developmental characteristics of preschoolers; school adjustment and teacher feedback to parents for kindergartners and primary students; center-based program participation; early school experiences; home activities with family members; and health status. In the School Safety and Discipline Survey, about 12,700 parents of children in grades 3 to 12 and about 6,500 youth in grades 6 to 12 were interviewed about their school experiences. Topics included the school learning environment, discipline policy, safety at school, victimization, the availability and use of alcohol/drugs, and alcohol/drug education. Peer norms for behavior in school and substance use were also included in this topical component. Extensive family and household background information was collected, as well as characteristics of the school attended by the child.

In NHES:1995, the Early Childhood Program Participation Survey and the Adult Education Survey were similar to those fielded in 1991. In the Early Childhood component, about 14,000 parents of children from birth to 3rd grade were interviewed out of 16,000 sampled, for a completion rate of 90.4 percent. In the Adult Education Survey, about 24,000 adults were sampled and 82.3 percent (20,000) completed the interview.

NHES:1996 covered parent and family involvement in education and civic involvement. Data on homeschooling and school choice also were collected. The 1996 survey screened about 56,000 households. For the Parent and Family Involvement in Education Survey, nearly 21,000 parents of children in grades 3 to 12 were interviewed. For the Civic Involvement Survey, about 8,000 youth in grades 6 to 12, about 9,000 parents, and about 2,000 adults were interviewed. The 1996 survey also addressed public library use. Adults in almost 55,000 households were interviewed to support state-level estimates of household public library use.

NHES:1999 collected end-of-decade estimates of key indicators from the surveys conducted throughout the 1990s. Approximately 60,000 households were screened for a total of about 31,000 interviews with parents of children from birth through grade 12 (including about 6,900 infants, toddlers, and preschoolers) and adults age 16 or older not enrolled in grade 12 or below. Key indicators included participation of children in nonparental care and early childhood programs, school experiences, parent/family involvement in education at home and at school, youth community service activities, plans for future education, and adult participation in educational activities and community service.

NHES:2001 included two surveys that were largely repeats of similar surveys included in earlier NHES collections. The Early Childhood Program Participation Survey was similar in content to the Early Childhood Program Participation Survey fielded as part of NHES:1995, and the Adult Education and Lifelong Learning Survey was similar in content to the Adult Education Survey of NHES:1995. The Before- and After-School Programs and Activities Survey, while containing items fielded in earlier NHES collections, had a number of new items that collected information about what school-age children were doing during the time they spent in child care or in other activities, what parents were looking for in care arrangements and activities, and parent evaluations of care arrangements and activities. Parents of approximately 6,700 children from birth through age 6 who were not yet in kindergarten completed Early Childhood Program Participation Survey interviews. Nearly 10,900 adults completed Adult Education and Lifelong Learning Survey interviews, and parents of nearly 9,600 children in kindergarten through grade 8 completed Before- and After-School Programs and Activities Survey interviews.

NHES:2003 included two surveys: the Parent and Family Involvement in Education Survey and the Adult Education for Work-Related Reasons Survey (the first administration). Whereas previous adult education surveys were more general in scope, this survey had a narrower focus on occupation-related adult education programs. It collected in-depth information about training and education in which adults participated specifically for work-related reasons, either to prepare for work or a career or to maintain or improve work-related skills and knowledge they already had. The Parent and Family Involvement Survey expanded on the first survey fielded on this topic in 1996. In 2003, screeners were completed with 32,050 households. About 12,700 of the 16,000 sampled adults completed the Adult Education for Work-Related Reasons Survey, for a weighted response rate of 76 percent. For the Parent and Family Involvement in Education Survey, interviews were completed by the parents of about 12,400 of the 14,900 sampled children in kindergarten through grade 12, yielding a weighted unit response rate of 83 percent.

NHES:2005 included surveys that covered adult education, early childhood program participation, and after-school programs and activities. Data were collected from about 8,900 adults for the Adult Education Survey, from parents of about 7,200 children for the Early Childhood Program Participation Survey, and from parents of nearly 11,700 children for the After-School Programs and Activities Survey. These surveys were substantially similar to the surveys conducted in 2001, with the exceptions that the Adult Education Survey addressed a new topic—informal learning activities for personal interest—and the Early Childhood Program Participation Survey and After-School Programs and Activities Survey did not collect information about before-school care for school-age children.

NHES:2007 fielded the Parent and Family Involvement in Education Survey and the School Readiness Survey. These surveys were similar in design and content to surveys included in the 2003 and 1993 collections, respectively. New features added to the Parent and Family Involvement Survey were questions about supplemental education services provided by schools and school districts (including use of and satisfaction with such services), as well as questions that would efficiently identify the school attended by the sampled students. New features added to the School Readiness Survey were questions that collected details about TV programs watched by the sampled children. For the Parent and Family Involvement Survey, interviews were completed with parents of 10,680 sampled children in kindergarten through grade 12, including 10,370 students enrolled in public or private schools and 310 homeschooled children. For the School Readiness Survey, interviews were completed with parents of 2,630 sampled children ages 3 to 6 and not yet in kindergarten. Parents who were interviewed about children in kindergarten through 2nd grade for the Parent and Family Involvement Survey were also asked some questions about these children’s school readiness.

NHES:2012 included the Parent and Family Involvement in Education Survey and the Early Childhood Program Participation Survey. The Parent and Family Involvement in Education Survey gathered data on students age 20 or younger who were enrolled in kindergarten through grade 12 or who were homeschooled at equivalent grade levels. Survey questions pertaining to students enrolled in kindergarten through grade 12 requested information on various aspects of parent involvement in education (such as help with homework, family activities, and parent involvement at school), while questions pertaining to homeschooled students requested information on the student’s homeschooling experiences, the sources of the curriculum, and the reasons for homeschooling.

The 2012 Parent and Family Involvement in Education Survey questionnaires were completed for 17,563 (397 homeschooled and 17,166 enrolled) children, for a weighted unit response rate of 78.4 percent. The overall estimated weighted unit response rate (the product of the screener weighted unit response rate of 73.8 percent and the Parent and Family Involvement in Education Survey weighted unit response rate) was 57.8 percent.

The 2012 Early Childhood Program Participation Survey collected data on the early care and education arrangements and early learning of children from birth through the age of 5 who were not yet enrolled in kindergarten. Questionnaires were completed for 7,893 children, for a weighted unit response rate of 78.7 percent. The overall estimated weighted unit response rate (the product of the screener weighted unit response rate of 73.8 percent and the Early Childhood Program Participation Survey unit weighted response rate) was 58.1 percent. 
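As the parentheticals above indicate, each overall rate is simply the product of the screener rate and the topical-survey rate. A minimal sketch of that calculation (using the rounded published rates, so results can differ from the published overall rates by a tenth of a percentage point):

```python
def overall_response_rate(screener_rate: float, topical_rate: float) -> float:
    """Overall estimated unit response rate: the product of the two stage rates."""
    return screener_rate * topical_rate

# NHES:2012 Early Childhood Program Participation Survey
# (screener 73.8 percent, topical survey 78.7 percent)
rate = overall_response_rate(0.738, 0.787)
print(f"{rate:.1%}")  # → 58.1%, matching the published overall rate
```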

NHES:2016 used a nationally representative address-based sample covering the 50 states and the District of Columbia. The 2016 administration of NHES included a screener survey and three topical surveys: the Parent and Family Involvement in Education Survey, the Early Childhood Program Participation Survey, and the Adult Training and Education Survey. The screener survey questionnaire identified households with children under age 20 and adults ages 16 to 65. A total of 206,000 households were sampled for the screener, and the screener response rate was 66.4 percent. All sampled households received initial contact by mail. Although the majority of respondents completed paper questionnaires, a small sample of cases was part of a web experiment with mailed invitations to complete the survey online.

The 2016 Parent and Family Involvement in Education Survey, like its predecessor in 2012, gathered data about students age 20 or under who were enrolled in kindergarten through grade 12 or who were being homeschooled for the equivalent grades. The 2016 survey’s questions also covered aspects of parental involvement in education similar to those in the 2012 survey. The total number of completed questionnaires in the 2016 survey was 14,075 (13,523 enrolled and 552 homeschooled children), representing a population of 53.2 million students either homeschooled or enrolled in a public or private school in 2015–16. The survey’s weighted unit response rate was 74.3 percent, and the overall response rate was 49.3 percent.

The 2016 Early Childhood Program Participation Survey collected data about children from birth through age 6 who were not yet enrolled in kindergarten. The survey asked about children’s participation in relative care, nonrelative care, and center-based care arrangements. It also requested information such as the main reason for choosing care, factors that were important to parents when choosing a care arrangement, the primary barriers to finding satisfactory care, activities the family does with the child, and what the child is learning. Questionnaires were completed for 5,844 children, representing a population of 21.4 million children from birth through age 6 who were not yet enrolled in kindergarten. The Early Childhood Program Participation Survey weighted unit response rate was 73.4 percent and the overall estimated weighted unit response rate (the product of the screener weighted unit response rate and the Early Childhood Program Participation Survey weighted unit response rate) was 48.7 percent.

The third topical survey of NHES:2016 was a new survey, the Adult Training and Education Survey. The survey collected information from noninstitutionalized adults ages 16 to 65 who were not enrolled in high school; it also collected information from adults living at residential addresses associated with educational institutions such as colleges (thus, it collected information from enrolled college students). One of the main goals of the Adult Training and Education Survey is to measure the prevalence of nondegree credentials, including estimates of adults with occupational certifications or licenses as well as adults with postsecondary educational certificates. A further goal is to learn more about work experience programs. The survey’s data, when weighted, were nationally representative of noninstitutionalized adults ages 16 to 65 not enrolled in grade 12 or below. The total number of completed questionnaires was 47,744, representing a population of 196.3 million adults. The survey had a weighted response rate of 73.1 percent and an overall response rate of 48.5 percent.

Data for the three topical surveys in the 2016 administration of NHES are available in Parent and Family Involvement in Education: Results From the National Household Education Surveys Program of 2016 (NCES 2017-102); Early Childhood Program Participation, Results From the National Household Education Surveys Program of 2016 (NCES 2017-101); and Adult Training and Education: Results From the National Household Education Surveys Program of 2016 (NCES 2017-103rev). In addition, public-use data for the three 2016 surveys are available at https://nces.ed.gov/nhes/dataproducts.asp.

NHES:2019 was a two-phase survey conducted primarily on the web, although a portion of the sample completed a paper-based version. It included a screener survey and two topical surveys: the Early Childhood Program Participation Survey and the Parent and Family Involvement in Education Survey. The screener questionnaire identified households with children or youth under age 20. A total of 205,000 households were selected for the screener, and the screener response rate was 63.1 percent. The survey was initiated with the mailing of a contact letter inviting respondents to complete the screener questionnaire. The topical questionnaires were provided in both paper and web versions.

The 2019 Early Childhood Program Participation Survey focused on children age 6 or younger who were not yet enrolled in kindergarten. The survey questionnaire covered children’s participation in early education and care arrangements provided by relatives or nonrelatives in private homes, center-based day care, or preschool programs (including Head Start). Additional topics included family learning activities, early literacy and numeracy skills, out-of-pocket expenses for nonparental care and education, factors related to parents’ selection of providers, and parents’ perceptions of care and education quality. Parents were also asked about child characteristics, including the child’s health and disability status; characteristics of the child’s parents or guardians who live in the household; and household characteristics. Questionnaires were completed for 7,092 children. The survey’s weighted unit response rate was 85.5 percent, and the overall response rate was 54.0 percent.

The 2019 Parent and Family Involvement in Education Survey focused on children and youth age 20 or younger who were in kindergarten through 12th grade and either attending a public or private school or being homeschooled. Parents of enrolled students were asked about school choice, parent and family involvement at school, the child’s behavior at school, grade retention, parents’ satisfaction with the school, and family involvement in schoolwork and activities outside of school. Parents of homeschooled students were asked to identify the person primarily responsible for homeschooling the child and to indicate the amount of time the child is homeschooled, the subjects covered, the parents’ reasons for homeschooling, and the resources (including the internet) used for homeschooling. In addition, for the first time in the survey, parents of children who attended online or virtual schools were asked about their reasons for choosing an online or virtual school and the cost of that type of schooling. Questionnaires were completed for 16,446 children. The survey’s weighted unit response rate was 83.4 percent, and the overall response rate was 52.6 percent.
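As defined earlier for NHES:2016, the overall response rate is the product of the screener response rate and a topical survey’s weighted unit response rate. A minimal sketch of that arithmetic, using the 2019 figures reported above:

```python
# Overall response rate = screener response rate x topical weighted unit
# response rate (the product definition given for NHES).
screener_rate = 0.631  # NHES:2019 screener response rate

# Weighted unit response rates for the two 2019 topical surveys
topical_rates = {
    "Early Childhood Program Participation": 0.855,
    "Parent and Family Involvement in Education": 0.834,
}

for survey, rate in topical_rates.items():
    overall = screener_rate * rate
    print(f"{survey}: {overall:.1%}")  # 54.0% and 52.6%, matching the report
```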

Public-use data for the 2019 Early Childhood Program Participation Survey and 2019 Parent and Family Involvement in Education Survey are available at https://nces.ed.gov/nhes/dataproducts.asp#2019dp.

Further information on NHES may be obtained from

Michelle McNamara
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
michelle.mcnamara@ed.gov
https://nces.ed.gov/nhes

CLOSE

The National Postsecondary Student Aid Study (NPSAS) is a comprehensive nationwide study of how students and their families pay for postsecondary education. Data gathered from the study are used to help guide future federal student financial aid policy. The study is conducted with nationally representative samples of undergraduates, graduates, and first-professional students in the 50 states, the District of Columbia, and Puerto Rico, including students attending less-than-2-year institutions, community colleges, and 4-year colleges and universities. Participants include both students who receive financial aid and those who do not. Since NPSAS identifies nationally representative samples of student subpopulations of interest to policymakers and obtains baseline data for longitudinal study of these subpopulations, data from the study provide the base-year sample for the Beginning Postsecondary Students Longitudinal Study (BPS) and the Baccalaureate and Beyond Longitudinal Study (B&B).

Originally, NPSAS was conducted every 3 years. Beginning with the 1999–2000 study (NPSAS:2000), NPSAS has been conducted every 4 years. NPSAS:08 included a new set of instrument items to obtain baseline measures of the awareness of two new federal grants introduced in 2006: the Academic Competitiveness Grant (ACG) and the National Science and Mathematics Access to Retain Talent (SMART) grant.

The first NPSAS (NPSAS:87) was conducted during the 1986–87 school year. Data were gathered from about 1,100 colleges, universities, and other postsecondary institutions; 60,000 students; and 14,000 parents. These data provided information on the cost of postsecondary education, the distribution of financial aid, and the characteristics of both aided and nonaided students and their families.

NPSAS:90 included a stratified sample of approximately 69,000 eligible students (about 47,000 of whom were undergraduates) from about 1,100 institutions. For each of the students included in the NPSAS sample, there were up to three sources of data. First, institution registration and financial aid records were extracted. Second, a Computer Assisted Telephone Interview (CATI) designed for each student was conducted. Finally, a CATI designed for the parents or guardians of a subsample of students was conducted. The purpose of the parent survey was to obtain detailed information on the family and economic characteristics of dependent students who did not receive financial aid, especially first-time, first-year students. In keeping with this purpose, parents of financially independent students who were over 30 years of age and parents of graduate/first-professional students were excluded from the sample. Data from these three sources were synthesized into a single system with an overall response rate of 89 percent.

For NPSAS:93, information on 77,000 undergraduates and graduate students enrolled during the school year was collected at 1,000 postsecondary institutions. The sample included students who were enrolled at any time between July 1, 1992, and June 30, 1993. About 66,000 students and a subsample of their parents were interviewed by telephone. NPSAS:96 contained information on more than 48,000 undergraduate and graduate students from about 1,000 postsecondary institutions who were enrolled at any time during the 1995–96 school year. NPSAS:2000 included nearly 62,000 students (50,000 undergraduates and almost 12,000 graduate students) from 1,000 postsecondary institutions. NPSAS:04 collected data on about 80,000 undergraduates and 11,000 graduate students from 1,400 postsecondary institutions. For NPSAS:08, about 114,000 undergraduate students and 14,000 graduate students who were enrolled in postsecondary education during the 2007–08 school year were selected from more than 1,730 postsecondary institutions.

NPSAS:12 sampled about 95,000 undergraduates and 16,000 graduate students from approximately 1,500 postsecondary institutions.

NPSAS:16 sampled about 89,000 undergraduate and 24,000 graduate students attending approximately 1,800 Title IV eligible postsecondary institutions in the 50 states, the District of Columbia, and Puerto Rico. The sample represents approximately 20 million undergraduate and 4 million graduate students enrolled in postsecondary education at Title IV eligible institutions at any time between July 1, 2015, and June 30, 2016. Public access to the data is available online through PowerStats (https://nces.ed.gov/datalab/).

In NPSAS:18-AC, a special Administrative Collection of NPSAS, data were collected from the institutions attended by the enrolled students and from other relevant databases, including U.S. Department of Education records on student loan and grant programs—the students themselves were not interviewed. Data included were from approximately 245,600 undergraduate and 21,300 graduate students attending 1,900 institutions in the 50 states, the District of Columbia, and Puerto Rico who were enrolled at Title IV eligible institutions between July 1, 2017, and June 30, 2018. Additional information about this survey is available in 2017–18 National Postsecondary Student Aid Study, Administrative Collection (NPSAS:18-AC): First Look at Student Financial Aid Estimates for 2017–18 (NCES 2021-476).

Further information on NPSAS may be obtained from

Aurora D’Amico
Tracy Hunt-White
Longitudinal Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
aurora.damico@ed.gov
tracy.hunt-white@ed.gov
https://nces.ed.gov/npsas/

CLOSE

The National Teacher and Principal Survey is a set of related questionnaires that collect descriptive data on the context of elementary and secondary education. Data reported by schools, principals, and teachers provide a variety of statistics on the condition of education in the United States that may be used by policymakers and the general public. The NTPS system covers a wide range of topics, including teacher demand, teacher and principal characteristics, teachers’ and principals’ perceptions of school climate and problems in their schools, teacher and principal compensation, general conditions in schools, and basic characteristics of the student population.

The NTPS is a redesign of the Schools and Staffing Survey (SASS), which was conducted from the 1987–88 school year to the 2011–12 school year. Although the NTPS maintains the SASS survey’s focus on schools, teachers, and administrators, the NTPS has a different structure and sample than SASS. In addition, whereas SASS operated on a 4-year survey cycle, the NTPS operates on a 2- or 3-year survey cycle. The NTPS universe of schools is confined to the 50 states plus the District of Columbia. It excludes the Department of Defense dependents schools overseas, schools in U.S. territories overseas, and CCD schools that do not offer teacher-provided classroom instruction in grades 1–12 or the ungraded equivalent. Bureau of Indian Education schools are included in the NTPS universe, but these schools were not oversampled and the data do not support separate BIE estimates.

The NTPS includes three key components: school questionnaires, principal questionnaires, and teacher questionnaires. NTPS data are collected by the U.S. Census Bureau through mail and online questionnaires with telephone and in-person field follow-up. The school and principal questionnaires were sent to sampled schools, and the teacher questionnaire was sent to a sample of teachers working at sampled schools. Teachers associated with a selected school were sampled from a list of teachers that was provided by the school, collected from school websites, or purchased from a vendor.

The school questionnaire asks knowledgeable school staff members about grades offered, student attendance and enrollment, staffing patterns, teaching vacancies, programs and services offered, curriculum, and community service requirements. In addition, basic information is collected about the school year, including the beginning time of students’ school days and the length of the school year. To reduce burden on sampled schools, some topics are included in every NTPS administration, while others are included only in every other administration.

The principal questionnaire collects information about principal/school head demographic characteristics, training, experience, salary, goals for the school, and judgments about school working conditions and climate. Information is also obtained on professional development opportunities for teachers and principals, teacher performance, barriers to dismissal of underperforming teachers, school climate and safety, parent/guardian participation in school events, and attitudes about educational goals and school governance. To reduce burden on sampled principals, some topics are included in every NTPS administration, while others are included only in every other administration.

The teacher questionnaire collects data from teachers about their current teaching assignment, workload, education history, and perceptions and attitudes about teaching. Questions are also asked about teacher preparation, induction, organization of classes, computers, and professional development. To reduce burden on sampled teachers, some topics are included in every NTPS administration, while others are included only in every other administration.

The NTPS was first conducted during the 2015–16 school year. In the 2015–16 NTPS, the school sample consisted of about 8,300 public schools; the principal sample consisted of about 8,300 public school principals; and the teacher sample consisted of about 50,000 public school teachers. Weighted unit response rates were 72.5 percent for the school survey, 71.8 percent for the principal survey, and 67.8 percent for the teacher survey.

Whereas the 2015–16 NTPS surveyed only schools, teachers, and principals in the public sector, the 2017–18 NTPS surveyed schools, teachers, and principals in both the public and private sectors. The selected samples included about 10,600 traditional and charter public schools and their principals, 60,000 public school teachers, 4,000 private schools and their principals, and 9,600 private school teachers.

Weighted unit response rates for the 2017–18 NTPS were 72.5 percent for the public school survey and 64.5 percent for the private school survey, 70.2 percent for the public school principal survey and 62.6 percent for the private school principal survey, and 76.9 percent for the public school teacher survey and 75.9 percent for the private school teacher survey.

As with the 2017–18 NTPS, the 2020–21 NTPS surveyed schools, teachers, and principals in both the public and private sectors. The selected samples included about 9,900 public schools and their principals, 68,300 public school teachers, 3,000 private schools and their principals, and 8,000 private school teachers. Data collection was conducted during the COVID-19 pandemic, which affected school operations starting in March 2020. Items about how schools first adapted to COVID-19 during the spring of 2020 were included on the School, Principal, and Teacher Questionnaires. An item was also included at the beginning of each questionnaire that asked about the current operational effect of COVID-19 on instruction at the school at the time the survey was completed during the 2020–21 school year.

Weighted unit response rates for the 2020–21 NTPS were 65.6 percent for public schools and 61.4 percent for private schools; 68.0 percent for public school principals and 61.7 percent for private school principals; and 55.0 percent for public school teachers and 43.5 percent for private school teachers.

Some NTPS estimates in the Condition of Education 2023 have been revised from previously published figures due to corrections made to the school-level variable in NTPS.

General information on NTPS and electronic copies of the questionnaires are available at the NTPS home page (https://nces.ed.gov/surveys/ntps).

Further information on the NTPS program can be obtained from

Maura Spiegelman
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
maura.spiegelman@ed.gov
https://nces.ed.gov/surveys/ntps

CLOSE

The Principal Follow-up Survey (PFS), originally a component of the Schools and Staffing Survey (SASS) and currently a component of the National Teacher and Principal Survey (NTPS), was created to provide attrition rates for principals in K–12 schools. It assesses how many principals remain at the same school from one year to the next, how many become principals at a different school, and how many are no longer working as principals.

The 2012–13 PFS sample consisted of schools that had returned a completed 2011–12 SASS principal questionnaire. These schools were mailed the 2012–13 PFS form in March 2013. The sample included about 7,500 public schools and 1,700 private schools; the survey consisted of a single item and had a response rate of nearly 100 percent.

The 2016–17 PFS sample consisted of schools that had returned a completed 2015–16 NTPS principal questionnaire. These schools were mailed the 2016–17 PFS form in March 2017. The sample included about 5,700 public schools. (The 2016–17 PFS did not include private schools because these schools were not included in the 2015–16 NTPS.) The survey consisted of a single item and had a response rate of about 95 percent.

Further information on the PFS may be obtained from

Julia Merlin
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
julia.merlin@ed.gov
https://nces.ed.gov/surveys/ntps/overview.asp?OverviewType=6

CLOSE

The purposes of the Private School Universe Survey (PSS) data collection activities are (1) to build an accurate and complete list of private schools to serve as a sampling frame for NCES sample surveys of private schools and (2) to report data on the total number of private schools, teachers, and students in the survey universe. Since its inception in 1989, the survey has been conducted every 2 years. Selected findings from the 2019–20 PSS are presented in the First Look report Characteristics of Private Schools in the United States: Results From the 2019–20 Private School Universe Survey (NCES 2021-061).

The PSS produces data similar to those of the Common Core of Data for public schools and can be used for public-private comparisons. The data are useful for a variety of policy- and research-relevant issues, such as the growth of religiously affiliated schools, the number of private high school graduates, the length of the school year for various private schools, and the number of private school students and teachers.

The target population for this universe survey is all private schools in the United States that meet the PSS criteria of a private school (i.e., the private school is an institution that provides instruction for any of grades K through 12, has one or more teachers to give instruction, is not administered by a public agency, and is not operated in a private home).

The survey universe is composed of schools identified from a variety of sources. The main source is a list frame initially developed for the 1989–90 PSS. The list is updated regularly by matching it with lists provided by nationwide private school associations, state departments of education, and other national guides and sources that list private schools. The other source is an area frame search in approximately 124 geographic areas, conducted by the U.S. Census Bureau.

Of the 40,302 schools included in the 2009–10 sample, 10,229 were considered out-of-scope (not eligible for the PSS). Of the remainder, 28,217 schools responded and 1,856 did not. The unweighted response rate for the 2009–10 PSS was 93.8 percent.

Of the 39,325 schools included in the 2011–12 sample, 10,030 cases were considered as out-of-scope (not eligible for the PSS). A total of 26,983 private schools completed a PSS interview (15.8 percent completed online), while 2,312 schools refused to participate, resulting in an unweighted response rate of 92.1 percent.

There were 40,298 schools in the 2013–14 sample; of these, 10,659 were considered as out-of-scope (not eligible for the PSS). A total of 24,566 private schools completed a PSS interview (34.1 percent completed online), while 5,073 schools refused to participate, resulting in an unweighted response rate of 82.9 percent.

The 2015–16 PSS included 42,389 schools, of which 12,754 were considered as out-of-scope (not eligible for the PSS). A total of 22,428 private schools completed a PSS interview and 7,207 schools failed to respond, which resulted in an unweighted response rate of 75.7 percent.

Of the 43,384 schools included in the 2017–18 sample, 15,272 cases were considered as out-of-scope (not eligible for the PSS). A total of 22,895 private schools completed a PSS interview, while 5,217 schools refused to participate, resulting in an unweighted response rate of 81.4 percent.

There were 42,836 schools in the 2019–20 sample; of these, 13,895 were considered as out-of-scope (not eligible for the PSS). A total of 21,572 private schools completed a PSS interview, while 7,369 failed to respond, resulting in an unweighted response rate of 74.5 percent.
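The unweighted response rates reported above all follow the same arithmetic: responding schools divided by in-scope schools (schools sampled minus those found out-of-scope). A quick sketch reproducing the published figures from the counts above:

```python
# PSS unweighted response rate = respondents / (sampled - out of scope).
# Counts are taken from the PSS administrations described above.
pss_counts = {
    "2009-10": (40302, 10229, 28217),  # (sampled, out of scope, responded)
    "2011-12": (39325, 10030, 26983),
    "2013-14": (40298, 10659, 24566),
    "2015-16": (42389, 12754, 22428),
    "2017-18": (43384, 15272, 22895),
    "2019-20": (42836, 13895, 21572),
}

for year, (sampled, out_of_scope, responded) in pss_counts.items():
    rate = responded / (sampled - out_of_scope)
    print(f"{year}: {rate:.1%}")
# Reproduces the published rates: 93.8, 92.1, 82.9, 75.7, 81.4, and 74.5 percent
```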

Further information on the PSS may be obtained from

Ryan Iaconelli
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
ryan.iaconelli@ed.gov
https://nces.ed.gov/surveys/pss/

CLOSE

Since 1964, NCES has published projections of key statistics for elementary and secondary schools and higher education institutions. The Condition of Education 2023 includes projections through 2031, which can be found in the Digest of Education Statistics. Each edition of the Projections provides revisions to estimates from the year before. These revisions can result from planned model improvements or from incidental changes to the underlying data that feed the models. Because the long-term impacts of the coronavirus pandemic are uncertain, NCES made minimal changes to the projection models in favor of consistency. However, due to the disruption of the pandemic, changes to the underlying data were more pronounced than usual. For additional information on prior editions of the projections, see Projections of Education Statistics to 2028 (NCES 2020-024).

The Projections of Education Statistics series provides national data for elementary and secondary enrollment, high school graduates, elementary and secondary teachers, expenditures for public elementary and secondary education, enrollment in postsecondary degree-granting institutions, and postsecondary degrees conferred. The report also provides state-level projections for public elementary and secondary enrollment and public high school graduates.

Differences between the reported and projected values are, of course, almost inevitable. In Projections of Education Statistics to 2028, an evaluation of past projections revealed that, at the elementary and secondary levels, projections of public school enrollments have been quite accurate: mean absolute percentage differences for enrollment in public schools ranged from 0.3 to 1.2 percent for projections from 1 to 5 years in the future, while those for teachers in public schools were 3.0 percent or less. At the higher education level, projections of enrollment have been fairly accurate: mean absolute percentage differences were reported as 5.9 percent or less for projections from 1 to 5 years into the future in Projections of Education Statistics to 2026 (NCES 2018-019). (Projections of Education Statistics to 2027 and Projections of Education Statistics to 2028 did not report mean absolute percentage errors for institutions at the higher education level because enrollment projections were calculated using a new model.)
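The accuracy measure cited above, the mean absolute percentage difference, averages |projected − actual| / actual across the years being evaluated. A minimal sketch of the calculation; the enrollment figures below are hypothetical, not taken from the report:

```python
# Mean absolute percentage difference between projected and actual values,
# the accuracy measure used to evaluate past Projections editions.
def mean_abs_pct_diff(actual, projected):
    return 100 * sum(abs(p - a) / a for a, p in zip(actual, projected)) / len(actual)

# Hypothetical public school enrollments in millions (illustrative only)
actual = [50.3, 50.6, 50.8]
projected = [50.1, 50.9, 51.4]
print(f"{mean_abs_pct_diff(actual, projected):.1f} percent")
```

For these illustrative values the result is under 1 percent, in line with the 0.3 to 1.2 percent range the report cites for short-term public school enrollment projections.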

Further information on Projections of Education Statistics may be obtained from

Véronique Irwin
Annual Reports and Information Staff
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
veronique.irwin@ed.gov
https://nces.ed.gov/pubs2020/2020024.pdf

CLOSE

The School Pulse Panel is a study collecting information on the impact of the COVID-19 pandemic from a national sample of elementary, middle, high, and combined-grade public schools. Some survey questions are asked repeatedly to observe trends over time, while others examine unique topics in a single month.

The survey asks questions about topics such as instructional mode offered; enrollment counts of subgroups of students using various instructional modes; learning loss mitigation strategies; safe and healthy school mitigation strategies; special education services; use of technology; and information on staffing.

A sample of approximately 2,400 public elementary, middle, high, and combined-grade schools was selected to participate in a panel in which school and district staff were asked to provide requested data monthly during the 2021–22 school year. While the results from the School Pulse Panel have been weighted and adjusted for nonresponse, these experimental data should be interpreted with caution. Experimental data may not meet all NCES quality standards.

As part of a post-release quality evaluation of School Pulse Panel (SPP) data, an error was uncovered in the survey weighting procedure. This required a reweighting of the data and a recalculation of estimates released from the January 2022 through December 2022 SPP collections. Estimates in the Condition of Education 2023 have been revised as of August 2023, based on the reweighted data. For a description of the reweighting and its effect on the estimates, see the memo at https://ies.ed.gov/schoolsurvey/spp/ReweightingMemo.pdf.

Further information on the School Pulse Panel may be obtained from

 

https://ies.ed.gov/schoolsurvey/spp/

https://nces.ed.gov/surveys/spp/staff.asp

CLOSE

The School Survey on Crime and Safety (SSOCS) is the only recurring federal survey that collects detailed information on the incidence, frequency, seriousness, and nature of violence affecting students and school personnel, as well as other indicators of school safety from the schools’ perspective. SSOCS is conducted by the National Center for Education Statistics (NCES) within the U.S. Department of Education and collected by the U.S. Census Bureau. Data from this collection can be used to examine the relationship between school characteristics and violent and serious violent crimes in primary, middle, high, and combined schools. In addition, data from SSOCS can be used to assess what crime prevention programs, practices, and policies are used by schools. SSOCS has been conducted in school years 1999–2000, 2003–04, 2005–06, 2007–08, 2009–10, 2015–16, 2017–18, and 2019–20.

The sampling frame for SSOCS:2020 was constructed from a preliminary version of the 2020–21 National Teacher and Principal Survey (NTPS) Universe File, which was created from the 2017–18 Common Core of Data (CCD) Public Elementary/Secondary School Universe data file. The sampling frame was restricted to regular public schools, charter schools, and schools with partial or total magnet programs in the 50 states and the District of Columbia. It excluded special education schools, vocational schools, alternative schools, virtual schools, newly closed schools, home schools, ungraded schools, schools with a highest grade of kindergarten or lower, Department of Defense Education Activity schools, and Bureau of Indian Education schools, as well as schools in Puerto Rico, American Samoa, the Northern Marianas, Guam, and the U.S. Virgin Islands.

The SSOCS:2020 universe totaled 83,852 schools. The SSOCS:2020 findings were based on a stratified random sample of 4,800 public schools. Data collection for SSOCS:2020 began on February 13, 2020. Although SSOCS data collections typically use mail, telephone, and email to send reminders, the emergence of the coronavirus pandemic caused SSOCS:2020 to switch to primarily email operations from mid-March through the end of data collection in mid-October. A total of 2,370 public schools provided complete SSOCS:2020 questionnaires, yielding a weighted response rate of 54 percent.

Further information about SSOCS may be obtained from

Deanne Swan
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
deanne.swan@ed.gov
https://nces.ed.gov/surveys/ssocs/

CLOSE

Other Department of Education Agencies and Programs

The EDFacts Initiative

EDFacts is a centralized data collection through which state education agencies (SEAs) submit PK–12 education data to the U.S. Department of Education (ED). All data in EDFacts are organized into “data groups” and reported to ED using defined file specifications. Depending on the data group, SEAs may submit aggregate counts for the state as a whole or detailed counts for individual schools or school districts. EDFacts does not collect student-level records. The entities that are required to report EDFacts data vary by data group but may include the 50 states, the District of Columbia, the Department of Defense Education Activity, the Bureau of Indian Education, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. More information about EDFacts file specifications and data groups can be found at https://www2.ed.gov/about/inits/ed/edfacts/index.html.

EDFacts is a universe collection and is not subject to sampling error, although nonsampling errors such as nonresponse and inaccurate reporting may occur. ED attempts to minimize nonsampling errors by training data submission coordinators and reviewing the quality of state data submissions. However, anomalies may still be present in the data.

Differences in state data collection systems may limit the comparability of EDFacts data across states and across time. To build EDFacts files, SEAs rely on data that were reported by their schools and school districts. The systems used to collect these data are evolving rapidly and differ from state to state. For example, there is a large shift in California’s firearm incident data between 2010–11 and 2011–12. California cited a new student data system that more accurately collects firearm incident data as the reason for the magnitude of the difference.

In some cases, EDFacts data may not align with data reported on SEA websites. States may update their websites on schedules different from those they use to report data to ED. Furthermore, ED may use methods for protecting the privacy of individuals represented within the data that could be different from the methods used by an individual state.

Data on English language learners enrolled in public schools and four-year adjusted cohort graduation rates are collected by EDFacts on behalf of the Office of Elementary and Secondary Education.

For more information about EDFacts, please contact

Ross Santy
Administrative Data Division
National Center for Education Statistics
U.S. Department of Education
550 12th Street SW
Washington, DC 20202
ross.santy@ed.gov
https://www2.ed.gov/about/inits/ed/edfacts/index.html

CLOSE

NAEP Monthly School Survey

The Monthly School Survey Dashboard was created as part of a pilot study to provide data for understanding the learning opportunities schools offered during the COVID-19 pandemic. The pilot study is a key element of the response to the Executive Order on Supporting the Reopening and Continuing Operation of Schools and Early Childhood Education Providers (https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/21/executive-order-supporting-the-reopening-and-continuing-operation-of-schools-and-early-childhood-education-providers/). Results from this pilot study can be used in analyses of 2022 National Assessment of Educational Progress (NAEP) data, contributing additional contextual factors for understanding the educational outcomes of the nation’s 4th- and 8th-grade students. The survey collected data five times, once a month from February through June of 2021.

Further information on the NAEP Monthly School Survey may be obtained from

https://ies.ed.gov/schoolsurvey/mss-dashboard/
https://ies.ed.gov/schoolsurvey/mss-dashboard/about.aspx


Campus Safety and Security Survey

The Campus Safety and Security Survey is administered by the Office of Postsecondary Education. Since 1990, all postsecondary institutions participating in Title IV student financial aid programs have been required to comply with the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act, known as the Clery Act. Originally, Congress enacted the Crime Awareness and Campus Security Act, which was amended in 1992, 1998, 2000, 2008, and 2013. The 1998 amendments renamed the law the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act. The Clery Act requires schools to give timely warnings of crimes to the student body and staff; to publicize campus crime and safety policies; and to collect, report, and disseminate campus crime data.

Crime statistics are collected and disseminated by campus security authorities. These authorities include campus police; nonpolice security staff responsible for monitoring campus property; municipal, county, or state law enforcement agencies with institutional agreements for security services; individuals and offices designated by the campus security policies as those to whom crimes should be reported; and officials of the institution with significant responsibility for student and campus activities. The act requires disclosure for offenses committed at geographic locations associated with each institution. For on-campus crimes, this includes property and buildings owned or controlled by the institution. In addition to on-campus crimes, the act requires disclosure of crimes committed in or on a noncampus building or property owned or controlled by the institution for educational purposes or for recognized student organizations, and on public property within or immediately adjacent to and accessible from the campus.

There are three types of statistics described in this report: criminal offenses; arrests for illegal weapons possession and violation of drug and liquor laws; and disciplinary referrals for illegal weapons possession and violation of drug and liquor laws. Criminal offenses include homicide, sex offenses, robbery, aggravated assault, burglary, motor vehicle theft, and arson. Only the most serious offense is counted when more than one offense was committed during an incident. The two other categories, arrests and referrals, include counts for illegal weapons possession and violation of drug and liquor laws. Arrests and referrals cover only violations of the law, not violations of institutional policies alone. If no federal, state, or local law was violated, these events are not reported. Further, if an individual is arrested and referred for disciplinary action for an offense, only the arrest is counted. Arrest is defined to include persons processed by arrest, citation, or summons, including those arrested and released without formal charges being placed. Referral for disciplinary action is defined to include persons referred to any official who initiates a disciplinary action of which a record is kept and which may result in the imposition of a sanction. Referrals may or may not involve the police or other law enforcement agencies.
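
The "most serious offense" counting rule above can be sketched in a few lines of Python. The severity ranking below simply follows the order in which offenses are listed in the text and is illustrative only; it is not an official Clery Act hierarchy, and the function is not part of any actual reporting system.

```python
# Illustrative sketch of the "most serious offense" counting rule.
# The ranking follows the order of offenses listed in the text above;
# it is NOT an official Clery Act hierarchy.
SEVERITY_ORDER = [
    "homicide",
    "sex offense",
    "robbery",
    "aggravated assault",
    "burglary",
    "motor vehicle theft",
    "arson",
]

def most_serious_offense(offenses: list[str]) -> str:
    """Return the single offense that would be counted for an incident
    involving several offenses (lower index = more serious here)."""
    return min(offenses, key=SEVERITY_ORDER.index)

# An incident involving both a burglary and a robbery is counted once,
# as a robbery.
counted = most_serious_offense(["burglary", "robbery"])
```

The same "count only one event" logic applies across categories: an individual who is both arrested and referred for the same offense appears only in the arrest count.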

All criminal offenses and arrests may include students, faculty, staff, and the general public. These offenses may or may not involve students who are enrolled in the institution. Referrals primarily deal with persons formally associated with the institution (i.e., students, faculty, and staff).

Campus security and police statistics do not necessarily reflect the total amount or even the nature of crime on campus. Rather, they reflect incidents that have been reported to and recorded by campus security and/or local police. The process of reporting and recording alleged criminal incidents involves some well-known social filters and steps, beginning with the victim. First, the victim or some other party must recognize that a possible crime has occurred and report the event. The event must then be recorded, and if it is recorded, the nature and type of offense must be classified. This classification may differ from the initial report due to the collection of additional evidence, interviews with witnesses, or officer discretion. Also, the date an incident is reported may be much later than the date of the actual incident. For example, a victim may not realize something was stolen until much later, or a victim of violence may wait a number of days to report a crime. Other factors also affect the probability that an incident is reported, including the severity of the event, the victim’s confidence in and prior experience with the police or security agency, and influence from third parties (e.g., friends and family knowledgeable about the incident). Finally, the reader should be mindful that these figures represent alleged criminal offenses reported to campus security and/or local police within a given year; they do not necessarily reflect prosecutions or convictions.

More information on the reporting of campus crime and safety data may be obtained from the Clery Act Appendix for FSA Handbook, at https://www2.ed.gov/admins/lead/safety/cleryappendixfinal.pdf.

Policy Coordination, Development, and Accreditation Service
Office of Postsecondary Education
U.S. Department of Education
https://ope.ed.gov/campussafety/#/

Campus Safety and Security Help Desk
(800) 435-5985
CampusSafetyHelp@westat.com



Title II of the Higher Education Act of 1965

Title II of the Higher Education Act of 1965, as amended (HEA), requires that states and teacher preparation providers annually report information about teacher preparation programs and their quality to the U.S. Department of Education. Such reports include information about the number of individuals who enrolled in teacher preparation programs, the number who completed the programs, and demographic characteristics of enrollees and completers. The information collected also includes the number of teacher preparation providers and programs, teacher preparation program entry and exit requirements, the criteria used by states to assess performance of the programs, and the number of programs that are low performing or at risk of being low performing based on the state’s criteria. In addition, information on state requirements for an initial teaching credential, test-taker performance on initial teaching credential assessments, and the number of individuals who received initial teaching credentials is reported.

Data about teacher preparation and certification, along with technical assistance materials to support the collection, analysis, and reporting of Title II data, are publicly available at https://title2.ed.gov/Public/Home.aspx.


Annual Report to Congress on the Implementation of the Individuals with Disabilities Education Act

The Individuals with Disabilities Education Act (IDEA) is a law ensuring services to children with disabilities throughout the nation. IDEA governs how states and public agencies provide early intervention, special education, and related services to more than 7.5 million eligible infants, toddlers, children, and youth with disabilities.

IDEA, formerly the Education of the Handicapped Act (EHA), requires the Secretary of Education to transmit, on an annual basis, a report to Congress describing the progress made in serving the nation’s children with disabilities. This annual report contains information on children served by public schools under the provisions of Part B of IDEA and on children served in state-operated programs for persons with disabilities under Chapter I of the Elementary and Secondary Education Act.

Statistics on children receiving special education and related services in various settings and school personnel providing such services are reported in an annual submission of data to the Office of Special Education Programs (OSEP) by the 50 states, the District of Columbia, the Bureau of Indian Education schools, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, the U.S. Virgin Islands, the Federated States of Micronesia, Palau, and the Marshall Islands. The child count information is based on the number of children with disabilities receiving special education and related services on December 1 of each year. Count information is available from https://ideadata.org/.

Since all participants in programs for persons with disabilities are reported to OSEP, the data are not subject to sampling error. However, nonsampling error can arise from a variety of sources. Some states produce counts of students receiving special education services by disability category only because Part B of IDEA requires it. In states that produce such counts regardless of IDEA requirements, definitions and labeling practices vary.

Further information on this annual report to Congress may be obtained from

Office of Special Education Programs
Office of Special Education and Rehabilitative Services
U.S. Department of Education
400 Maryland Avenue SW
Washington, DC 20202
https://www2.ed.gov/about/reports/annual/osep/index.html
https://sites.ed.gov/idea/
https://ideadata.org/


Other Governmental Agencies and Programs

A division of the U.S. Department of Justice Office of Justice Programs, the Bureau of Justice Statistics (BJS) collects, analyzes, publishes, and disseminates statistical information on crime, criminal offenders, victims of crime, and the operations of the justice system at all levels of government and internationally. It also provides technical and financial support to state governments for development of criminal justice statistics and information systems on crime and justice.

For information on the BJS, see https://www.bjs.gov/.

National Crime Victimization Survey

The Bureau of Justice Statistics’ National Crime Victimization Survey (NCVS) is an annual data collection carried out by the U.S. Census Bureau. The NCVS is the nation’s primary source of information on criminal victimization. Each year, data are obtained from a nationally representative sample of about 240,000 persons in about 150,000 U.S. households. Persons are interviewed on the frequency, characteristics, and consequences of criminal victimization in the United States. The survey has been ongoing since 1973.

The NCVS is administered to persons age 12 or older from a nationally representative sample of U.S. households. It collects information on nonfatal personal crimes (rape or sexual assault, robbery, aggravated assault, simple assault, and personal larceny [purse snatching and pocket picking]) and household property crimes (burglary or trespassing, motor vehicle theft, and other types of theft). It collects information on threatened, attempted, and completed crimes. Survey respondents provide information about themselves (e.g., age, sex, race and Hispanic origin, marital status, education level, and income) and whether they experienced a victimization. For each victimization incident, respondents report information about the offender (including age, sex, race, Hispanic origin, and victim-offender relationship), characteristics of the crime (including time and place of occurrence, use of weapons, nature of injury, and economic consequences), whether the crime was reported to police, reasons the crime was or was not reported, and experiences with the criminal justice system.

Information about the sampled household is collected from a reference person, who is a responsible adult member of the household who is not likely to permanently leave the household. This includes information on household-level demographic characteristics (e.g., income) and property victimizations. A household is defined as a group of persons who all reside at a sampled address. Once selected, households remain in the sample for 3½ years, and eligible persons in these households are interviewed every 6 months for a total of 7 interviews. First interviews are typically conducted in person, with subsequent interviews conducted either in person or by phone. New households rotate into the sample on an ongoing basis to replace outgoing households that have been in the sample for the full 3½-year period. The sample includes persons living in group quarters, such as dormitories, rooming houses, and religious group dwellings, and excludes persons living on military bases or in institutional settings such as correctional or hospital facilities.

The 2020 NCVS data file includes 138,327 household interviews. Overall, 67 percent of eligible households completed interviews. Within participating households, interviews with 223,079 persons were completed in 2020, representing an 82 percent unweighted response rate among eligible persons from responding households. Victimizations that occurred outside of the United States were excluded from this report. In 2020, about 0.4 percent of the unweighted victimizations occurred outside of the United States. NCVS data are weighted to produce annual estimates of victimization for persons age 12 or older living in U.S. households. Because the NCVS relies on a sample rather than a census of the entire U.S. population, weights are designed to adjust to known population totals and to compensate for survey nonresponse and other aspects of the complex sample design.

Victimization weights used in this report account for the number of persons victimized during an incident and for high-frequency repeat victimizations (i.e., series victimizations). Series victimizations are similar in type to one another but occur with such frequency that a victim is unable to recall each individual event or describe each event in detail. Survey procedures allow NCVS interviewers to identify and classify these similar victimizations as series victimizations and to collect detailed information on only the most recent incident in the series.

The weighting counts series victimizations as the actual number of victimizations reported by the victim, up to a maximum of 10. Doing so produces more reliable estimates of crime levels than counting such victimizations only once, while the cap at 10 minimizes the effect of extreme outliers on rates. According to the 2020 data, series victimizations accounted for 1.1 percent of all victimizations and 2.7 percent of all violent victimizations. Additional information on the enumeration of series victimizations is detailed in the report Methods for Counting High-Frequency Repeat Victimizations in the National Crime Victimization Survey (Bureau of Justice Statistics, April 2012, NCJ 237308).
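
The counting rule above reduces to a one-line function; the name and interface below are illustrative and are not part of BJS’s actual weighting code.

```python
def series_victimization_count(reported_incidents: int, cap: int = 10) -> int:
    """Count a series victimization as the number of incidents the victim
    reported, capped at 10 to limit the influence of extreme outliers."""
    return min(reported_incidents, cap)

# A victim reporting 6 similar incidents contributes 6 victimizations to
# the weighted totals; a victim reporting 25 contributes only 10.
contributions = [series_victimization_count(n) for n in (1, 6, 25)]
```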

The 2020 NCVS weights include an additional adjustment to address the impact of modified field operations due to COVID-19. In addition, beginning in 2020, BJS incorporated another factor to moderate the contribution of outlier weights on NCVS estimates. For more information on both adjustments, see the Source and Accuracy Statement for the 2020 National Crime Victimization Survey in the NCVS 2020 Codebook (https://www.icpsr.umich.edu/web/NACJD/series/95) and Criminal Victimization, 2020 (Bureau of Justice Statistics, October 2021, NCJ 301775).

Every 10 years, the NCVS sample is redesigned to reflect changes in the population. Due to a sample increase and redesign in 2016, victimization estimates among youth were not comparable to estimates for other years and are not available. For more information on the redesign, see Criminal Victimization, 2016: Revised (Bureau of Justice Statistics, October 2018, NCJ 252121). In the 2006 NCVS, changes in the sample design and survey methodology affected the survey’s estimates. Caution should be used when comparing 2006 estimates to estimates of other years. For more information on the 2006 NCVS data, see Criminal Victimization, 2006 (Bureau of Justice Statistics, December 2007, NCJ 219413) and Criminal Victimization, 2007 (Bureau of Justice Statistics, December 2008, NCJ 224390).

In 2003, in accordance with changes to the U.S. Office of Management and Budget’s standards for classifying federal data on race and ethnicity, the NCVS item on race/ethnicity was modified. Due to changes in race/ethnicity categories, comparisons of race/ethnicity across years should be made with caution.

Generalized variance function (GVF) parameters were used to generate standard errors for each estimate (e.g., numbers and rates) in this report. To generate standard errors around victimization estimates from the NCVS, the U.S. Census Bureau produces GVF parameters for BJS. The GVFs account for aspects of the NCVS’s complex sample design and represent the curve fitted to a selection of individual standard errors based on the Balanced Repeated Replication technique. BJS conducted statistical tests to determine whether differences in estimated numbers and rates in this report were statistically significant once sampling error was considered. Findings described in this report as higher, lower, or different passed a test at the 0.05 level (95 percent confidence level) of significance.
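
The GVF approach and the 0.05-level comparisons described above can be sketched as follows. A common GVF specification for an estimated count is SE(x) = sqrt(a*x^2 + b*x); the actual parameters a and b are published for each NCVS data year, and the values used in the example below are purely hypothetical.

```python
import math

def gvf_standard_error(estimate: float, a: float, b: float) -> float:
    """A common GVF form for an estimated count: SE(x) = sqrt(a*x**2 + b*x).
    The parameters a and b are fitted by the Census Bureau for each NCVS
    data year; the values passed below are hypothetical."""
    return math.sqrt(a * estimate**2 + b * estimate)

def significantly_different(x1: float, se1: float, x2: float, se2: float,
                            z_crit: float = 1.96) -> bool:
    """Two-sample z-test at the 0.05 level, the kind of comparison behind
    the report's "higher," "lower," and "different" statements."""
    z = abs(x1 - x2) / math.sqrt(se1**2 + se2**2)
    return z > z_crit

# Hypothetical GVF parameters for illustration only
se = gvf_standard_error(1_000_000, a=1.0e-7, b=3000)
```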

Further information on the NCVS may be obtained from

Alexandra Thompson
Victimization Statistics Branch
Bureau of Justice Statistics
alexandra.thompson@usdoj.gov
https://bjs.ojp.gov/

School Crime Supplement

Created as a supplement to the NCVS and codesigned by the National Center for Education Statistics and Bureau of Justice Statistics, the School Crime Supplement (SCS) survey has been conducted in 1989, 1995, and biennially since 1999 to collect additional information about school-related victimizations on a national level. This report includes data from the 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, 2017, and 2019 collections. The 1989 data are not included in this report as a result of methodological changes to the NCVS and SCS. The SCS was designed to assist policymakers, as well as academic researchers and practitioners at federal, state, and local levels, to make informed decisions concerning crime in schools. The survey asks students a number of key questions about their experiences with and perceptions of crime and violence that occurred inside their school, on school grounds, on the school bus, or on the way to or from school. Students are asked additional questions about security measures used by their school, students’ participation in after-school activities, students’ perceptions of school rules, the presence of weapons and gangs in school, the presence of hate-related words and graffiti in school, student reports of bullying and reports of rejection at school, and the availability of drugs and alcohol in school. Students are also asked attitudinal questions relating to fear of victimization and avoidance behavior at school.

The SCS survey was conducted for a 6-month period from January through June in all households selected for the NCVS (see discussion above for information about the NCVS sampling design and changes to the race/ethnicity variable beginning in 2003). Within these households, the eligible respondents for the SCS were those household members who had attended school at any time during the 6 months preceding the interview, were enrolled in grades 6–12, and were not homeschooled. In 2007, the questionnaire was changed and household members who attended school sometime during the school year of the interview were included. The age range of students covered in this report is 12–18 years of age. Eligible respondents were asked the supplemental questions in the SCS only after completing their entire NCVS interview. It should be noted that the first or unbounded NCVS interview has always been included in analysis of the SCS data and may result in the reporting of events outside of the requested reference period.

The prevalence of victimization for 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, 2017, and 2019 was calculated by using NCVS incident variables appended to the SCS data files of the same year. The NCVS type of crime variable was used to classify victimizations of students in the SCS as serious violent, violent, or theft. The NCVS variables asking where the incident happened (at school) and what the victim was doing when it happened (attending school or on the way to or from school) were used to ascertain whether the incident happened at school. Only incidents that occurred inside the United States are included.

In 2001, the SCS survey instrument was modified from previous collections. First, in 1995 and 1999, “at school” was defined for respondents as in the school building, on the school grounds, or on a school bus. In 2001, the definition for “at school” was changed to mean in the school building, on school property, on a school bus, or going to and from school. This change was made to the 2001 questionnaire in order to be consistent with the definition of “at school” as it is constructed in the NCVS and was also used as the definition in subsequent SCS collections. Cognitive interviews conducted by the U.S. Census Bureau on the 1999 SCS suggested that modifications to the definition of “at school” would not have a substantial impact on the estimates.

Shown in table A, below, are the number of students participating, household completion rates, student completion rates, and overall unit response rates in the SCS from 1995 to 2019:

Table A. Student participation in the School Crime Supplement (SCS) by number participating, household completion rate, student completion rate, and overall unit response rate: Selected years, 1995 to 2019

SCS collection year Number participating Household completion rate (percent) Student completion rate (percent) Overall unit response rate (percent)1
1995 9,700 95 78 74
1999 8,400 94 78 73
2001 8,400 93 77 72
2003 7,200 92 70 64
2005 6,300 91 62 56
2007 5,600 90 58 53
2009 5,000 92 56 51
2011 6,500 91 63 57
2013 5,700 86 60 51
2015 5,500 82 58 48
2017 7,100 77 52 40
2019 7,000 73 49 36

1 The overall unit response rate is calculated by multiplying the household completion rate by the student completion rate. Prior to 2011, overall SCS unit response rates were unweighted; starting in 2011, overall SCS unit response rates are weighted.

SOURCE: U.S. Department of Justice, Bureau of Justice Statistics, School Crime Supplement (SCS) to the National Crime Victimization Survey, 1995 through 2019.
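
Footnote 1’s formula can be checked directly. Because the published component rates are rounded to whole percentages, a recomputed product can differ from the published overall rate by about a point (e.g., the 2013 row).

```python
def overall_unit_response_rate(household_pct: float, student_pct: float) -> float:
    """Overall unit response rate (percent), per footnote 1 of table A:
    household completion rate multiplied by student completion rate."""
    return household_pct * student_pct / 100.0

# 2019 collection: 73 percent x 49 percent = 35.77, published as 36.
rate_2019 = overall_unit_response_rate(73, 49)
# 1995 collection: 95 percent x 78 percent = 74.1, published as 74.
rate_1995 = overall_unit_response_rate(95, 78)
```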

There are two types of nonresponse: unit and item nonresponse. NCES requires that any stage of data collection within a survey that has a unit base-weighted response rate of less than 85 percent be evaluated for the potential magnitude of unit nonresponse bias before the data or any analysis using the data may be released (NCES Statistical Standards, 2002, at https://nces.ed.gov/statprog/2002/std4_4.asp). Due to the low unit response rates in 2005, 2007, 2009, 2011, 2013, 2015, 2017, and 2019, a unit nonresponse bias analysis was conducted for each of those years. Unit response rates indicate how many sampled units have completed interviews. Because interviews with students could be completed only after households had responded to the NCVS, the unit completion rate for the SCS reflects both the household interview completion rate and the student interview completion rate.

Nonresponse can greatly affect the strength and application of survey data by increasing variance (as a result of the reduction in the effective sample size) and by producing bias if nonrespondents differ from respondents on characteristics of interest. For nonresponse bias to occur, subgroups must differ both in their response rates and in their responses to particular survey variables. The magnitude of unit nonresponse bias is determined by the response rate and the differences between respondents and nonrespondents on key survey variables. Although the bias analysis cannot measure response bias directly, since the SCS is a sample survey and it is not known how the full population would have responded, the SCS sampling frame has several key student and school characteristic variables for which data are known for both respondents and nonrespondents: sex, age, race/ethnicity, household income, region, and urbanicity, all of which are associated with student victimization. To the extent that response rates differ across these groups, nonresponse bias is a concern.

In 2005, the analysis of unit nonresponse bias found evidence of bias for the race, household income, and urbanicity variables. White (non-Hispanic) and Other (non-Hispanic) respondents had higher response rates than Black (non-Hispanic) and Hispanic respondents. Respondents from households with an income of $35,000–$49,999 and $50,000 or more had higher response rates than those from households with incomes of less than $7,500, $7,500–$14,999, $15,000–$24,999, and $25,000–$34,999. Respondents who live in urban areas had lower response rates than those who live in rural or suburban areas. Although the extent of nonresponse bias cannot be determined, weighting adjustments, which corrected for differential response rates, should have reduced the problem.

In 2007, the analysis of unit nonresponse bias found evidence of bias by the race/ethnicity and household income variables. Hispanic respondents had lower response rates than other races/ethnicities. Respondents from households with an income of $25,000 or more had higher response rates than those from households with incomes of less than $25,000. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

In 2009, the analysis of unit nonresponse bias found evidence of potential bias for the race/ethnicity and urbanicity variables. White students and students of other races/ethnicities had higher response rates than did Black and Hispanic respondents. Respondents from households located in rural areas had higher response rates than those from households located in urban areas. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

In 2011, the analysis of unit nonresponse bias found evidence of potential bias for the age variable. Respondents 12 to 17 years old had higher response rates than did 18-year-old respondents in the NCVS and SCS interviews. Weighting the data adjusts for unequal selection probabilities and for the effects of nonresponse. The weighting adjustments that correct for differential response rates are created by region, age, race, and sex, and should have reduced the effect of nonresponse.

In 2013, the analysis of unit nonresponse bias found evidence of potential bias for the age, region, and Hispanic origin variables in the NCVS interview response. Within the SCS portion of the data, only the age and region variables showed significant unit nonresponse bias. Further analysis indicated only the age 14 and the west region categories showed positive response biases that were significantly different from some of the other categories within the age and region variables. Based on the analysis, nonresponse bias seems to have little impact on the SCS results.

In 2015, the analysis of unit nonresponse bias found evidence of potential bias for age, race, Hispanic origin, urbanicity, and region in the NCVS interview response. For the SCS interview, the age, race, urbanicity, and region variables showed significant unit nonresponse bias. The age 14 group and rural areas showed positive response biases that were significantly different from other categories within the age and urbanicity variables. The northeast region and Asian race group showed negative response biases that were significantly different from other categories within the region and race variables. These results provide evidence that these subgroups may have a nonresponse bias associated with them.

In 2017, the analysis of unit nonresponse bias found that the race/ethnicity and census region variables showed significant differences in response rates between different race/ethnicity and census region subgroups. Respondent and nonrespondent distributions were significantly different for the race/ethnicity subgroup only. However, after using weights adjusted for person nonresponse, there was no evidence that these response differences introduced nonresponse bias in the final victimization estimates.

In 2019, the nonresponse bias analysis found significant differences in response rates and in respondent and nonrespondent distributions between region and the race/ethnicity demographic subgroups. However, nonresponse weighting adjustments are expected to minimize these differences. The modeled estimates show no evidence of nonresponse bias in the SCS estimates before or after nonresponse weighting adjustments, but there could still be bias in these estimates because the response data available to conduct the study is unable to capture responses from a large portion of the supplement’s target population.

Further information about the SCS may be obtained from

Deanne Swan
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
deanne.swan@ed.gov
https://nces.ed.gov/programs/crime/


Consumer Price Indexes

The Consumer Price Index (CPI) represents changes in prices of all goods and services purchased for consumption by urban households. Indexes are available for two population groups: a CPI for All Urban Consumers (CPI-U) and a CPI for Urban Wage Earners and Clerical Workers (CPI-W). Unless otherwise specified, data in this report are adjusted for inflation using the CPI-U. These values are generally adjusted to a school-year basis by averaging the July through June figures. Price indexes are available for the United States, the 4 Census regions, 9 Census divisions, 2 size of city classes, 8 cross-classifications of regions and size-classes, and 23 local areas. The CPI is used mainly as an economic indicator, as a deflator of other economic series, and as a means of adjusting income.
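
The school-year averaging and inflation adjustment described above can be sketched as follows; the index values in the example are hypothetical, not actual CPI-U figures.

```python
def school_year_index(monthly_cpi: list[float]) -> float:
    """Average 12 monthly CPI-U values, July through June, to put the
    index on a school-year basis."""
    if len(monthly_cpi) != 12:
        raise ValueError("expected 12 monthly values, July through June")
    return sum(monthly_cpi) / 12.0

def to_constant_dollars(amount: float, index_then: float, index_now: float) -> float:
    """Express a historical dollar amount in the price level of a later
    period by scaling with the ratio of the two index values."""
    return amount * index_now / index_then

# Hypothetical index values for illustration only
base_year = school_year_index([130.0 + 0.2 * m for m in range(12)])  # 131.1
adjusted = to_constant_dollars(10_000, index_then=base_year, index_now=280.0)
```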

Also available is the Consumer Price Index research series using current methods (CPI-U-RS), which presents an estimate of the CPI-U from 1978 to the present that incorporates most of the improvements the Bureau of Labor Statistics has made over that time span into the entire series. The historical price index series of the CPI-U does not reflect these changes, though they do make the present and future CPI more accurate. The limitations of the CPI-U-RS include considerable uncertainty surrounding the magnitude of the adjustments and the fact that several CPI improvements have not been incorporated into the CPI-U-RS for various reasons. Nonetheless, the CPI-U-RS can serve as a valuable proxy for researchers needing a historical estimate of inflation using current methods. This series has not been used in NCES tables.

Further information on consumer price indexes may be obtained from

Bureau of Labor Statistics
U.S. Department of Labor
2 Massachusetts Avenue NE
Washington, DC 20212
https://www.bls.gov/cpi/

Employment and Unemployment Surveys

Statistics on the employment and unemployment status of the population and related data are compiled by the Bureau of Labor Statistics (BLS) using data from the Current Population Survey (CPS) (see below) and other surveys. The CPS, a monthly household survey conducted by the U.S. Census Bureau for the Bureau of Labor Statistics, provides a comprehensive body of information on the employment and unemployment experience of the nation’s population, classified by age, sex, race, and various other characteristics.

Further information on unemployment surveys may be obtained from

Bureau of Labor Statistics
U.S. Department of Labor
2 Massachusetts Avenue NE
Washington, DC 20212
cpsinfo@bls.gov
https://www.bls.gov/bls/employment.htm

American Community Survey

The Census Bureau introduced the American Community Survey (ACS) in 1996. Fully implemented in 2005, it provides a large monthly sample of demographic, socioeconomic, and housing data comparable in content to the long form of the decennial census, which was last used in 2000. Aggregated over time, these data serve as a replacement for the decennial census long form. The survey includes questions mandated by federal law, federal regulations, and court decisions.

The survey is currently mailed to approximately 295,000 addresses in the United States and Puerto Rico each month, or about 3.5 million addresses annually. A larger proportion of addresses in small governmental units (e.g., American Indian reservations, small counties, and towns) also receive the survey. The monthly sample size is designed to approximate the ratio used in the 2000 Census, which requires more intensive distribution in these areas. The ACS covers the U.S. resident population, which includes the entire civilian, noninstitutionalized population; incarcerated persons; institutionalized persons; and the active duty military who are in the United States. In 2006, the ACS began collecting data from the population living in group quarters. Institutionalized group quarters include adult and juvenile correctional facilities, nursing facilities, and other health care facilities. Noninstitutionalized group quarters include college and university housing, military barracks, and other noninstitutional facilities such as workers’ and religious group quarters and temporary shelters for the homeless.

National-level data from the ACS are available from 2000 onward. The ACS normally produces 1-year estimates for jurisdictions with populations of 65,000 and over using data collected between January 1 and December 31 of the data year. But the impacts of the COVID-19 pandemic on data collection resulted in 1-year estimates for 2020 that did not meet Census Bureau standards. Consequently, the Census Bureau did not release its standard 1-year estimates from the 2020 ACS. It released experimental estimates developed from 2020 ACS 1-year data instead. The Census Bureau urges caution in using the experimental estimates as a replacement for standard 2020 ACS 1-year estimates (https://www.census.gov/programs-surveys/acs/technical-documentation/user-notes/2021-02.html). The 1-year estimates for 2021 were released in the usual sequence-based format as well as in a new table-based format.

The ACS also produces 5-year estimates for jurisdictions with populations smaller than 65,000. The 5-year estimates for 2016–2020 used data collected between January 1, 2016, and December 31, 2020. Notwithstanding the impacts of the COVID-19 pandemic on data collection in 2020, the Census Bureau determined that a revision in the methodology used for the 2016–2020 ACS 5-year estimates produced data that met Census Bureau standards for public release (https://www.census.gov/programs-surveys/acs/technical-documentation/user-notes/2021-02.html). The 5-year estimates for 2017–2021 were released in the usual sequence-based format as well as in a new table-based format.

The ACS produced 3-year estimates (for jurisdictions with populations of 20,000 or over) for the periods 2005–2007, 2006–2008, 2007–2009, 2008–2010, 2009–2011, 2010–2012, and 2011–2013. Three-year estimates for these periods will continue to be available to data users, but no further 3-year estimates will be produced.

Further information about the ACS is available at https://www.census.gov/programs-surveys/acs/.

Census of Population—Education in the United States

Some NCES tables are based on a part of the decennial census that consisted of questions asked of a 1 in 6 sample of people and housing units in the United States. This sample was asked more detailed questions about income, occupation, and housing costs, as well as questions about general demographic information. This decennial census “long form” is no longer used; it has been replaced by the American Community Survey (ACS).

School enrollment. People classified as enrolled in school reported attending a “regular” public or private school or college. They were asked whether the institution they attended was public or private and what level of school they were enrolled in.

Educational attainment. Data for educational attainment were tabulated for people ages 15 and over and classified according to the highest grade completed or the highest degree received. Instructions were also given to include the level of the previous grade attended or the highest degree received for people currently enrolled in school.

Poverty status. To determine poverty status, answers to income questions were used to make comparisons to the appropriate poverty threshold. All people except those who were institutionalized, people in military group quarters and college dormitories, and unrelated people under age 15 were considered. If the total income of each family or unrelated individual in the sample was below the corresponding cutoff, that family or individual was classified as “below the poverty level.”
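The comparison described above reduces to checking each family's total income against the cutoff for its size; a minimal sketch, with hypothetical threshold values rather than official Census Bureau cutoffs (real thresholds also vary by number of children):

```python
# Hypothetical poverty thresholds by family size; placeholders only,
# not official Census Bureau values.
THRESHOLDS = {1: 13_000, 2: 17_000, 3: 21_000, 4: 27_000}

def below_poverty_level(total_family_income, family_size):
    """Classify a family as below the poverty level if its total income
    falls short of the cutoff for a family of that size."""
    return total_family_income < THRESHOLDS[family_size]
```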

Further information on the 1990 and 2000 Census of Population may be obtained from

Population Division
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
https://www.census.gov/main/www/cen1990.html
https://www.census.gov/main/www/cen2000.html

Current Population Survey

The Current Population Survey (CPS) is a monthly survey of about 50,000 households conducted by the U.S. Census Bureau for the Bureau of Labor Statistics. The CPS is the primary source of labor force statistics on the U.S. population. In addition, supplemental questionnaires are used to provide further information about the U.S. population. The March supplement (also known as the Annual Social and Economic [ASEC] supplement) contains detailed questions on topics such as income, employment, and educational attainment; additional questions, such as items on disabilities, have also been included. In the November supplement, items on computer and internet use are the principal focus. The October supplement also contains some questions about computer and internet use, but most of its questions relate to school enrollment and school characteristics.

CPS samples are initially selected based on results from the decennial census and are periodically updated to reflect new housing construction. The current sample design for the main CPS, last revised in July 2015, includes about 70,000 households. Each month, about 50,000 of the 70,000 households are interviewed. Information is obtained each month from those in the household who are 15 years of age and over, and demographic data are collected for children 0–14 years of age. In addition, supplemental questions regarding school enrollment are asked about eligible household members age 3 and over in the October CPS supplement.

In January 1992, the CPS educational attainment variable was changed. The “Highest grade attended” and “Year completed” questions were replaced by the question “What is the highest level of school ... has completed or the highest degree ... has received?” Thus, for example, while the old questions elicited data for those who completed more than 4 years of high school, the new question elicited data for those who were high school completers, that is, those who graduated from high school with a diploma as well as those who completed high school through equivalency programs, such as a GED program.

A major redesign of the CPS was implemented in January 1994 to improve the quality of the data collected. Survey questions were revised, new questions were added, and computer-assisted interviewing methods were used for the survey data collection. Further information about the redesign is available in Current Population Survey, October 1995: (School Enrollment Supplement) Technical Documentation at https://www.census.gov/prod/techdoc/cps/cpsoct95.pdf.

Caution should be used when comparing data from 2012 through 2021 (which reflect 2010 Census-based controls) with data from 2002 through 2011 (which reflect 2000 Census-based controls) and with data from 2001 and earlier (which reflect population controls based on the 1990 and earlier Censuses). Changes in population controls generally have relatively little impact on summary measures such as means, medians, and percentage distributions; they can, however, have a significant impact on population counts. For example, use of 2010 census-based population controls results in about a 0.2 percent increase from the 2000 Census-based controls in the civilian noninstitutionalized population and in the number of families and households. Thus, estimates of levels for data collected in 2012 and later years will differ from those for earlier years by more than what could be attributed to actual changes in the population. These differences could be disproportionately greater for certain subpopulation groups than for the total population.

Caution should also be exercised when comparing March CPS (ASEC) estimates from data collected in 2020 and 2021 to those from previous years due to the effects that the coronavirus (COVID-19) had on interviewing and response rates. Interviewing for the March CPS began on March 15, 2020. In order to protect the health and safety of Census Bureau staff and respondents, the survey suspended in-person interviewing and closed the two computer-assisted telephone interviewing (CATI) contact centers on March 20. For the rest of March and through April, the Census Bureau continued to attempt all interviews by phone. While the Census Bureau went to great lengths to complete interviews by telephone, the response rate for the CPS basic household survey in March 2020 was 73 percent, about 10 percentage points lower than in preceding months and in the same period in 2019.

In 2021, for the safety of both interviewers and respondents, in-person interviews were only conducted when telephone interviews could not be done. The response rate for the CPS basic household survey declined from about 76 percent in March 2021 to 72 percent in March 2022.

Beginning in 2003, the race/ethnicity questions were expanded. Information on people of Two or more races was included, and the Asian and Pacific Islander race category was split into two categories—Asian and Native Hawaiian or Other Pacific Islander. In addition, questions were reworded to make it clear that self-reported data on race/ethnicity should reflect the race/ethnicity with which the respondent identifies, rather than what may be written in official documentation.

The estimation procedure employed for monthly CPS data involves inflating weighted sample results to independent estimates of characteristics of the civilian noninstitutional population in the United States by age, sex, and race. These independent estimates are based on statistics from decennial censuses; statistics on births, deaths, immigration, and emigration; and statistics on the population in the armed services. Generalized standard error tables are provided in the Current Population Reports; methods for deriving standard errors can be found within the CPS technical documentation at https://www.census.gov/programs-surveys/cps/technical-documentation/complete.html. The CPS data are subject to both nonsampling and sampling errors.
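The inflation of weighted sample results to independent population estimates amounts to a ratio adjustment within each demographic cell; a minimal sketch, with invented weights and population controls:

```python
def ratio_adjust(weights, cells, controls):
    """Scale each respondent's weight so that the weighted totals within
    each demographic cell (e.g., an age-sex-race group) match the
    independent population controls for that cell."""
    totals = {}
    for w, c in zip(weights, cells):
        totals[c] = totals.get(c, 0.0) + w
    return [w * controls[c] / totals[c] for w, c in zip(weights, cells)]

# Three respondents in two hypothetical cells.
weights = [100.0, 100.0, 150.0]
cells = ["A", "A", "B"]
controls = {"A": 300.0, "B": 120.0}  # independent population estimates
adjusted = ratio_adjust(weights, cells, controls)
```

After adjustment, the weights in cell "A" sum to 300 and those in cell "B" sum to 120, matching the independent controls.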

Standard errors were estimated using the generalized variance function prior to 2005 for March CPS data and prior to 2010 for October CPS data. The generalized variance function is a simple model that expresses the variance as a function of the expected value of a survey estimate.  Standard errors were estimated using replicate weight methodology beginning in 2005 for March CPS data and beginning in 2010 for October CPS data. Those interested in using CPS household-level supplement replicate weights to calculate variances may refer to Estimating Current Population Survey (CPS) Household-Level Supplement Variances Using Replicate Weights at https://www.nber.org/cps/HH-level_Use_of_the_Public_Use_Replicate_Weight_File.doc.
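As a concrete illustration of a generalized variance function: one common form expresses the variance of an estimated total x through two fitted parameters, var(x) = a·x² + b·x. The parameter values in the sketch below are hypothetical, not published CPS values.

```python
import math

def gvf_standard_error(x, a, b):
    """Standard error implied by a generalized variance function of the
    form var(x) = a*x**2 + b*x, where x is an estimated population total.
    Parameters a and b would be fitted and published by the survey program."""
    return math.sqrt(a * x ** 2 + b * x)

# Hypothetical parameters; in practice a is typically small and negative,
# and b is positive.
se = gvf_standard_error(1_000_000, a=-0.000005, b=2000.0)
```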

Further information on the CPS may be obtained from

Associate Directorate for Demographic Programs—Survey Operations
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
301-763-3806
dsd.cps@census.gov
https://www.census.gov/programs-surveys/cps.html

Computer and Internet Use

The Current Population Survey (CPS) has been conducting supplemental data collections regarding computer use since 1984. In 1997, these supplemental data collections were expanded to include data on internet access. More recently, data regarding computer and internet use were collected in October 2010, July 2011, October 2012, July 2013, July 2015, November 2017, November 2019, and November 2021.

In the July 2011, 2013, and 2015 supplements, as well as in the November 2017, 2019, and 2021 supplements, the sole focus was on computer and internet use. In the October 2010 and 2012 supplements questions on school enrollment were the principal focus, and questions on computer and internet use were less prominent. Measurable differences in estimates taken from these supplements across years could reflect actual changes in the population; however, differences could also reflect any unknown bias from major changes in the questionnaire over time due to rapidly changing technology. In addition, data may vary slightly due to seasonal variations in data collection between the July, October, and November supplements. Therefore, caution should be used when making year-to-year comparisons of CPS computer and internet use estimates.

The most recent computer and internet use supplement, conducted in November 2021, collected household information from all eligible CPS households, as well as information from individual household members age 3 and over. Information was collected about the household’s use of digital devices and the internet, and about household members’ use of the internet from any location in the past year. Information was also collected about internet activities of a single randomly selected respondent.

For the November 2021 basic CPS, the household-level unweighted nonresponse rate was 26.1 percent. The person-level unweighted nonresponse rate for the computer and internet use supplement was an additional 26.3 percent. Since one rate is a person-level rate and the other a household-level rate, the rates cannot be combined to derive an overall rate.

Further information on the CPS Computer and Internet Use Supplement may be obtained from

Associate Directorate for Demographic Programs—Survey Operations
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
301-763-3806
dsd.cps@census.gov
https://census.gov/programs-surveys/cps.html

Dropouts

Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population age 3 years and over as part of the monthly basic survey on labor force participation. In addition to gathering the information on school enrollment, with the limitations on accuracy as noted below under “School Enrollment,” the survey data permit calculations of dropout rates. Both status and event dropout rates are tabulated from the October CPS. Event rates describe the proportion of students who leave school each year without completing a high school program. Status rates provide cumulative data on dropouts among all young adults within a specified age range. Status rates are higher than event rates because they include all dropouts ages 16 through 24, regardless of when they last attended school.
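The two rates described above are simple proportions with different denominators and reference periods; a minimal sketch, with invented counts:

```python
def event_dropout_rate(left_without_completing, enrolled_at_start):
    """Percentage of students who leave school in a given year without
    completing a high school program (an annual flow measure)."""
    return 100.0 * left_without_completing / enrolled_at_start

def status_dropout_rate(dropouts_16_to_24, population_16_to_24):
    """Percentage of all 16- to 24-year-olds who are not enrolled and have
    not completed high school, regardless of when they last attended
    (a cumulative stock measure)."""
    return 100.0 * dropouts_16_to_24 / population_16_to_24
```

Because the status rate accumulates dropouts across many cohort-years while the event rate counts only one year's departures, the status rate is ordinarily the larger of the two.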

In addition to other survey limitations, dropout rates may be affected by survey coverage and exclusion of the institutionalized population. The incarcerated population has increased and has a high dropout rate. Dropout rates for the total population might be higher than those for the noninstitutionalized population if the prison and jail populations were included in the dropout rate calculations. On the other hand, if military personnel, who tend to be high school graduates, were included, it might offset some or all of the impact from the theoretical inclusion of the jail and prison populations. Tables on status dropout rates based on the American Community Survey do include the institutionalized population and are also included in the Digest of Education Statistics.

Another area of concern with tabulations involving young people in household surveys is the relatively low coverage ratio compared to older age groups. CPS undercoverage results from missed housing units and missed people within sample households. Overall CPS undercoverage for October 2019 is estimated to be about 11 percent.

CPS coverage varies with age, sex, and race. Generally, coverage is larger for females than for males and larger for non-Blacks than for Blacks. This differential coverage is a general problem for most household-based surveys. Further information on CPS methodology may be found in the technical documentation at https://www.census.gov/programs-surveys/cps.html.

Further information on the calculation of dropouts and dropout rates may be obtained from the Trends in High School Dropout and Completion Rates in the United States report at https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2020117 or by contacting

Cristobal de Brey
Annual Reports and Information Staff
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
cristobal.debrey@ed.gov

Educational Attainment

Reports documenting educational attainment are produced by the Census Bureau using the March Current Population Survey (CPS) supplement (Annual Social and Economic supplement [ASEC]). Currently, the ASEC supplement consists of approximately 50,000 interviewed households. Both recent and earlier editions of Educational Attainment in the United States may be downloaded at https://www.census.gov/topics/education/educational-attainment/data/tables.All.html.

In 2014, the CPS ASEC included redesigned questions on income (specifically retirement income) and health insurance coverage, which were followed, in the 2015 CPS ASEC, by changes to allow spouses and unmarried partners to specifically identify as opposite- or same-sex. Beginning with the 2019 CPS ASEC, the Census Bureau used a modified processing system that improved procedures for imputing income and health insurance variables. The Census Bureau analyzed the impact of the use of the new processing system by comparing its use with the use of the legacy processing system on income, poverty, and health insurance coverage data from 2017 ASEC files. The Census Bureau found that differences in the overall poverty rate and household income resulting from the use of the new processing system compared to the legacy processing system were not statistically significant, although there were differences for some demographic groups. Use of the new processing system caused the supplemental poverty rate (https://www.census.gov/topics/income-poverty/supplemental-poverty-measure.html) to decrease overall and for most demographic groups. The Census Bureau attributed the decrease to improvements in the new processing system’s imputation of medical out-of-pocket expenses, housing subsidies, and school lunch receipts. More information on these changes can be found at https://www.census.gov/newsroom/blogs/research-matters/2019/09/cps-asec.html.

As noted in “Current Population Survey,” above, caution should be exercised when comparing ASEC estimates from data collected in 2020 and 2021 to those from previous years due to the effects that the coronavirus (COVID-19) had on interviewing and response rates.

In addition to the general constraints of the CPS, some data indicate that respondents tend to overestimate the educational level of members of their household. Some inaccuracy is due to respondents not knowing the exact educational attainment of each household member, and some to a hesitancy to acknowledge anything less than a high school education.

Further information on educational attainment data from CPS may be obtained from

Associate Directorate for Demographic Programs—Survey Operations
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
(301) 763-3806
dsd.cps@census.gov
https://www.census.gov/programs-surveys/cps.html

School Enrollment

Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population age 3 years and over. Currently, the October supplement consists of approximately 50,000 interviewed households, the same households interviewed in the basic Current Population Survey. The primary sources of nonsampling variability in the responses to the supplement are those inherent in the main survey instrument. The question of current enrollment may not be answered accurately for various reasons. Some respondents may not know current grade information for every student in the household, a problem especially prevalent for households with members in college or in nursery school. Confusion over college credits or hours taken by a student may make it difficult to determine the year in which the student is enrolled. Problems may occur with the definition of nursery school (a group or class organized to provide educational experiences for children) where respondents’ interpretations of “educational experiences” vary.

For the October 2021 basic CPS, the household-level nonresponse rate was 24.1 percent. The person-level nonresponse rate for the school enrollment supplement was an additional 10.1 percent. Since the basic CPS nonresponse rate is a household-level rate and the school enrollment supplement nonresponse rate is a person-level rate, these rates cannot be combined to derive an overall nonresponse rate. Nonresponding households may have more or fewer persons than interviewed ones, so combining these rates may lead to an under- or overestimate of the true overall nonresponse rate for persons for the school enrollment supplement.
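The caution about combining a household-level rate with a person-level rate can be illustrated numerically; the household sizes and counts below are invented:

```python
# Suppose 100 sampled households, of which 24 do not respond (a 24 percent
# household-level nonresponse rate), and 10 percent of eligible persons in
# responding households skip the supplement. The true overall person-level
# nonresponse rate depends on how many persons live in responding versus
# nonresponding households, which is unknown for nonresponders.

def overall_person_nonresponse(persons_in_nonresponding,
                               persons_in_responding,
                               supplement_nonresponse):
    """Overall person-level nonresponse rate, in percent."""
    total = persons_in_nonresponding + persons_in_responding
    nonrespondents = (persons_in_nonresponding +
                      supplement_nonresponse * persons_in_responding)
    return 100.0 * nonrespondents / total

# Larger households among nonresponders push the overall rate up...
high = overall_person_nonresponse(72, 152, 0.10)  # 3 vs. 2 persons/household
# ...while smaller households among nonresponders push it down.
low = overall_person_nonresponse(24, 228, 0.10)   # 1 vs. 3 persons/household
```

The two scenarios share the same household-level and person-level rates yet yield very different overall person-level rates, which is why the two published rates cannot simply be added.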

Although the principal focus of the October supplement is school enrollment, in some years the supplement has included additional questions on other topics. In 2010 and 2012, for example, the October supplement included additional questions on computer and internet use.

Further information on CPS methodology may be obtained from https://www.census.gov/programs-surveys/cps.html.

Further information on the CPS School Enrollment Supplement may be obtained from

Associate Directorate for Demographic Programs—Survey Operations
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
(301) 763-3806
dsd.cps@census.gov
https://www.census.gov/programs-surveys/cps.html

Decennial Census, Population Estimates, and Population Projections

The decennial census is a universe survey mandated by the U.S. Constitution. It is a questionnaire sent to every household in the country every 10 years, and it is composed of seven questions about the household and its members (name, sex, age, relationship, Hispanic origin, race, and whether the housing unit is owned or rented). The Census Bureau also produces annual estimates of the resident population by demographic characteristics (age, sex, race, and Hispanic origin) for the nation, states, and counties, as well as national and state projections for the resident population. The reference date for population estimates is July 1 of the given year. With each new issue of July 1 estimates, the Census Bureau revises estimates for each year back to the last census. Previously published estimates are superseded and archived.

Census respondents self-report race and ethnicity. The race questions on the 1990 and 2000 censuses differed in some significant ways. In 1990, the respondent was instructed to select the one race “that the respondent considers himself/herself to be,” whereas in 2000, the respondent could select one or more races that the person considered himself or herself to be. American Indian, Eskimo, and Aleut were three separate race categories in 1990; in 2000, the American Indian and Alaska Native categories were combined, with an option to write in a tribal affiliation. This write-in option was provided only for the American Indian category in 1990. There was a combined Asian and Pacific Islander race category in 1990, but the groups were separated into two categories in 2000.

The census question on ethnicity asks whether the respondent is of Hispanic origin, regardless of the race option(s) selected; thus, persons of Hispanic origin may be of any race. In the 2000 census, respondents were first asked, “Is this person Spanish/Hispanic/Latino?” and then given the following options: No, not Spanish/Hispanic/Latino; Yes, Puerto Rican; Yes, Mexican, Mexican American, Chicano; Yes, Cuban; and Yes, other Spanish/Hispanic/Latino (with space to print the specific group). In the 2010 census, respondents were asked “Is this person of Hispanic, Latino, or Spanish origin?” The options given were No, not of Hispanic, Latino, or Spanish origin; Yes, Mexican, Mexican Am., Chicano; Yes, Puerto Rican; Yes, Cuban; and Yes, another Hispanic, Latino, or Spanish origin—along with instructions to print “Argentinean, Colombian, Dominican, Nicaraguan, Salvadoran, Spaniard, and so on” in a specific box.

The 2000 and 2010 censuses each asked the respondent “What is this person’s race?” and allowed the respondent to select one or more options. The options provided were largely the same in both the 2000 and 2010 censuses: White; Black, African American, or Negro; American Indian or Alaska Native (with space to print the name of enrolled or principal tribe); Asian Indian; Japanese; Native Hawaiian; Chinese; Korean; Guamanian or Chamorro; Filipino; Vietnamese; Samoan; Other Asian; Other Pacific Islander; and Some other race. The last three options included space to print the specific race. Two significant differences between the 2000 and 2010 census questions on race were that no race examples were provided for the “Other Asian” and “Other Pacific Islander” responses in 2000, whereas the race examples of “Hmong, Laotian, Thai, Pakistani, Cambodian, and so on” and “Fijian, Tongan, and so on,” were provided for the “Other Asian” and “Other Pacific Islander” responses, respectively, in 2010.

The census population estimates program modified the enumerated population from the 2010 census to produce the population estimates base for 2010 and onward. As part of the modification, the Census Bureau recoded the “Some other race” responses from the 2010 census to one or more of the five OMB race categories used in the estimates program (for more information, see https://www.census.gov/programs-surveys/popest/technical-documentation/methodology.html).

Further information on the decennial census may be obtained from https://www.census.gov.

Household Pulse Survey

The Census Bureau Household Pulse Survey (HPS) is a weekly or biweekly survey that provides statistical information about the impact of the COVID-19 pandemic on the nation. The HPS provides key statistics on employment, income, health, education, and housing. Recognizing the extraordinary information needs of policymakers during the COVID-19 pandemic, the Census Bureau developed this new survey in early 2020 in partnership with seven other agencies from the Federal Statistical System: the Bureau of Labor Statistics (BLS), National Center for Health Statistics (NCHS), Department of Agriculture Economic Research Service (ERS), National Center for Education Statistics (NCES), Department of Housing and Urban Development (HUD), Social Security Administration (SSA), and Bureau of Transportation Statistics. Currently, 16 federal agencies participate in the HPS partnership.

The new survey was designed to gather information on the impact of the COVID-19 pandemic across a broad range of indicators (see table A, below). Development of the experimental HPS began on March 23, 2020, and data collection began on April 23, 2020. The survey originally provided weekly national and state estimates for 12 iterations, which were released to the public in tabular formats one week after the end of data collection. Following this Phase 1 series, the survey continued in a similar structure through Phases 2 and 3 on a biweekly timeline.

The HPS gathers information from adults about employment status, spending patterns, food security, housing, physical and mental health, access to health care, program receipt, and educational disruption. This survey was designed to represent adults 18 years old and over. The HPS is designed to produce estimates at three different geographical levels. The lowest level is for the 15 largest Metropolitan Statistical Areas (MSAs). The second level of geography is for the states and the District of Columbia. The final level of aggregation is for the United States as a whole.

The HPS uses the Census Bureau’s Master Address File (MAF) as the source of the sampled housing units (HUs). The sample design is a systematic sample of all eligible HUs, with adjustments applied to the sample intervals to select a large enough sample to create state level estimates and estimates for the largest 15 MSAs. Sixty-six independent sample areas were defined. For each collection period, independent samples were selected. Approximately 145 million housing units are represented in the MAF and were considered valid for sampling.
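The systematic selection of housing units described above can be illustrated with a toy frame; the sketch below assumes a single sampling stratum (the actual design defines 66 independent sample areas with adjusted intervals):

```python
import random

def systematic_sample(frame_size, sample_size, seed=0):
    """Draw a systematic sample from an address frame: compute the sampling
    interval, pick a random start within the first interval, then step
    through the frame at that fixed interval."""
    interval = frame_size / sample_size
    start = random.Random(seed).uniform(0, interval)
    return [int(start + k * interval) for k in range(sample_size)]

# Toy frame of 1,000 addresses, sample of 10 (interval of 100).
sample = systematic_sample(1_000, 10)
```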

Table A. Census Bureau Household Pulse Survey dates of administration, number of respondents, and response rates: April 23 to May 5, 2020 through September 29 to October 11, 2021

Week designation1   Survey dates   Number of respondents   Response rate (percent)
Phase 1
1 April 23 to May 5, 2020 74,413 3.8
2 May 7 to May 12, 2020 41,996 1.3
3 May 14 to May 19, 2020 132,961 2.3
4 May 21 to May 26, 2020 101,215 3.1
5 May 28 to June 2, 2020 105,066 3.5
6 June 4 to June 9, 2020 83,302 3.1
7 June 11 to June 16, 2020 73,472 2.3
8 June 18 to June 23, 2020 108,062 2.9
9 June 25 to June 30, 2020 98,663 3.3
10 July 2 to July 7, 2020 90,767 3.2
11 July 9 to July 14, 2020 91,605 3.1
12 July 16 to July 21, 2020 86,792 2.9
Phase 2
13 August 19 to August 31, 2020 109,051 10.3
14 September 2 to September 14, 2020 110,019 10.3
15 September 16 to September 28, 2020 99,302 9.2
16 September 30 to October 12, 2020 95,604 8.8
17 October 14 to October 26, 2020 88,716 8.1
Phase 3
18 October 28 to November 9, 2020 58,729 5.3
19 November 11 to November 23, 2020 71,939 6.6
20 November 25 to December 7, 2020 72,484 6.7
21 December 9 to December 21, 2020 69,944 6.5
22 January 6 to January 18, 2021 68,348 6.4
23 January 20 to February 1, 2021 80,567 7.5
24 February 3 to February 15, 2021 77,122 7.3
25 February 17 to March 1, 2021 77,788 7.3
26 March 3 to March 15, 2021 78,306 7.4
27 March 17 to March 29, 2021 77,104 7.2
Phase 3.1
28 April 14 to April 26, 2021 68,913 6.6
29 April 28 to May 10, 2021 78,467 7.4
30 May 12 to May 24, 2021 72,897 6.8
31 May 26 to June 7, 2021 70,854 6.7
32 June 9 to June 21, 2021 68,067 6.4
33 June 23 to July 5, 2021 66,262 6.3
Phase 3.2
34 July 21 to August 2, 2021 64,562 6.1
35 August 4 to August 16, 2021 68,799 6.5
36 August 18 to August 30, 2021 69,114 6.5
37 September 1 to September 13, 2021 63,536 6.0
38 September 15 to September 27, 2021 59,833 5.6
39 September 29 to October 11, 2021 57,064 5.4

1 The week designation is maintained in the Household Pulse Survey literature, although all Phase 2 and Phase 3 surveys are 2 weeks in duration.

SOURCE: U.S. Department of Commerce, Census Bureau, Household Pulse Survey, Source of the Data and Accuracy of the Estimates for the 2020 Household Pulse Survey, retrieved February 5, 2021, from https://www2.census.gov/programs-surveys/demo/technical-documentation/hhp/Source-and-Accuracy-Statement-July-16-July-21.pdf; Source of the Data and Accuracy of the Estimates for the 2020 Household Pulse Survey—Phase 2, retrieved February 5, 2021, from https://www2.census.gov/programs-surveys/demo/technical-documentation/hhp/Phase2_Source_and_Accuracy_Week_17.pdf; Source of the Data and Accuracy of the Estimates for Household Pulse Survey—Phase 3, retrieved February 5, 2021, from https://www2.census.gov/programs-surveys/demo/technical-documentation/hhp/Phase3_Source_and_Accuracy_Week_22.pdf; Source of the Data and Accuracy of the Estimates for Household Pulse Survey—Phase 3, retrieved April 13, 2022, from https://www2.census.gov/programs-surveys/demo/technical-documentation/hhp/Phase3_Source_and_Accuracy_Week_27.pdf; Source of the Data and Accuracy of the Estimates for Household Pulse Survey—Phase 3.1, retrieved April 13, 2022, from https://www2.census.gov/programs-surveys/demo/technical-documentation/hhp/Phase3-1_Source_and_Accuracy_Week_33.pdf; and Source of the Data and Accuracy of the Estimates for the Household Pulse Survey—Phase 3.2, retrieved April 13, 2022, from https://www2.census.gov/programs-surveys/demo/technical-documentation/hhp/Phase3-2_Source_and_Accuracy_Week39.pdf. (This table was prepared April 2022.)

It is important to note that the speed of the survey development and the pace of the data collection efforts led to policies and procedures for the experimental HPS that were not always consistent with traditional federal survey operations. For example, the timeline for the weekly/biweekly surveys meant that opportunities to follow up with nonrespondents were very limited. This led to response rates of 1 to 4 percent through the first 12 weeks of the survey (Phase 1) and between 5 and 10 percent for Phases 2, 3, 3.1, and 3.2, which are much lower than the rates of 70 percent or higher typically set as targets in most federal surveys. Low response rates for this survey were anticipated, and the survey population was large enough that the resulting number of respondents was sufficient to support state-level estimates. While the responses have been statistically adjusted so that respondents represent the nation and states in terms of geographic distribution, sex, race/ethnicity, age, and educational attainment, the impact of survey bias has not been fully explored. The technical limitations and cautions on the use of the data are explored more extensively in the Census Bureau documentation for the survey.

Further information about the HPS is available at https://www.census.gov/programs-surveys/household-pulse-survey/technical-documentation.html.

CLOSE

National Vital Statistics System

The National Vital Statistics System (NVSS) is the method by which data on vital events—births, deaths, marriages, divorces, and fetal deaths—are provided to the National Center for Health Statistics (NCHS), part of the Centers for Disease Control and Prevention (CDC). The data are provided to NCHS through the Vital Statistics Cooperative Program (VSCP). Detailed mortality data from NVSS are accessed through CDC’s Wide-ranging Online Data for Epidemiologic Research (WONDER), providing the counts of homicides among youth ages 5–18 and suicides among youth ages 10–18 by school year (i.e., from July 1 through June 30). These counts are used to estimate the proportion of all youth homicides and suicides that are school associated in a given school year.
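
The school-year window used for these counts (July 1 through June 30) can be expressed as a small helper. This is an illustrative sketch, not code from NVSS or WONDER, and the function name is my own.

```python
from datetime import date

def school_year(d):
    """Return the school year containing date `d`, where a school year
    runs from July 1 through June 30 (e.g., "2020-21")."""
    start = d.year if d.month >= 7 else d.year - 1
    return f"{start}-{(start + 1) % 100:02d}"
```

For example, both `school_year(date(2020, 9, 15))` and `school_year(date(2021, 6, 30))` fall in school year "2020-21".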

For more information on the NCHS and the NVSS, see https://www.cdc.gov/nchs/nvss/index.htm.

School-Associated Violent Death Surveillance System

The School-Associated Violent Death Surveillance System (SAVD-SS) was developed by the Centers for Disease Control and Prevention (CDC) in conjunction with the U.S. Department of Education and the U.S. Department of Justice. The system contains descriptive data on all school-associated violent deaths in the United States, including homicides, suicides, and legal intervention deaths where the fatal injury occurred on the campus of a functioning elementary or secondary school; while the victim was on the way to or from regular sessions at such a school; or while attending or on the way to or from an official school-sponsored event. Victims of such incidents include students as well as nonstudents (e.g., students’ parents, community residents, and school staff). SAVD-SS includes data on the school, event, victim(s), and offender(s). These data are used to describe the epidemiology of school-associated violent deaths, identify common features of these deaths, estimate the rate of school-associated violent deaths in the United States, and identify potential risk factors for these deaths. The CDC has collected SAVD-SS data from July 1, 1992, to the present.

SAVD-SS uses a three-step process to identify and collect data on school-associated violent deaths. First, cases are identified through a systematic search of the LexisNexis newspaper and media database. Second, law enforcement officials from the office that investigated the death(s) are contacted to confirm the details of the case and to determine if the event meets the case definition. Third, once a case is confirmed, a copy of the full law enforcement report is requested for each case. Finally, in previous data years whenever possible, interviews were conducted with law enforcement and/or school officials familiar with cases to obtain contextual information about the incidents. However, interviews are no longer conducted as a part of SAVD-SS protocol. Information regarding the fatal incident is abstracted from law enforcement reports and includes the location of injury, context of injury (while classes were being held, during break, etc.), motives for injury, method of injury, and relationship, school, and community circumstances that may have been related to the incident (e.g., relationship problems with family members, school disciplinary issues, gang-related activity in the community). Information obtained on victim(s) and offender(s) includes demographics, contextual information about the event (date/time, alcohol or drug use, number of persons involved), types and origins of weapons, criminal history, psychological risk factors, school-related problems, extracurricular activities, and family history, including structure and stressors. For specific SAVD studies, school-level data for schools where incidents occur are obtained through the Common Core of Data survey of the National Center for Education Statistics and include school demographics, locale (e.g., urban, suburban, rural), grade levels offered by the school, Title I eligibility, and percentage of students eligible for free/reduced-price lunch, among other variables.

All data years are flagged as preliminary. For some recent cases, the law enforcement reports have not yet been received. The details learned during data abstraction from law enforcement reports can occasionally change the classification of a case. Also, new cases may be identified because of the expansion of the scope of the media files used for case identification or as a result of newly published media articles describing the incident. Finally, other cases may occasionally be identified while the law enforcement and school interviews are being conducted to verify known cases.

Further information on SAVD-SS may be obtained from

Ruth Leemis, M.P.H.
Principal Investigator and Behavioral Scientist
School-Associated Violent Death Surveillance System
Division of Violence Prevention
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
(770) 488-0681
xbf2@cdc.gov

Youth Risk Behavior Surveillance System

The Youth Risk Behavior Surveillance System (YRBSS) is an epidemiological surveillance system developed by the Centers for Disease Control and Prevention (CDC) to monitor the prevalence of youth behaviors that most influence health. The YRBSS focuses on priority health-risk behaviors established during youth that result in the most significant mortality, morbidity, disability, and social problems during both youth and adulthood. The YRBSS includes a national school-based Youth Risk Behavior Survey (YRBS), as well as surveys conducted in states, territories, tribes, and local school districts.

The national YRBS uses a three-stage cluster sampling design to produce a nationally representative sample of students in grades 9–12 in the United States. In each survey, the target population consisted of all public and private school students in grades 9–12 in the 50 states and the District of Columbia. The first sampling stage consisted of selecting primary sampling units (PSUs) from strata formed on the basis of urbanization and the relative percentage of Black and Hispanic students in each PSU. These PSUs are either counties; subareas of large counties; or groups of smaller, adjacent counties. At the second stage, schools were selected with probability proportional to school enrollment size.

The final stage of sampling consisted of randomly selecting, in each chosen school and in each of grades 9–12, one or two classrooms from either a required subject, such as English or social studies, or a required period, such as homeroom or second period. All students in selected classes were eligible to participate. In surveys conducted before 2013, three strategies were used to oversample Black and Hispanic students: (1) larger sampling rates were used to select PSUs in high-Black and high-Hispanic strata; (2) a modified measure of size was used that increased the probability of selecting schools with a disproportionately high minority enrollment; and (3) two classes per grade, rather than one, were selected in schools with a high percentage of Black or Hispanic enrollment. In 2013, 2015, 2017, and 2019, selecting two classes per grade was sufficient to achieve adequate precision with minimum variance. Approximately 16,300 students participated in the 1993 survey; 10,900 in 1995; 16,300 in 1997; 15,300 in 1999; 13,600 in 2001; 15,200 in 2003; 13,900 in 2005; 14,000 in 2007; 16,400 in 2009; 15,400 in 2011; 13,600 in 2013; 15,600 in 2015; 14,800 in 2017; and 13,700 in 2019.

The overall response rate was 70 percent for the 1993 survey; 60 percent for the 1995 survey; 69 percent for the 1997 survey; 66 percent for 1999; 63 percent for 2001; 67 percent for 2003 and 2005; 68 percent for 2007; 71 percent for 2009 and 2011; 68 percent for 2013; and 60 percent for 2015, 2017, and 2019. NCES standards call for response rates of 85 percent or better for cross-sectional surveys, and bias analyses are generally required by NCES when that percentage is not achieved. For YRBS data, a full nonresponse bias analysis has not been done because the data necessary to do the analysis are not available (e.g., differences between participating and non-participating students cannot be measured, because no survey data are available from non-participating students). A school nonresponse bias analysis, however, was done for the 2019 survey. This analysis found some evidence of potential bias by school urbanicity and school affluence, but concluded that the bias was unlikely to have impacted the national estimates in a meaningful way and would be further reduced by weight adjustment. The weights were developed to adjust for nonresponse and the oversampling of Black and Hispanic students in the sample. The final weights were constructed so that only weighted proportions of students (not weighted counts of students) in each grade matched national population projections.

State-level data were downloaded from the Youth Online: Comprehensive Results web page (https://nccd.cdc.gov/Youthonline/App/Default.aspx). Each state and district school-based YRBS employs a two-stage, cluster sample design to produce representative samples of students in grades 9–12 in its jurisdiction. In 2019, all state and district samples included only public schools, and each district sample included only schools in the funded school district (e.g., San Diego Unified School District) rather than in the entire city (e.g., the greater San Diego area).

In the first sampling stage in all except a few states and districts, schools are selected with probability proportional to school enrollment size. In the second sampling stage, intact classes of a required subject or intact classes during a required period (e.g., second period) are selected randomly. All students in sampled classes are eligible to participate. Certain states and districts modify these procedures to meet their individual needs. For example, in a given state or district, all schools, rather than a sample of schools, might be selected to participate. State and local surveys that have a scientifically selected sample, appropriate documentation, and an overall response rate greater than or equal to 60 percent (or nonresponse bias analysis indicating no significant bias) are weighted. The overall response rate reflects the school response rate multiplied by the student response rate. These three criteria are used to ensure that the data from those surveys can be considered representative of students in grades 9–12 in that jurisdiction. A weight is applied to each record to adjust for student nonresponse and the distribution of students by grade, sex, and race/ethnicity in each jurisdiction. Therefore, weighted estimates are representative of all students in grades 9–12 attending schools in each jurisdiction. Surveys that do not have an overall response rate of greater than or equal to 60 percent and that do not have appropriate documentation are not weighted and are not included in this report.
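
The response-rate rule described above can be sketched as a small check. This is an illustrative reading, not NCES or CDC code, and it models only the response-rate criterion; the other two criteria (a scientifically selected sample and appropriate documentation) are not represented.

```python
def overall_response_rate(school_rate, student_rate):
    """The overall response rate is the school response rate multiplied
    by the student response rate."""
    return school_rate * student_rate

def meets_weighting_threshold(school_rate, student_rate, no_significant_bias=False):
    """Check the response-rate criterion for weighting: an overall rate
    of at least 60 percent, or a nonresponse bias analysis indicating
    no significant bias."""
    return overall_response_rate(school_rate, student_rate) >= 0.60 or no_significant_bias
```

For example, a school rate of 80 percent and a student rate of 75 percent yield an overall rate of 60 percent, which meets the threshold; 70 and 80 percent yield 56 percent, which does not.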

In the 2019 YRBS, a total of 44 states, 28 local school districts, 3 territories, and 2 tribal governments had representative data. (For information on the location of the states, districts, territories, and tribal governments, please see https://www.cdc.gov/healthyyouth/data/yrbs/participation.htm.) In sites with representative data, the student sample sizes for the state and district YRBS ranged from 970 to 41,091. School response rates ranged from 65 to 100 percent, student response rates ranged from 56 to 92 percent, and overall response rates ranged from 40 to 92 percent.

Readers should note that reports of these data published by the CDC and in this report do not include percentages where the denominator includes fewer than 100 unweighted cases.

In 1999, in accordance with changes to the Office of Management and Budget’s standards for the classification of federal data on race and ethnicity, the YRBS item on race/ethnicity was modified. The version of the race and ethnicity question used in 1993, 1995, and 1997 was

  • How do you describe yourself?
    • a. White—not Hispanic
    • b. Black—not Hispanic
    • c. Hispanic or Latino
    • d. Asian or Pacific Islander
    • e. American Indian or Alaskan Native
    • f. Other

The version used in 1999, 2001, and 2003, as well as in the 2005 state and local district surveys, was

  • How do you describe yourself? (Select one or more responses.)
    • a. American Indian or Alaska Native
    • b. Asian
    • c. Black or African American
    • d. Hispanic or Latino
    • e. Native Hawaiian or Other Pacific Islander
    • f. White

In the 2005 national survey and in all 2007, 2009, 2011, 2013, 2015, 2017, and 2019 surveys, race/ethnicity was computed from two questions: (1) “Are you Hispanic or Latino?” (response options were “Yes” and “No”), and (2) “What is your race?” (response options were “American Indian or Alaska Native,” “Asian,” “Black or African American,” “Native Hawaiian or Other Pacific Islander,” or “White”). For the second question, students could select more than one response option. For this report, students were classified as “Hispanic” if they answered “Yes” to the first question, regardless of how they answered the second question. Students who answered “No” to the first question and selected more than one race/ethnicity in the second category were classified as “More than one race.” Students who answered “No” to the first question and selected only one race/ethnicity were classified as that race/ethnicity. Race/ethnicity was classified as missing for students who did not answer the first question and for students who answered “No” to the first question but did not answer the second question.
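
The classification rules in this paragraph map directly onto a small function. The sketch below is illustrative (the function name and input shapes are assumptions, not CDC code), but the logic follows the rules as stated.

```python
def classify_race_ethnicity(hispanic, races):
    """Classify race/ethnicity from the two-question format used in the
    2005 national survey and in all surveys from 2007 onward.

    `hispanic` is the response to "Are you Hispanic or Latino?" ("Yes",
    "No", or None if unanswered); `races` is the list of options selected
    for "What is your race?" (None or empty if unanswered). Returns None
    when race/ethnicity is classified as missing."""
    if hispanic is None:
        return None                   # first question unanswered: missing
    if hispanic == "Yes":
        return "Hispanic"             # regardless of the race question
    if not races:
        return None                   # "No" but race question unanswered
    if len(races) > 1:
        return "More than one race"
    return races[0]                   # "No" and exactly one race selected
```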

CDC has conducted two studies to understand the effect of changing the race/ethnicity item on the YRBS. Brener, Kann, and McManus (Public Opinion Quarterly, 67:227–236, 2003) found that allowing students to select more than one response to a single race/ethnicity question on the YRBS had only a minimal effect on reported race/ethnicity among high school students. Eaton, Brener, Kann, and Pittman (Journal of Adolescent Health, 41:488–494, 2007) found that self-reported race/ethnicity was similar regardless of whether a single-question or a two-question format was used.

Further information on the YRBSS may be obtained from

Nancy Brener
Division of Adolescent and School Health
National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention
Centers for Disease Control and Prevention
1600 Clifton Road
Atlanta, GA 30329
nad1@cdc.gov
http://www.cdc.gov/yrbs

CLOSE

Naval Postgraduate School, Center for Homeland Defense and Security

School Shooting Safety Compendium

The School Shooting Database Project was developed from 2018 to 2022 as part of the Advanced Thinking in Homeland Security (HSx) pilot program at the Naval Postgraduate School’s Center for Homeland Defense and Security (CHDS). In July 2022, ongoing collection of the incident data transitioned away from CHDS to an independent project. CHDS continues to make school shooting data and resources available through the School Shooting and Safety Compendium (SSSC).

The SSSC provides a widely inclusive database that documents every instance in which a gun is brandished, a gun is fired, or a bullet hits school property, regardless of the number of victims (including zero), the time or day of the week of the incident, or the reason (e.g., planned attack, accident, domestic violence, gang-related activity). It is a filtered, deconflicted, and cross-referenced database of more than 2,000 K–12 school shootings from 1970 through June 2022.

Available for download as a CSV file, the database compiles information from more than 25 different sources, including peer-reviewed studies, government reports, archived newspapers, mainstream media, nonprofits, private websites, blogs, and crowd-sourced lists, all of which have been analyzed, filtered, deconflicted, and cross-referenced. All of the information is based on open-source information and third-party reporting.
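
Because the database is distributed as a CSV file, it can be explored with standard tooling. The sketch below tallies incidents per year with Python's `csv` module; the `Year` column name is hypothetical, so check the actual file's header row before adapting it.

```python
import csv
from collections import Counter

def incidents_per_year(path):
    """Tally rows per year in a CSV export of the database.

    Assumes a hypothetical `Year` column; the real file's header
    may differ."""
    with open(path, newline="", encoding="utf-8") as f:
        return Counter(row["Year"] for row in csv.DictReader(f))
```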

The report K–12 School Shooting Database Methods (https://www.chds.us/sssc/methods/) provides information on such topics as how school shootings are defined in the database, how data reliability is assessed, and how data are validated.

Further information about the School Shooting Safety Compendium may be obtained from

https://www.chds.us/sssc/

CLOSE

Other Organization Sources

The International Association for the Evaluation of Educational Achievement (IEA) is composed of governmental research centers and national research institutions around the world whose aim is to investigate education problems common among countries. Since its inception in 1958, the IEA has conducted more than 30 research studies of cross-national achievement. The regular cycle of studies encompasses learning in basic school subjects. Examples are the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS). IEA projects also include studies of particular interest to IEA members, such as the TIMSS 1999 Video Study of Mathematics and Science Teaching, the Civic Education Study, and studies on information technology in education.

The international bodies that coordinate international assessments vary in the labels they apply to participating education systems, most of which are countries. IEA differentiates between IEA members, which IEA refers to as “countries” in all cases, and “benchmarking participants.” IEA members include countries such as the United States and Ireland, as well as subnational entities such as England and Scotland (which are both part of the United Kingdom), the Flemish community of Belgium, and Hong Kong (a Special Administrative Region of China). IEA benchmarking participants are all subnational entities and include Canadian provinces, U.S. states, and Dubai in the United Arab Emirates (among others). Benchmarking participants, like the participating countries, are given the opportunity to assess the comparative international standing of their students’ achievement and to view their curriculum and instruction in an international context.

Some IEA studies, such as TIMSS and PIRLS, include an assessment portion, as well as contextual questionnaires for collecting information about students’ home and school experiences. The TIMSS and PIRLS scales, including the scale averages and standard deviations, are designed to remain constant from assessment to assessment so that education systems (including countries and subnational education systems) can compare their scores over time as well as compare their scores directly with the scores of other education systems. Although each scale was created to have a mean of 500 and a standard deviation of 100, the subject matter and the level of difficulty of items necessarily differ by grade, subject, and domain/dimension. Therefore, direct comparisons between scores across grades, subjects, and different domain/dimension types should not be made.

Further information on the International Association for the Evaluation of Educational Achievement may be obtained from https://www.iea.nl.

Trends in International Mathematics and Science Study

The Trends in International Mathematics and Science Study (TIMSS, formerly known as the Third International Mathematics and Science Study) provides data on the mathematics and science achievement of U.S. 4th- and 8th-graders compared with that of their peers in other countries. TIMSS collects information through mathematics and science assessments and questionnaires. The questionnaires request information to help provide a context for student performance. They focus on such topics as students’ attitudes and beliefs about learning mathematics and science, what students do as part of their mathematics and science lessons, students’ completion of homework, and their lives both in and outside of school; teachers’ perceptions of their preparedness for teaching mathematics and science, teaching assignments, class size and organization, instructional content and practices, collaboration with other teachers, and participation in professional development activities; and principals’ viewpoints on policy and budget responsibilities, curriculum and instruction issues, and student behavior. The questionnaires also elicit information on the organization of schools and courses. The assessments and questionnaires are designed to specifications in a guiding framework. The TIMSS framework describes the mathematics and science content to be assessed and provides grade-specific objectives, an overview of the assessment design, and guidelines for item development.

TIMSS is on a 4-year cycle. Data collections occurred in 1995, 1999 (8th grade only), 2003, 2007, 2011, 2015, and 2019. TIMSS 2015 consisted of assessments in 4th-grade mathematics; numeracy (a less difficult version of 4th-grade mathematics, newly developed for 2015); 8th-grade mathematics; 4th-grade science; and 8th-grade science. Students in Bahrain, Indonesia, Iran, Kuwait, Jordan, Morocco, and South Africa as well as Buenos Aires participated in the 4th-grade mathematics assessment through the numeracy assessment. In addition, TIMSS 2015 included the third administration of TIMSS Advanced since 1995. TIMSS Advanced is an international comparative study that measures the advanced mathematics and physics achievement of students in their final year of secondary school (the equivalent of 12th grade in the United States) who are taking or have taken advanced courses. The TIMSS 2015 survey also collected policy-relevant information about students, curriculum emphasis, technology use, and teacher preparation and training.

In TIMSS 2019, mathematics and science assessments and related questionnaires were administered in 64 education systems at the 4th-grade level and 46 education systems at the 8th-grade level. The 2019 assessment introduced eTIMSS, a digital version of TIMSS designed for computer- and tablet-based administration. Approximately half of the participating education systems—including the United States—elected to administer eTIMSS, and the remainder administered the assessment in the traditional paper-and-pencil format (paperTIMSS).

Countries participating in eTIMSS also administered paperTIMSS to a smaller “bridge” sample of students to evaluate mode effects and to link the two versions of the TIMSS assessment. An additional sample of 1,500 tested students was required to administer paperTIMSS booklets containing the TIMSS 2015 trend assessment blocks. This bridge sample was to be obtained by selecting one additional class from a subset of the sampled schools, by selecting a distinct sample of schools, or by a combination of both strategies. In the United States, the bridge sample was obtained using a combination of both strategies. This bridge study enabled the eTIMSS and paperTIMSS achievement results to be reported on the same achievement scale in each grade and subject.

TIMSS 2019 was administered between April and May of 2019 in the United States. The U.S. sample was randomly selected and weighted to be representative of the nation. In order to reliably and accurately represent the performance of each country, international guidelines required that countries sample at least 150 schools and at least 4,000 students per grade (countries with small class sizes of fewer than 30 students per school were directed to consider sampling more schools, more classrooms per school, or both, to meet the minimum target of 4,000 tested students). In the United States, 287 schools and 8,776 students participated at the 4th-grade level, and 273 schools and 8,698 students at the 8th-grade level. The weighted school participation rate for the United States was 76 percent for grade 4 before the inclusion of replacement schools and 88 percent after the inclusion of replacement schools. For grade 8, the weighted school participation rate for the United States was 72 percent before the inclusion of replacement schools and 85 percent after the inclusion of replacement schools. The weighted student participation rate was 96 percent for grade 4 and 94 percent for grade 8.

Progress in International Reading Literacy Study

The Progress in International Reading Literacy Study (PIRLS) provides data on the reading literacy of U.S. 4th-graders compared with that of their peers in other countries. PIRLS is on a 5-year cycle: PIRLS data collections have been conducted in 2001, 2006, 2011, and 2016. In 2016, a total of 58 education systems, including both IEA members and IEA benchmarking participants, participated in the survey. Sixteen of the education systems participating in PIRLS also participated in ePIRLS, an innovative, computer-based assessment of online reading designed to measure students’ approaches to informational reading in an online environment.

PIRLS collects information through a reading literacy assessment and questionnaires that help to provide a context for student performance. Questionnaires are administered to collect information about students’ home and school experiences in learning to read. A student questionnaire addresses students’ attitudes toward reading and their reading habits. In addition, questionnaires are given to students’ teachers and school principals in order to gather information about students’ school experiences in developing reading literacy. In countries other than the United States, a parent questionnaire is also administered. The assessments and questionnaires are designed to specifications in a guiding framework. The PIRLS framework describes the reading content to be assessed and provides objectives specific to 4th grade, an overview of the assessment design, and guidelines for item development.

In PIRLS 2016, representative samples of students in the United States were selected in the manner used in all participating countries and other education systems. The sample design that was employed is generally referred to as a two-stage stratified cluster sample. In the first stage of sampling, individual schools were selected with a probability proportionate to size (PPS) approach, which means that the probability is proportional to the estimated number of students enrolled in the target grade. In the second stage of sampling, intact classrooms were selected within sampled schools.
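
First-stage PPS selection of this kind can be illustrated with a systematic PPS sampler. This is a generic sketch of the technique under stated assumptions, not the actual PIRLS sampling software, and the school IDs and enrollments are hypothetical.

```python
import random

def pps_systematic_sample(schools, n):
    """Select n schools with probability proportionate to size using
    systematic PPS sampling: lay n sampling points at a fixed interval
    along the cumulative enrollment and pick the school each point
    lands in. `schools` is a list of (school_id, enrollment) pairs."""
    total = sum(size for _, size in schools)
    interval = total / n
    start = random.uniform(0, interval)
    points = [start + k * interval for k in range(n)]
    selected, cumulative, i = [], 0.0, 0
    for school_id, size in schools:
        cumulative += size
        while i < n and points[i] <= cumulative:
            selected.append(school_id)
            i += 1
    return selected
```

Note that a school whose enrollment exceeds the sampling interval can be selected more than once; operational surveys typically sort and stratify the school frame before drawing the sample.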

PIRLS guidelines call for a minimum of 150 schools to be sampled, with a minimum of 4,000 students assessed. The basic sample design of one classroom per school was designed to yield a total sample of approximately 4,500 students per population. About 4,400 U.S. students participated in PIRLS in 2016, joining 319,000 other student participants around the world. Accommodations were not provided for students with disabilities or students who were unable to read or speak the language of the test. These students were excluded from the sample. The IEA requirement is that the overall exclusion rate, of which exclusions of schools and students are a part, should not exceed 5 percent of the national desired target population.

In order to minimize the potential for response biases, the IEA has developed participation or response rate standards that apply to all participating education systems. The standards govern whether an education system’s data are included in the PIRLS international datasets and how they are presented in the international reports, if they are included. These standards were set using composites of response rates at the school, classroom, and student and teacher levels. Response rates were calculated with and without the inclusion of substitute schools that were selected to replace schools refusing to participate. In the 2016 PIRLS administered in the United States, the unweighted school response rate was 76 percent, and the weighted school response rate was 75 percent. All schools selected for PIRLS were also asked to participate in ePIRLS. The unweighted school response rate for ePIRLS in the final sample with replacement schools was 89.0 percent and the weighted response rate was 89.1 percent. The weighted and unweighted student response rates for PIRLS were both 94 percent. The weighted and unweighted student response rates for ePIRLS were both 90 percent.

Further information on the TIMSS study may be obtained from

Lydia Malley
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
(202) 245-7266
lydia.malley@ed.gov
https://nces.ed.gov/timss
https://www.iea.nl/studies/iea/timss

Further information on the PIRLS study may be obtained from

Sheila Thompson
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
(202) 245-8330
sheila.thompson@ed.gov
https://nces.ed.gov/surveys/pirls/
https://www.iea.nl/studies/iea/pirls

CLOSE

The Organization for Economic Cooperation and Development (OECD) publishes analyses of national policies and survey data in education, training, and economics in OECD and partner countries. Newer studies include student survey data on financial literacy and on digital literacy.

Education at a Glance

To highlight current education issues and create a set of comparative education indicators that represent key features of education systems, OECD initiated the Indicators of Education Systems (INES) project and charged the Centre for Educational Research and Innovation (CERI) with developing the cross-national indicators for it. The development of these indicators involved representatives of the OECD countries and the OECD Secretariat. Improvements in data quality and comparability among OECD countries have resulted from the country-to-country interaction sponsored through the INES project. The most recent publication in this series is Education at a Glance 2022: OECD Indicators.

Education at a Glance 2022 features data on the 38 OECD countries (Australia, Austria, Belgium, Canada, Chile, Colombia, Costa Rica, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, the Republic of Korea, Latvia, Lithuania, Luxembourg, Mexico, the Netherlands, New Zealand, Norway, Poland, Portugal, the Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Türkiye, the United Kingdom, and the United States) and a number of partner countries, including Argentina, Brazil, China, India, Indonesia, Saudi Arabia, and South Africa.

The OECD Handbook for Internationally Comparative Education Statistics: Concepts, Standards, Definitions and Classifications provides countries with specific guidance on how to prepare information for OECD education surveys; facilitates countries’ understanding of OECD indicators and their use in policy analysis; and provides a reference for collecting and assimilating educational data. Chapter 6 of the OECD Handbook for Internationally Comparative Education Statistics contains a discussion of data quality issues. Users should examine footnotes carefully to recognize some of the data limitations.

Further information on international education statistics may be obtained from

Andreas Schleicher
Director for the Directorate of Education and Skills
  and Special Advisor on Education Policy
  to the OECD’s Secretary General
OECD Directorate for Education and Skills
2, rue André Pascal
75775 Paris Cedex 16
France
andreas.schleicher@oecd.org
https://www.oecd.org/

Online Education Database (OECD.Stat)

The statistical online platform of the OECD, OECD.Stat, allows users to access OECD’s databases for OECD member countries and selected nonmember economies. A user can build tables using selected variables and customizable table layouts, extract and download data, and view metadata on methodology and sources.

Data for educational attainment in this report are pulled directly from OECD.Stat. (Information on these data can be found in chapter A, indicator A1, of annex 3A in Education at a Glance 2022 and accessed at https://oecd-ilibrary.org/education/education-at-a-glance-2022_3197152b-en.) However, to support statistical testing for NCES publications, standard errors for some countries had to be estimated and therefore may not be included on OECD.Stat.

Standard errors for 2010, 2015, 2019, 2020, and 2021 for Japan and the Republic of Korea; for 2010, 2015, and 2021 for Türkiye; for 2010 and 2015 for Israel, the Netherlands, and Poland; for 2010 and 2019 for Slovenia; for 2010 for Australia, Belgium, the Czech Republic, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Luxembourg, Norway, Portugal, the Slovak Republic, and the United Kingdom; and for 2015 for New Zealand were estimated by NCES using a simple random sample assumption. These standard errors are likely to be lower than standard errors that take into account complex sample designs. Standard errors for Canada were calculated by Statistics Canada. Lastly, NCES estimated the standard errors for the OECD average using the sum of squares technique.
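The two estimation approaches described above can be sketched in a few lines of code. This is a minimal illustration, not the NCES production method: the exact formulas NCES applied are not specified beyond "simple random sample assumption" and "sum of squares technique," and the sample values below are hypothetical.

```python
import math

def se_srs_proportion(p, n):
    """Standard error of a proportion under a simple random sample
    assumption: sqrt(p * (1 - p) / n). As the text notes, this is
    likely lower than a standard error that accounts for a complex
    (stratified, clustered) sample design."""
    return math.sqrt(p * (1.0 - p) / n)

def se_unweighted_average(country_ses):
    """Standard error of an unweighted average of k independent
    country estimates via a sum-of-squares technique:
    sqrt(sum of squared SEs) / k."""
    k = len(country_ses)
    return math.sqrt(sum(se * se for se in country_ses)) / k

# Hypothetical example: 54 percent attainment among 12,000 respondents,
# and an average over four countries with assumed standard errors.
se_country = se_srs_proportion(0.54, 12000)
se_oecd_avg = se_unweighted_average([0.4, 0.6, 0.5, 0.7])
```

Under this sketch, ignoring the design effect of clustering and stratification shrinks the reported standard error, which is why the text cautions that the estimated values are "likely to be lower" than design-consistent ones.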

OECD.Stat can be accessed at https://stats.oecd.org/. A user’s guide for OECD.Stat can be accessed at https://stats.oecd.org/Content/themes/OECD/static/help/WBOS%20User%20Guide%20(EN).pdf.

Program for International Student Assessment

The Program for International Student Assessment (PISA) is a system of international assessments organized by the Organization for Economic Cooperation and Development (OECD), an intergovernmental organization of industrialized countries, that focuses on 15-year-olds’ capabilities in reading literacy, mathematics literacy, and science literacy. PISA also includes measures of general, or cross-curricular, competencies such as learning strategies. PISA emphasizes functional skills that students have acquired as they near the end of compulsory schooling.

PISA is a 2-hour assessment. Assessment items include a combination of multiple-choice questions and open-ended questions that require students to develop their own responses. PISA scores are reported on a scale that ranges from 0 to 1,000, with the OECD mean set at 500 and the standard deviation at 100. In each education system, the assessment is translated into the primary language of instruction; in the United States, all materials are written in English.

Forty-three education systems participated in the 2000 PISA; 41 education systems participated in 2003; 57 (30 OECD member countries and 27 nonmember countries or education systems) participated in 2006; and 65 (34 OECD member countries and 31 nonmember countries or education systems) participated in 2009. (An additional nine education systems administered the 2009 PISA in 2010.) In PISA 2012, 65 education systems (34 OECD member countries and 31 nonmember countries or education systems), as well as the states of Connecticut, Florida, and Massachusetts, participated. In the 2015 PISA, 70 education systems (35 OECD member countries and 35 nonmember countries or education systems), as well as the states of Massachusetts and North Carolina and the territory of Puerto Rico, participated. In PISA 2018, 79 education systems (37 OECD member countries and 42 nonmember countries or education systems) participated.

To implement PISA, each of the participating education systems scientifically draws a nationally representative sample of 15-year-olds, regardless of grade level. In the United States, the 2018 PISA sample included 162 participating schools and 4,811 participating students. The overall weighted school response rate was 76 percent, and the overall weighted student response rate was 85 percent.

The intent of PISA reporting is to provide an overall description of performance in reading literacy, mathematics literacy, and science literacy every 3 years, and to provide a more detailed look at each domain in the years when it is the major focus. These cycles will allow education systems to compare changes in trends for each of the three subject areas over time. In the first cycle, PISA 2000, reading literacy was the major focus, occupying roughly two-thirds of assessment time. For 2003, PISA focused on mathematics literacy as well as the ability of students to solve problems in real-life settings. In 2006, PISA focused on science literacy; in 2009, it focused on reading literacy again; and in 2012, it focused on mathematics literacy. PISA 2015 focused on science, as it did in 2006. PISA 2018 focused on reading, as it did in 2009; it also offered an optional assessment of financial literacy, administered by the United States.

Further information on PISA may be obtained from

Samantha Burg
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
samantha.burg@ed.gov
https://nces.ed.gov/surveys/pisa

Teaching and Learning International Survey (TALIS)

The Teaching and Learning International Survey (TALIS) is an international large-scale survey of teachers, teaching, and learning environments in schools, conducted in 2008, 2013, and 2018 by the Organization for Economic Cooperation and Development (OECD). Data from the survey are based on questionnaire responses from nationally representative samples of teachers and their principals in participating countries and education systems.

The main objective of TALIS is to provide accurate and relevant international indicators on teachers and teaching, with the goal of helping countries review current conditions and develop informed education policy. The survey’s core target population is International Standard Classification of Education (ISCED) level 2 (lower secondary) teachers and school principals. ISCED level 2 corresponds to grades 7, 8, and 9 in the United States.

The sample design for TALIS 2018 was a stratified systematic sample, with the school sampling probability proportional to the estimated number of ISCED 2 teachers within each school. Samples were drawn using a two-stage sampling process. In the first stage, a sample of schools was drawn; in the second stage, a sample of teachers within each selected school was drawn.
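The first-stage selection described above, a systematic sample with probability proportional to size (PPS), can be sketched as follows. This is a hypothetical illustration of the general technique, not the TALIS operational sampling code; the school list, sizes, and function names are assumptions.

```python
import random

def pps_systematic_sample(schools, n_sample, seed=None):
    """Systematic PPS sample of schools.

    `schools` is a list of (school_id, estimated_teacher_count) pairs,
    assumed to be pre-sorted by stratum (giving an implicitly stratified
    systematic sample). Selection points are spaced evenly along the
    cumulative size scale, so a school's chance of selection is
    proportional to its estimated number of teachers."""
    rng = random.Random(seed)
    total = sum(size for _, size in schools)
    interval = total / n_sample            # sampling interval on the size scale
    start = rng.uniform(0, interval)       # random start within the first interval
    points = [start + i * interval for i in range(n_sample)]
    selected, cumulative, idx = [], 0.0, 0
    for school_id, size in schools:
        cumulative += size
        # A school is selected once for each point falling in its size range;
        # a school larger than the interval can be selected more than once.
        while idx < n_sample and points[idx] <= cumulative:
            selected.append(school_id)
            idx += 1
    return selected

# Hypothetical frame of four equal-sized schools; two are drawn.
sample = pps_systematic_sample([("A", 10), ("B", 10), ("C", 10), ("D", 10)], 2, seed=1)
```

With equal sizes this reduces to an ordinary systematic sample; unequal sizes tilt selection toward schools with more estimated ISCED 2 teachers, which is the point of the PPS design.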

A minimum sample size of 4,000 teachers from a minimum of 200 schools was required for each participating education system. Replacement schools were identified at the time the TALIS sample was selected by designating the two neighboring schools in the sampling frame as replacements. Within each school, an equal-probability sample of 20 teachers was to be selected, unless fewer than 20 teachers were available (in which case all teachers were selected).

Each education system collected its own data following international guidelines and specifications. The technical standards required that eligible teachers were those teaching at least one ISCED Level 2 class, regardless of subject matter. School principals or head administrators of each sampled school were also asked to participate. School principal and teacher data were collected independently so that teacher eligibility was not dependent on principal participation (or vice versa).

The response-rate target was at least 75 percent of schools and at least 75 percent of teachers across the participating schools in each education system. A minimum participation rate of 50 percent of schools from the original school sample and 75 percent of schools after replacement was required in order for an education system’s data to be included in the main international comparisons. Education systems were allowed to use replacement schools (selected during the sampling process) to increase the response rate as long as the 50 percent benchmark before replacement had been reached.

The data collected by each participating education system were adjudicated to ensure that they met the TALIS technical standards for data collection. The principal and teacher data were adjudicated separately: for school-level data, adjudication depended only on whether the principal participated; for teacher-level data, adjudication depended only on teacher participation (at least 50 percent of the teachers in a school had to participate).

The United States first participated in TALIS in 2013, along with 37 other education systems. The most recent round of data collection was in 2018, with 49 education systems participating. U.S. results for the 2018 administration of TALIS are available at https://nces.ed.gov/surveys/talis/talis2018/, and full results from all three rounds of TALIS are available at https://www.oecd.org/education/talis/.

Further information on TALIS may be obtained from

Mary Coleman
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
mary.coleman@ed.gov
https://nces.ed.gov/surveys/talis/
