
Guide to Sources

National Center for Education Statistics (NCES)

Beginning Postsecondary Students Longitudinal Study

The Beginning Postsecondary Students Longitudinal Study (BPS) provides information on persistence, progress, and attainment for 6 years after initial time of entry into postsecondary education. BPS includes traditional and nontraditional (e.g., older) students and is representative of all beginning students in postsecondary education in a given year. Initially, these individuals are surveyed in the National Postsecondary Student Aid Study (NPSAS) during the year in which they first begin their postsecondary education. These same students are surveyed again 2 and 5 years later through the BPS. By starting with a cohort that has already entered postsecondary education and following it for 6 years, the BPS can determine the extent to which students who start postsecondary education at various ages differ in their progress, persistence, and attainment, as well as their entry into the workforce. The first BPS was conducted in 1989–90, with follow-ups in 1992 (BPS:90/92) and 1994 (BPS:90/94). The second BPS was conducted in 1995–96, with follow-ups in 1998 (BPS:96/98) and 2001 (BPS:96/01). The third BPS was conducted in 2003–04, with follow-ups in 2006 (BPS:04/06) and 2009 (BPS:04/09).

The fourth BPS was conducted in 2012, with a follow-up in 2014 (BPS:12/14) and one planned for 2017. In the base year, 1,690 institutions were sampled, all of which were confirmed eligible to participate. In addition, 128,120 students were sampled, and 123,600 were eligible to participate in the NPSAS:12 study. In the first follow-up (BPS:12/14), of the 35,540 eligible NPSAS:12 sample students, 24,770 responded, for an unweighted student response rate of 70 percent and a weighted response rate of 68 percent.
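
The sketch below (in Python) illustrates how unweighted and weighted response rates of the kind cited above for BPS:12/14 are typically computed. It is not NCES production code, and the case records and weight values in it are hypothetical.

```python
# Illustrative sketch (not NCES production code): computing an unweighted and a
# base-weighted response rate for a set of eligible sample cases.

def response_rates(sample):
    """Return (unweighted, weighted) response rates for a list of eligible cases.

    Each case is a dict with a 'responded' flag and a survey base 'weight'.
    """
    eligible = len(sample)
    respondents = sum(1 for case in sample if case["responded"])
    unweighted = respondents / eligible

    weighted_eligible = sum(case["weight"] for case in sample)
    weighted_respondents = sum(case["weight"] for case in sample if case["responded"])
    weighted = weighted_respondents / weighted_eligible
    return unweighted, weighted

# Tiny hypothetical example: 3 of 4 eligible cases respond.
cases = [
    {"responded": True, "weight": 120.0},
    {"responded": True, "weight": 310.0},
    {"responded": False, "weight": 450.0},
    {"responded": True, "weight": 95.0},
]
unwt, wt = response_rates(cases)
print(f"unweighted: {unwt:.0%}, weighted: {wt:.0%}")
```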

Further information on BPS may be obtained from

Aurora D’Amico
David Richards
Sample Surveys Division
Longitudinal Surveys Branch
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
aurora.damico@ed.gov
david.richards@ed.gov
https://nces.ed.gov/surveys/bps

Common Core of Data

The Common Core of Data (CCD) is NCES’s primary database on public elementary and secondary education in the United States. It is a comprehensive, annual, national statistical database of all public elementary and secondary schools and school districts containing data designed to be comparable across all states. This database can be used to select samples for other NCES surveys and provide basic information and descriptive statistics on public elementary and secondary schools and schooling in general.

The CCD collects statistical information annually from approximately 100,000 public elementary and secondary schools and approximately 18,000 public school districts (including supervisory unions and regional education service agencies) in the 50 states, the District of Columbia, Department of Defense (DoD) dependents schools, the Bureau of Indian Education (BIE), Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. Three categories of information are collected in the CCD survey: general descriptive information on schools and school districts, data on students and staff, and fiscal data. The general school and district descriptive information includes name, address, phone number, and type of locale; the data on students and staff include selected demographic characteristics; and the fiscal data pertain to revenues and current expenditures.

The EDFacts data collection system is the primary collection tool for the CCD. NCES works collaboratively with the Department of Education’s Performance Information Management Service to develop the CCD collection procedures and data definitions. Coordinators from state education agencies (SEAs) submit the CCD data at different levels (school, agency, and state) to the EDFacts collection system. Prior to submitting CCD files to EDFacts, SEAs must collect and compile information from their respective local education agencies (LEAs) through established administrative records systems within their state or jurisdiction.

Once SEAs have completed their submissions, the CCD survey staff analyzes and verifies the data for quality assurance. Even though the CCD is a universe collection and thus not subject to sampling errors, nonsampling errors can occur. The two potential sources of nonsampling errors are nonresponse and inaccurate reporting. NCES attempts to minimize nonsampling errors through the use of annual training of SEA coordinators, extensive quality reviews, and survey editing procedures. In addition, each year SEAs are given the opportunity to revise their state-level aggregates from the previous survey cycle.

The CCD survey consists of five components: The Public Elementary/Secondary School Universe Survey, the Local Education Agency (School District) Universe Survey, the State Nonfiscal Survey of Public Elementary/Secondary Education, the National Public Education Financial Survey (NPEFS), and the School District Finance Survey (F-33).

Public Elementary/Secondary School Universe Survey

The Public Elementary/Secondary School Universe Survey includes all public schools providing education services to prekindergarten (preK), kindergarten, grades 1–13, and ungraded students. For school year (SY) 2015–16, the survey included records for each public elementary and secondary school in the 50 states, the District of Columbia, the DoD dependents schools (overseas and domestic), the Bureau of Indian Education (BIE), Puerto Rico, American Samoa, the Northern Mariana Islands, Guam, and the U.S. Virgin Islands.

The Public Elementary/Secondary School Universe Survey includes data for the following variables: NCES school ID number, state school ID number, name of the school, name of the agency that operates the school, mailing address, physical location address, phone number, school type, operational status, locale code, latitude, longitude, county number, county name, full-time-equivalent (FTE) classroom teacher count, low/high grade span offered, congressional district code, school level, students eligible for free lunch, students eligible for reduced-price lunch, total students eligible for free and reduced-price lunch, and student totals and detail (by grade, by race/ethnicity, and by sex). The survey also contains flags indicating whether a school is Title I eligible, schoolwide Title I eligible, a magnet school, a charter school, a shared-time school, or a BIE school, as well as which grades are offered at the school.
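
As a reading aid only, the sketch below represents a few of the variables listed above as a simple record and derives the combined free and reduced-price lunch count. The field names are simplified and are not the official CCD variable names, and the record shown is hypothetical.

```python
# Illustrative sketch only: a minimal record holding a few of the variables listed
# above. Field names are simplified; they are not the official CCD variable names.
from dataclasses import dataclass

@dataclass
class SchoolRecord:
    nces_school_id: str
    school_name: str
    charter: bool
    free_lunch_eligible: int
    reduced_price_lunch_eligible: int
    total_students: int

    @property
    def total_frl_eligible(self) -> int:
        # The combined free and reduced-price lunch total is the sum of the two counts.
        return self.free_lunch_eligible + self.reduced_price_lunch_eligible

# Hypothetical record and a commonly derived statistic (percent eligible).
school = SchoolRecord("010000500870", "Example Elementary", False, 210, 45, 400)
print(f"{school.total_frl_eligible / school.total_students:.1%} eligible for free or reduced-price lunch")
```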

Local Education Agency (School District) Universe Survey

The coverage of the Local Education Agency Universe Survey includes all school districts and administrative units providing education services to preK, kindergarten, grades 1–13, and ungraded students. The Local Education Agency Universe Survey includes records for the 50 states, the District of Columbia, Puerto Rico, the Bureau of Indian Education (BIE), American Samoa, Guam, the Northern Mariana Islands, the U.S. Virgin Islands, and the DoD dependents schools (overseas and domestic).

The Local Education Agency Universe Survey includes the following variables: NCES agency ID number, state agency ID number, agency name, phone number, mailing address, physical location address, agency type code, supervisory union number, American National Standards Institute (ANSI) state and county code, county name, core based statistical area (CBSA), metropolitan/micropolitan code, metropolitan status code, locale code, congressional district, operational status code, BIE agency status, low/high grade span offered, agency charter status, number of schools, number of full-time-equivalent teachers, number of ungraded students, number of preK–13 students, number of special education/Individualized Education Program students, number of English language learner students, instructional staff fields, support staff fields, and LEA charter status.

State Nonfiscal Survey of Public Elementary/Secondary Education

The State Nonfiscal Survey of Public Elementary/Secondary Education for the 2015–16 school year provides state-level, aggregate information about students and staff in public elementary and secondary education. It includes data from the 50 states, the District of Columbia, Puerto Rico, the U.S. Virgin Islands, the Northern Mariana Islands, Guam, and American Samoa. The DoD dependents schools (overseas and domestic) and the BIE are also included in the survey universe. This survey covers public school student membership by grade, race/ethnicity, and state or jurisdiction, as well as the number of staff in public schools by category and state or jurisdiction. Beginning with the 2006–07 school year, the numbers of diploma recipients and other high school completers are no longer included in the State Nonfiscal Survey of Public Elementary/Secondary Education File. These data are now published in the public-use CCD State Dropout and Completion Data File.

National Public Education Financial Survey

The purpose of the National Public Education Financial Survey (NPEFS) is to provide district, state, and federal policymakers, researchers, and other interested users with descriptive information about revenues and expenditures for public elementary and secondary education. The data collected are useful to (1) chief officers of state education agencies; (2) policymakers in the executive and legislative branches of federal and state governments; (3) education policy and public policy researchers; and (4) the public, journalists, and others.

Data for NPEFS are collected from state education agencies (SEAs) in the 50 states, the District of Columbia, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. The data file is organized by state or jurisdiction and contains revenue data by funding source; expenditure data by function (the activity being supported by the expenditure) and object (the category of expenditure); average daily attendance data; and total student membership data from the CCD State Nonfiscal Survey of Public Elementary/Secondary Education.

School District Finance Survey

The purpose of the School District Finance Survey (F-33) is to provide finance data for all local education agencies (LEAs) that provide free public elementary and secondary education in the United States. National and state totals are not included (national- and state-level figures are presented, however, in the National Public Education Financial Survey).

NCES partners with the U.S. Census Bureau in the collection of school district finance data. The Census Bureau distributes Census Form F-33, Annual Survey of School System Finances, to all SEAs, and representatives from the SEAs collect and edit data from their LEAs and submit data to the Census Bureau. The Census Bureau then produces two data files: one for distribution and reporting by NCES and the other for distribution and reporting by the Census Bureau. The files include variables for revenues by source, expenditures by function and object, indebtedness, assets, and student membership counts, as well as identification variables.

Further information on the nonfiscal CCD data may be obtained from

Mark Glander
Elementary and Secondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
mark.glander@ed.gov
https://nces.ed.gov/ccd

Further information on the fiscal CCD data may be obtained from

Stephen Cornman
Elementary and Secondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
stephen.cornman@ed.gov
https://nces.ed.gov/ccd

Early Childhood Longitudinal Study, Kindergarten Class of 2010–11

The Early Childhood Longitudinal Study, Kindergarten Class of 2010–11 (ECLS-K:2011) provides detailed information on the school achievement and experiences of students throughout their elementary school years. The students who participated in the ECLS-K:2011 were followed longitudinally from the kindergarten year (the 2010–11 school year) through the spring of 2016, when most of them were expected to be in 5th grade. This sample of students is designed to be nationally representative of all students who were enrolled in kindergarten or who were of kindergarten age and being educated in an ungraded classroom or school in the United States in the 2010–11 school year, including those in public and private schools, those who attended full-day and part-day programs, those who were in kindergarten for the first time, and those who were kindergarten repeaters. Students who attended early learning centers or institutions that offered education only through kindergarten are included in the study sample and represented in the cohort.

The ECLS-K:2011 places emphasis on measuring students’ experiences within multiple contexts and development in multiple domains. The design of the study includes the collection of information from the students, their parents/guardians, their teachers, and their schools. Information was collected from their before- and after-school care providers in the kindergarten year.

A nationally representative sample of approximately 18,170 children from about 1,310 schools participated in the base-year administration of the ECLS-K:2011 in the 2010–11 school year. The sample included children from different racial/ethnic and socioeconomic backgrounds. Asian/Pacific Islander students were oversampled to ensure that the sample included enough students of this race/ethnicity to make accurate estimates for the group as a whole. Nine data collections were conducted: fall and spring of the children’s kindergarten year (the base year), fall 2011 and spring 2012 (the 1st-grade year), fall 2012 and spring 2013 (the 2nd-grade year), spring 2014 (the 3rd-grade year), spring 2015 (the 4th-grade year), and spring 2016 (the 5th-grade year), which was the final data collection. Although the study refers to later rounds of data collection by the grade the majority of children are expected to be in (that is, the modal grade for children who were in kindergarten in the 2010–11 school year), children are included in subsequent data collections regardless of their grade level.

A total of approximately 780 of the 1,310 originally sampled schools participated during the base year of the study. This translates to a weighted unit response rate (weighted by the base weight) of 63 percent for the base year. In the base year, the weighted child assessment unit response rate was 87 percent for the fall data collection and 85 percent for the spring collection, and the weighted parent unit response rate was 74 percent for the fall collection and 67 percent for the spring collection.

Fall and spring data collections were conducted in the 2011–12 school year, when the majority of the children were in the 1st grade. The fall collection was conducted within a 33 percent subsample of the full base-year sample, and the spring collection was conducted within the full base-year sample. The weighted child assessment unit response rate was 89 percent for the fall data collection and 88 percent for the spring collection, and the weighted parent unit response rate was 87 percent for the fall data collection and 76 percent for the spring data collection.

In the 2012–13 data collection (when the majority of the children were in the 2nd grade) the weighted child assessment unit response rate was 84.0 percent in the fall and 83.4 percent in the spring. In the 2014 spring data collection (when the majority of the children were in the 3rd grade), the weighted child assessment unit response rate was 79.9 percent.

Further information on ECLS-K:2011 may be obtained from

Gail Mulligan
Jill McCarroll
Sample Surveys Division
Longitudinal Surveys Branch
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
ecls@ed.gov
https://nces.ed.gov/ecls/kindergarten2011.asp

EDFacts

EDFacts is a centralized data collection through which state education agencies submit preK–12 education data to the U.S. Department of Education (ED). All data in EDFacts are organized into "data groups" and reported to ED using defined file specifications. Depending on the data group, state education agencies may submit aggregate counts for the state as a whole or detailed counts for individual schools or school districts. EDFacts does not collect student-level records. The entities that are required to report EDFacts data vary by data group but may include the 50 states, the District of Columbia, the Department of Defense (DoD) dependents schools, the Bureau of Indian Education, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. More information about EDFacts file specifications and data groups can be found at https://www.ed.gov/EDFacts.

EDFacts is a universe collection and is not subject to sampling error, but nonsampling errors such as nonresponse and inaccurate reporting may occur. The U.S. Department of Education attempts to minimize nonsampling errors by training data submission coordinators and reviewing the quality of state data submissions. However, anomalies may still be present in the data.

Differences in state data collection systems may limit the comparability of EDFacts data across states and across time. To build EDFacts files, state education agencies rely on data that were reported by their schools and school districts. The systems used to collect these data are evolving rapidly and differ from state to state.

In some cases, EDFacts data may not align with data reported on state education agency websites. States may update their websites on schedules different from those they use to report data to ED. Furthermore, ED may use methods for protecting the privacy of individuals represented within the data that could be different from the methods used by an individual state.

EDFacts data on homeless students enrolled in public schools are collected in data group 655 within file 118. EDFacts data on English language learners enrolled in public schools are collected in data group 678 within file 141. EDFacts four-year adjusted cohort graduation rate (ACGR) data are collected in data group 695 within file 150 and in data group 696 within file 151. EDFacts collects these data groups on behalf of the Office of Elementary and Secondary Education.
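
For reference, the sketch below shows the standard four-year ACGR computation. It is an illustration based on the common definition of the adjusted cohort (first-time 9th-graders plus transfers in, minus students who transfer out, emigrate, or die), not an excerpt from the EDFacts file specifications, and the counts are hypothetical.

```python
# Illustrative sketch of the standard four-year adjusted cohort graduation rate (ACGR).
# The denominator is the 9th-grade cohort adjusted for students who transfer in,
# transfer out, emigrate, or die during the four years.

def four_year_acgr(first_time_9th_graders: int,
                   transfers_in: int,
                   transfers_out: int,
                   emigrated_or_deceased: int,
                   regular_diplomas_in_4_years: int) -> float:
    adjusted_cohort = (first_time_9th_graders + transfers_in
                       - transfers_out - emigrated_or_deceased)
    return regular_diplomas_in_4_years / adjusted_cohort

# Hypothetical counts for one school district.
print(f"ACGR: {four_year_acgr(1000, 60, 90, 5, 820):.1%}")
```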

For more information about EDFacts, please contact

EDFacts
Elementary/Secondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
EDFacts@ed.gov
https://www.ed.gov/EDFacts

Fast Response Survey System

The Fast Response Survey System (FRSS) was established in 1975 to collect issue-oriented data quickly, with a minimal burden on respondents. The FRSS, whose surveys collect and report data on key education issues at the elementary and secondary levels, was designed to meet the data needs of Department of Education analysts, planners, and decisionmakers when information could not be collected quickly through NCES’s large recurring surveys. Findings from FRSS surveys have been included in congressional reports, testimony to congressional subcommittees, NCES reports, and other Department of Education reports. The findings are also often used by state and local education officials.

Data collected through FRSS surveys are representative at the national level, drawing from a sample that is appropriate for each study. The FRSS collects data from state education agencies and national samples of other educational organizations and participants, including local education agencies, public and private elementary and secondary schools, elementary and secondary school teachers and principals, and public libraries and school libraries. To ensure a minimal burden on respondents, the surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Sample sizes are relatively small (usually about 1,000 to 1,500 respondents per survey) so that data collection can be completed quickly.

Further information on the FRSS may be obtained from

John Ralph
Annual Reports and Information Staff
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
john.ralph@ed.gov
https://nces.ed.gov/surveys/frss

School Safety and Discipline

The FRSS survey "School Safety and Discipline: 2013–14" (FRSS 106, 2014) collected nationally representative data on public school safety and discipline for the 2013–14 school year. The topics covered included specific safety and discipline plans and practices, training for classroom teachers and aides related to school safety and discipline issues, security personnel, frequency of specific discipline problems, and number of incidents of various offenses.

The survey was mailed to approximately 1,600 regular public schools in the 50 states and the District of Columbia. Recipients were informed that the survey was designed to be completed by the person most knowledgeable about safety and discipline at the school. The unweighted survey response rate was 86 percent, and the weighted response rate using the initial base weights was 85 percent. The survey weights were adjusted for questionnaire nonresponse, and the data were then weighted to yield national estimates that represent all eligible regular public schools in the United States. The report Public School Safety and Discipline: 2013–14 (NCES 2015-051) presents selected findings from the survey.

Further information on this FRSS survey may be obtained from

John Ralph
Annual Reports and Information Staff
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
john.ralph@ed.gov
https://nces.ed.gov/surveys/frss

High School and Beyond Longitudinal Study

The High School and Beyond Longitudinal Study (HS&B) is a nationally representative sample survey of individuals who were high school sophomores and seniors in 1980. As a large-scale, longitudinal survey, its primary purpose is to observe the educational and occupational plans and activities of young people as they pass through the American educational system and take on their adult roles. The study contributes to the understanding of the development of young adults and the factors that determine individual education and career outcomes. The availability of this longitudinal data encourages research in such areas as the strength of secondary school curricula, the quality and effectiveness of secondary and postsecondary schooling, the demand for postsecondary education, problems of financing postsecondary education, and the adequacy of postsecondary alternatives open to high school students.

The HS&B survey gathered data on the education, work, and family experiences of young adults for the pivotal years during and immediately following high school. The student questionnaire covered school experiences, activities, attitudes, plans, selected background characteristics, and language proficiency. Parents were asked about their educational aspirations for their children and plans for how their postsecondary education would be financed. Teachers were surveyed regarding their assessments of their students’ futures. The survey also collected detailed information, from complete high school transcripts, on courses taken and grades achieved.

The base-year survey (conducted in 1980) was a probability sample of 1,015 high schools with a target number of 36 sophomores and 36 seniors in each school. A total of 58,270 students participated in the base-year survey. Substitutions were made for nonparticipating schools—but not for students—in those strata where it was possible. Overall, 1,120 schools were selected in the original sample and 810 of these schools participated in the survey. An additional 200 schools were drawn in a replacement sample. Student refusals and absences resulted in an 82 percent completion rate for the survey.

Several small groups in the population were oversampled to allow for special study of certain types of schools and students. Students completed questionnaires and took a battery of cognitive tests. In addition, a sample of parents of sophomores and seniors (about 3,600 for each cohort) was surveyed.

HS&B first follow-up activities took place in the spring of 1982. The sample for the first follow-up survey included approximately 30,000 individuals who were sophomores in 1980. The completion rate for sample members eligible for on-campus survey administration was about 96 percent. About 89 percent of the students who left school between the base-year and first follow-up surveys (e.g., dropouts, transfer students, and early graduates) completed the first follow-up sophomore questionnaire.

As part of the first follow-up survey of HS&B, transcripts were requested in fall 1982 for an 18,150-member subsample of the sophomore cohort. Of the 15,940 transcripts actually obtained, 12,120 transcripts represented students who had graduated in 1982 and thus were eligible for use in the overall curriculum analysis presented in this publication. All courses in each transcript were assigned a 6-digit code based on the Classification of Secondary School Courses (a coding system developed to standardize course descriptions; see https://nces.ed.gov/surveys/hst/courses.asp). Credits earned in each course are expressed in Carnegie units. (The Carnegie unit is a standard of measurement that represents one credit for the completion of a 1-year course. To receive credit for a course, the student must have received a passing grade—"pass," "D," or higher.) Students who transferred from public to private schools or from private to public schools between their sophomore and senior years were eliminated from public/private analyses.
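
The sketch below applies the credit-counting rule described above: a course contributes its Carnegie units only when the grade is passing ("pass," "D," or higher). The course codes, titles, and grades shown are hypothetical, and the grade set is illustrative rather than an official coding scheme.

```python
# Illustrative sketch of the credit-counting rule described above: a course
# contributes its Carnegie units only if the grade is passing.

PASSING = {"A", "A-", "B+", "B", "B-", "C+", "C", "C-", "D+", "D", "PASS"}

def earned_carnegie_units(transcript):
    """Sum Carnegie units over courses with passing grades."""
    return sum(course["units"] for course in transcript
               if course["grade"].upper() in PASSING)

# Hypothetical transcript records; the 6-digit course codes are made up.
transcript = [
    {"course_code": "270404", "title": "Algebra 2",  "units": 1.0, "grade": "B"},
    {"course_code": "230109", "title": "English 11", "units": 1.0, "grade": "D"},
    {"course_code": "340102", "title": "Driver Ed",  "units": 0.5, "grade": "F"},
]
print(earned_carnegie_units(transcript))  # 2.0: the failed course earns no credit
```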

In designing the senior cohort first follow-up survey, one of the goals was to reduce the size of the retained sample while still keeping sufficient numbers of various racial/ethnic groups to allow important policy analyses. A total of about 11,230 (93.6 percent) of the 12,000 individuals subsampled completed the questionnaire. Information was obtained about the respondents’ school and employment experiences, family status, and attitudes and plans.

The samples for the second follow-up, which took place in spring 1984, consisted of about 12,000 members of the senior cohort and about 15,000 members of the sophomore cohort. The completion rate for the senior cohort was 91 percent, and the completion rate for the sophomore cohort was 92 percent.

HS&B third follow-up data collection activities were performed in spring 1986. Both the sophomore and senior cohort samples for this round of data collection were the same as those used for the second follow-up survey. The completion rates for the sophomore and senior cohort samples were 91 percent and 88 percent, respectively.

HS&B fourth follow-up data collection activities were performed in 1992 but only covered the 1980 sophomore class. These activities included examining aspects of these students’ early adult years, such as enrollment in postsecondary education, experience in the labor market, marriage and child rearing, and voting behavior.

An NCES series of technical reports and data file user’s manuals, available electronically, provides additional information on the survey methodology.

Further information on HS&B may be obtained from

Aurora D’Amico
Sample Surveys Division
Longitudinal Surveys Branch
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
aurora.damico@ed.gov
https://nces.ed.gov/surveys/hsb

High School Longitudinal Study of 2009

The High School Longitudinal Study of 2009 (HSLS:09) is a nationally representative, longitudinal study of approximately 21,000 9th-grade students in 944 schools who will be followed through their secondary and postsecondary years. The study focuses on understanding students’ trajectories from the beginning of high school into postsecondary education, the workforce, and beyond. The HSLS:09 questionnaire is focused on, but not limited to, information on science, technology, engineering, and mathematics (STEM) education and careers. It is designed to provide data on mathematics and science education, the changing high school environment, and postsecondary education. This study features a new student assessment in algebra skills, reasoning, and problem solving and includes surveys of students, their parents, math and science teachers, and school administrators, as well as a new survey of school counselors.

The HSLS:09 base year took place in the 2009–10 school year, with a randomly selected sample of fall-term 9th-graders in more than 900 public and private high schools that had both a 9th and an 11th grade. Students took a mathematics assessment and survey online. Students’ parents, principals, and mathematics and science teachers and the school’s lead counselor completed surveys on the phone or online.

The HSLS:09 student questionnaire includes interest and motivation items for measuring key factors predicting choice of postsecondary paths, including majors and eventual careers. This study explores the roles of different factors in the development of a student’s commitment to attend college and then take the steps necessary to succeed in college (the right courses, courses in specific sequences, etc.). Questionnaires in this study have asked more questions of students and parents regarding reasons for selecting specific colleges (e.g., academic programs, financial aid and access prices, and campus environment).

The first follow-up of HSLS:09 occurred in the spring of 2012, when most sample members were in the 11th grade. Data files and documentation for the first follow-up were released in fall 2013 and are available on the NCES website.

A between-round postsecondary status update survey took place in the spring of students’ expected graduation year (2013). It asked respondents about college applications, acceptances, and rejections, as well as their actual college choices. In the fall of 2013 and the spring of 2014, high school transcripts were collected and coded.

A full second follow-up took place in 2016, when most sample members were 3 years beyond high school graduation. Additional follow-ups are planned, to at least age 30.

Further information on HSLS:09 may be obtained from

Elise Christopher
Sample Surveys Division
Longitudinal Surveys Branch
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
hsls09@ed.gov
https://nces.ed.gov/surveys/hsls09

Integrated Postsecondary Education Data System

The Integrated Postsecondary Education Data System (IPEDS) surveys over 6,000 postsecondary institutions, including universities and colleges, as well as institutions offering technical and vocational education beyond the high school level. IPEDS, an annual universe collection that began in 1986, replaced the Higher Education General Information Survey (HEGIS).

IPEDS consists of 12 interrelated survey components that provide information on postsecondary institutions and academic libraries at these institutions, student enrollment, student financial aid, programs offered, retention and graduation rates, degrees and certificates conferred, and the human and financial resources involved in the provision of institutionally based postsecondary education. Prior to 2000, the IPEDS survey had the following subject-matter components: Institutional Characteristics; Enrollment; Fall Staff; Salaries, Tenure, and Fringe Benefits of Full-Time Faculty; Completions; Finance; Graduation Rates; and Academic Libraries (in 2000, the Academic Libraries component separated from the IPEDS collection).

Since 2000, IPEDS survey components occurring in a particular collection year have been organized into three seasonal collection periods: fall, winter, and spring. The Institutional Characteristics and Completions components first took place during the fall 2000 collection. The Employees by Assigned Position (EAP); Salaries, Tenure, and Fringe Benefits of Full-Time Faculty; and Fall Staff components first took place during the winter 2001–02 collection. The Enrollment, Student Financial Aid, Finance, and Graduation Rates components first took place during the spring 2001 collection. In the winter 2005–06 data collection, the EAP; Fall Staff; and Salaries, Tenure, and Fringe Benefits of Full-Time Faculty components were merged into the Human Resources component. During the 2007–08 collection year, the Enrollment component was broken into two separate components: 12-Month Enrollment (taking place in the fall collection) and Fall Enrollment (taking place in the spring collection).

In the 2011–12 IPEDS data collection year, the Student Financial Aid component was moved to the winter data collection to aid in the timing of the net price of attendance calculations displayed on the College Navigator (https://nces.ed.gov/collegenavigator/). In the 2012–13 IPEDS data collection year, the Human Resources component was moved from the winter data collection to the spring data collection, and in the 2013–14 data collection year, the Graduation Rates and Graduation Rates 200 Percent components were moved from the spring data collection to the winter data collection. In the 2014–15 data collection year, a new component (Admissions) was added to IPEDS and a former IPEDS component (Academic Libraries) was reintegrated into IPEDS. The Admissions component, created out of admissions data contained in the fall collection’s Institutional Characteristics component, was made a part of the winter collection. The Academic Libraries component, after having been conducted as a survey independent of IPEDS between 2000 and 2012, was reintegrated into IPEDS as part of the spring collection. Finally, in the 2015–16 data collection year, the Outcome Measures survey component was added to IPEDS.

Beginning in 2008–09, the first-professional degree category was combined with the doctor’s degree category. However, some degrees formerly identified as first-professional that take more than 2 full-time-equivalent academic years to complete, such as those in Theology (M.Div, M.H.L./Rav), are included in the master’s degree category. Doctor’s degrees were broken out into three distinct categories: research/scholarship, professional practice, and other doctor’s degrees.

IPEDS race/ethnicity data collection also changed in 2008–09. The “Asian” race category is now separate from a “Native Hawaiian or Other Pacific Islander” category, and a new category of “Two or more races” has been added.

The degree-granting institutions portion of IPEDS is a census of colleges that award associate’s or higher degrees and are eligible to participate in Title IV financial aid programs. Prior to 1993, data from technical and vocational institutions were collected through a sample survey. Beginning in 1993, all data are gathered in a census of all postsecondary institutions. Beginning in 1997, the survey was restricted to institutions participating in Title IV programs.

The classification of institutions offering college and university education changed as of 1996. Prior to 1996, institutions that either had courses leading to an associate’s or higher degree or that had courses accepted for credit toward those degrees were considered higher education institutions. Higher education institutions were accredited by an agency or association that was recognized by the U.S. Department of Education or were recognized directly by the Secretary of Education. The newer standard includes institutions that award associate’s or higher degrees and that are eligible to participate in Title IV federal financial aid programs. Tables that contain any data according to this standard are titled “degree-granting” institutions. Time-series tables may contain data from both series, and they are noted accordingly. The impact of this change on data collected in 1996 was not large. For example, tables on faculty salaries and benefits were only affected to a very small extent. Also, degrees awarded at the bachelor’s level or higher were not heavily affected. The largest impact was on private 2-year college enrollment. In contrast, most of the data on public 4-year colleges were affected to a minimal extent. The impact on enrollment in public 2-year colleges was noticeable in certain states, such as Arizona, Arkansas, Georgia, Louisiana, and Washington, but was relatively small at the national level. Overall, total enrollment for all institutions was about one-half of 1 percent higher in 1996 for degree-granting institutions than for higher education institutions.

Prior to the establishment of IPEDS in 1986, HEGIS acquired and maintained statistical data on the characteristics and operations of higher education institutions. Implemented in 1966, HEGIS was an annual universe survey of institutions accredited at the college level by an agency recognized by the Secretary of the U.S. Department of Education. These institutions were listed in NCES’s Education Directory, Colleges and Universities.

HEGIS surveys collected information on institutional characteristics, faculty salaries, finances, enrollment, and earned degrees. Since these surveys, like IPEDS, were distributed to all higher education institutions, the data presented are not subject to sampling error. However, they are subject to nonsampling error, the sources of which varied with the survey instrument.

The NCES Taskforce for IPEDS Redesign recognized that there were issues related to the consistency of data definitions as well as the accuracy, reliability, and validity of other quality measures within and across surveys. The IPEDS redesign in 2000 provided institution-specific web-based data forms. While the new system shortened data processing time and provided better data consistency, it did not address the accuracy of the data provided by institutions.

Beginning in 2003–04 with the Prior Year Data Revision System, prior-year data have been available to institutions entering current data. This allows institutions to make changes to their prior-year entries either by adjusting the data or by providing missing data. These revisions allow the evaluation of the data’s accuracy by looking at the changes made.

NCES conducted a study (NCES 2005-175) of the 2002–03 data that were revised in 2003–04 to determine the accuracy of the imputations, track the institutions that submitted revised data, and analyze the revised data they submitted. When institutions made changes to their data, NCES accepted that the revised data were the most accurate, correct, and “true” data. The data were analyzed for the number and type of institutions making changes, the type of changes, the magnitude of the changes, and the impact on published data.

Because NCES imputes for missing data, imputation procedures were also addressed by the Redesign Taskforce. For the 2003–04 assessment, differences between revised values and values that were imputed in the original files were compared (i.e., revised value minus imputed value). These differences were then used to provide an assessment of the effectiveness of imputation procedures. The size of the differences also provides an indication of the accuracy of imputation procedures. To assess the overall impact of changes on aggregate IPEDS estimates, published tables for each component were reconstructed using the revised 2002–03 data. These reconstructed tables were then compared to the published tables to determine the magnitude of aggregate bias and the direction of this bias.
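
As an illustration of the comparison described above, the sketch below computes revised-minus-imputed differences for items that were originally imputed and later revised by institutions, along with their mean as a rough indicator of bias. The institutions and values shown are hypothetical.

```python
# Illustrative sketch of the imputation assessment described above: for items that
# were imputed in the original file and later revised, compute revised minus imputed.

def imputation_differences(records):
    """Return per-item differences (revised - imputed) and their mean."""
    diffs = [r["revised"] - r["imputed"] for r in records]
    mean_diff = sum(diffs) / len(diffs)
    return diffs, mean_diff

# Hypothetical items from three institutions.
items = [
    {"institution": "A", "imputed": 1250, "revised": 1300},
    {"institution": "B", "imputed": 480,  "revised": 455},
    {"institution": "C", "imputed": 2040, "revised": 2040},
]
diffs, bias = imputation_differences(items)
print(diffs, f"mean difference: {bias:.1f}")
```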

Since the 2000–01 data collection year, IPEDS data collections have been web-based. Data have been provided by “keyholders,” institutional representatives appointed by campus chief executives, who are responsible for ensuring that survey data submitted by the institution are correct and complete. Because Title IV institutions are the primary focus of IPEDS and because these institutions are required to respond to IPEDS, response rates for Title IV institutions have been high (data on specific components are cited below). More details on the accuracy and reliability of IPEDS data can be found in the Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175).

Further information on IPEDS may be obtained from

Sam Barbett
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
samuel.barbett@ed.gov
https://nces.ed.gov/ipeds

Fall (12-Month Enrollment)

The 12-month period during which data are collected is July 1 through June 30. Data are collected by race/ethnicity, gender, and level of study (undergraduate or postbaccalaureate) and include unduplicated headcounts and instructional activity (contact or credit hours). These data are also used to calculate a full-time-equivalent (FTE) enrollment based on instructional activity. FTE enrollment is useful for gauging the size of the educational enterprise at the institution. Prior to the 2007–08 IPEDS data collection, the data collected in the 12-Month Enrollment component were part of the Fall Enrollment component, which is conducted during the spring data collection period. However, to improve the timeliness of the data, a separate 12-Month Enrollment survey component was developed in 2007. These data are now collected in the fall for the previous academic year. The response rate for the 12-Month Enrollment component of the fall 2016 data collection was nearly 100 percent. Data from 5 of 6,756 Title IV institutions that were expected to respond to this component contained item nonresponse, and these missing items were imputed.
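
The sketch below illustrates how an FTE enrollment figure can be derived from 12-month instructional activity. The credit-hour divisors used (30 undergraduate and 24 graduate credit hours per FTE, appropriate to a semester calendar) are stated here as assumptions for illustration; the official IPEDS calculation varies with the institution's calendar system and with contact-hour reporting.

```python
# Illustrative sketch of deriving FTE enrollment from 12-month instructional activity.
# The divisors are illustrative assumptions for a semester calendar, not the
# authoritative IPEDS values for every calendar system.

def fte_from_credit_hours(undergrad_credit_hours: float,
                          grad_credit_hours: float,
                          ug_hours_per_fte: float = 30.0,
                          gr_hours_per_fte: float = 24.0) -> float:
    return (undergrad_credit_hours / ug_hours_per_fte
            + grad_credit_hours / gr_hours_per_fte)

# Hypothetical institution reporting 150,000 undergraduate and 12,000 graduate credit hours.
print(round(fte_from_credit_hours(150_000, 12_000)))  # -> 5500
```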

Further information on the IPEDS 12-Month Enrollment component may be obtained from

Aida Aliyeva
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
aaliyeva@air.org
https://nces.ed.gov/ipeds

Fall (Completions)

This survey was part of the HEGIS series throughout its existence. However, the degree classification taxonomy was revised in 1970–71, 1982–83, 1991–92, 2002–03, and 2009–10. Collection of degree data has been maintained through IPEDS.

The nonresponse rate does not appear to be a significant source of nonsampling error for this survey. The response rate over the years has been high; for the fall 2017 Completions component, it rounded to 100 percent. Data from 3 of the 6,642 Title IV institutions that were expected to respond to this component were imputed due to unit nonresponse. Imputation methods for the fall 2017 IPEDS Completions component are discussed in the 2017–18 Integrated Postsecondary Education Data System (IPEDS) Methodology Report (NCES 2018-195).

Further information on the IPEDS Completions component may be obtained from

Tara Lawley
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
tara.lawley@ed.gov
https://nces.ed.gov/ipeds

Fall (Institutional Characteristics)

This survey collects the basic information necessary to classify institutions, including control, level, and types of programs offered, as well as information on tuition, fees, and room and board charges. Beginning in 2000, the survey collected institutional pricing data from institutions with first-time, full-time, degree/certificate-seeking undergraduate students. Unduplicated full-year enrollment counts and instructional activity are now collected in the 12-Month Enrollment survey. Beginning in 2008–09, the student financial aid data collected include greater detail.

In the fall 2017 data collection, the response rate for Title IV entities on the Institutional Characteristics component rounded to 100 percent. Of the 6,715 Title IV entities that were expected to respond to this component, 2 responses were missing, and these data were imputed. In addition, some data were imputed for 2 institutions that partially responded to the Institutional Characteristics component.

Further information on the IPEDS Institutional Characteristics component may be obtained from

Moussa Ezzeddine
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
moussa.ezzeddine@ed.gov
https://nces.ed.gov/ipeds

Winter (Student Financial Aid)

This component was part of the spring data collection from IPEDS data collection years 2000–01 to 2010–11, but it moved to the winter data collection starting with the 2011–12 IPEDS data collection year. This move assists with the timing of the net price of attendance calculations displayed on College Navigator (https://nces.ed.gov/collegenavigator/).

Financial aid data are collected for undergraduate students. Data are collected regarding federal grants, state and local government grants, institutional grants, and loans. The collected data include the number of students receiving each type of financial assistance and the average amount of aid received by type of aid. Beginning in 2008–09, the student financial aid data collected include greater detail on the types of aid offered.

In the winter 2017–18 data collection, the Student Financial Aid component collected data about financial aid awarded to undergraduate students, with particular emphasis on full-time, first-time degree/certificate-seeking undergraduate students awarded financial aid for the 2016–17 academic year. In addition, the component collected data on undergraduate and graduate students receiving benefits for veterans and members of the military service. Finally, student counts and awarded aid amounts were collected to calculate the net price of attendance for two subsets of full-time, first-time degree/certificate-seeking undergraduate students: those awarded any grant aid, and those awarded Title IV aid.
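
The sketch below illustrates the kind of net price calculation these data support: the average total cost of attendance minus average grant or scholarship aid, computed for full-time, first-time degree/certificate-seeking undergraduates awarded grant aid. The cost components and dollar figures shown are hypothetical.

```python
# Illustrative sketch of a net price calculation: average cost of attendance minus
# average grant/scholarship aid. Component names and amounts are hypothetical.

def average_net_price(tuition_and_fees: float,
                      books_and_supplies: float,
                      room_and_board: float,
                      other_expenses: float,
                      avg_grant_scholarship_aid: float) -> float:
    cost_of_attendance = (tuition_and_fees + books_and_supplies
                          + room_and_board + other_expenses)
    return cost_of_attendance - avg_grant_scholarship_aid

print(average_net_price(9_800, 1_200, 10_500, 2_300, 7_600))  # -> 16200.0
```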

The response rate for the Student Financial Aid component in 2017–18 was nearly 100 percent. Of the 6,544 Title IV institutions that were expected to respond, responses were missing for 28 institutions, and these missing data were imputed. Additionally, data from 2 institutions that responded to the Student Financial Aid component contained item nonresponse, and these missing items were imputed.

Further information on the IPEDS Student Financial Aid component may be obtained from

Tara Lawley
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
tara.lawley@ed.gov
https://nces.ed.gov/ipeds

Winter (Graduation Rates and Graduation Rates 200 Percent)

In IPEDS data collection years 2012–13 and earlier, the Graduation Rates and Graduation Rates 200 Percent components were collected during the spring collection. In the IPEDS 2013–14 data collection year, however, the Graduation Rates and Graduation Rates 200 Percent collections were moved to the winter data collection.

The 2016–17 Graduation Rates component collected counts of full-time, first-time degree/certificate-seeking undergraduate students beginning their postsecondary education in the specified cohort year and their completion status as of 150 percent of normal program completion time at the same institution where the students started. If 150 percent of normal program completion time extended beyond August 31, 2016, the counts as of that date were collected. Four-year institutions used 2010 as the cohort year, while less-than-4-year institutions used 2013 as the cohort year. Of the 5,995 institutions that were expected to respond to the Graduation Rates component, responses were missing for 11 institutions, resulting in a response rate that rounded to 100 percent.

The 2016–17 Graduation Rates 200 Percent component was designed to combine information reported in a prior collection via the Graduation Rates component with current information about the same cohort of students. From previously collected data, the following counts were obtained: the number of students entering the institution as full-time, first-time degree/certificate-seeking students in a cohort year; the number of students in this cohort completing within 100 and 150 percent of normal program completion time; and the number of cohort exclusions (such as students who left for military service). Then the number of additional cohort exclusions and additional program completers between 151 and 200 percent of normal program completion time was collected. Four-year institutions reported on bachelor’s or equivalent degree-seeking students and used cohort year 2008 as the reference period, while less-than-4-year institutions reported on all students in the cohort and used cohort year 2012 as the reference period. Of the 5,594 institutions that were expected to respond to the Graduation Rates 200 Percent component, responses were missing for 10 institutions, resulting in a response rate that rounded to 100 percent.
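
The sketch below combines the counts described above into 150 percent and 200 percent graduation rates: completers are divided by the cohort after removing allowable exclusions. The variable names and counts are hypothetical.

```python
# Illustrative sketch of the 150 percent and 200 percent graduation rate calculations
# described above. All counts are hypothetical.

def grad_rate_150(cohort: int, exclusions: int, completers_150: int) -> float:
    return completers_150 / (cohort - exclusions)

def grad_rate_200(cohort: int, exclusions: int, additional_exclusions: int,
                  completers_150: int, additional_completers_151_200: int) -> float:
    adjusted_cohort = cohort - exclusions - additional_exclusions
    return (completers_150 + additional_completers_151_200) / adjusted_cohort

cohort, exclusions = 2_000, 40
completers_150 = 1_100
print(f"150%: {grad_rate_150(cohort, exclusions, completers_150):.1%}")
print(f"200%: {grad_rate_200(cohort, exclusions, 10, completers_150, 90):.1%}")
```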

Further information on the IPEDS Graduation Rates and Graduation Rates 200 Percent components may be obtained from

Andrew Mary
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
andrew.mary@ed.gov
https://nces.ed.gov/ipeds/

Winter (Admissions)

In the 2014–15 survey year, an Admissions component was added to the winter data collection. This component was created out of the admissions data that had previously been a part of the fall Institutional Characteristics component. Situating these data in a new component in the winter collection enables all institutions to report data for the most recent fall period.

The Admissions component collects information about the selection process for entering first-time degree/certificate-seeking undergraduate students. Data obtained from institutions include admissions considerations (e.g., secondary school records, admission test scores), the number of first-time degree/certificate-seeking undergraduate students who applied, the number admitted, and the number enrolled. Admissions data were collected only from institutions that do not have an open admissions policy for entering first-time students. Data collected for the IPEDS winter 2016–17 Admissions component relate to individuals applying to be admitted during the fall of the 2016–17 academic year (the fall 2016 reporting period). Of the 2,045 Title IV institutions that were expected to respond to the Admissions component, responses were missing for 2 institutions.

Further information on the IPEDS Admissions component may be obtained from

Moussa Ezzeddine
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
moussa.ezzeddine@ed.gov
https://nces.ed.gov/ipeds

Spring (Fall Enrollment)

This survey has been part of the HEGIS and IPEDS series since 1966. Response rates have been relatively high, generally exceeding 85 percent. Beginning in 2000, with web-based data collection, higher response rates were attained. In the spring 2017 data collection, the Fall Enrollment component covered fall 2016. Of the 6,742 institutions that were expected to respond, 6,734 provided data, for a response rate that rounded to 100 percent. Data collection procedures for the Fall Enrollment component of the spring 2017 data collection are presented in Enrollment and Employees in Postsecondary Institutions, Fall 2016; and Financial Statistics and Academic Libraries, Fiscal Year 2016: First Look (Provisional Data) (NCES 2018-002).

Beginning with the fall 1986 survey and the introduction of IPEDS (see above), the survey was redesigned. The survey allows (in alternating years) for the collection of age and residence data. Beginning in 2000, the survey collected instructional activity and unduplicated headcount data, which are needed to compute a standardized, full-time-equivalent (FTE) enrollment statistic for the entire academic year. As of 2007–08, the timeliness of the instructional activity data has been improved by collecting these data in the fall as part of the 12-Month Enrollment component instead of in the spring as part of the Fall Enrollment component.

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) showed that public institutions made the majority of changes to enrollment data during the 2004 revision period. The majority of changes were made to unduplicated headcount data, with the net differences between the original data and the revised data being about 1 percent. Part-time students in general and enrollment in private not-for-profit institutions were often underestimated. The fewest changes by institutions were to Classification of Instructional Programs (CIP) code data. (The CIP is a taxonomic coding scheme that contains titles and descriptions of primarily postsecondary instructional programs.)

Further information on the IPEDS Fall Enrollment component may be obtained from

Aida Aliyeva
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
aaliyeva@air.org
https://nces.ed.gov/ipeds

Spring (Finance)

This survey was part of the HEGIS series and has been continued under IPEDS. Substantial changes were made in the financial survey instruments in fiscal year (FY) 1976, FY 1982, FY 1987, FY 1997, and FY 2002. While these changes were significant, a considerable effort has been made to present only comparable information on trends and to note inconsistencies. The FY 1976 survey instrument contained numerous revisions to earlier survey forms, which made direct comparisons of line items very difficult. Beginning in FY 1982, Pell Grant data were collected in the categories of federal restricted grant and contract revenues and restricted scholarship and fellowship expenditures. The introduction of IPEDS in the FY 1987 survey included several important changes to the survey instrument and data processing procedures. Beginning in FY 1997, data for private institutions were collected using new financial concepts consistent with Financial Accounting Standards Board (FASB) reporting standards, which provide a more comprehensive view of college finance activities. The data for public institutions continued to be collected using the older survey form. The data for public and private institutions were no longer comparable and, as a result, no longer presented together in analysis tables. In FY 2001, public institutions had the option of either continuing to report using Government Accounting Standards Board (GASB) standards or using the new FASB reporting standards. Beginning in FY 2002, public institutions could use either the original GASB standards, the FASB standards, or the new GASB Statement 35 standards (GASB35).

Possible sources of nonsampling error in the financial statistics include nonresponse, imputation, and misclassification. The unweighted response rate has been about 85 to 90 percent for most years these data appeared in NCES reports; however, in more recent years, response rates have been much higher because Title IV institutions are required to respond. Since 2002, the IPEDS data collection has been a full-scale web-based collection, which has improved the quality and timeliness of the data. For example, the ability of IPEDS to tailor online data entry forms for each institution based on characteristics such as institutional control, level of institution, and calendar system and the institutions’ ability to submit their data online are aspects of full-scale web-based collections that have improved response.

The response rate for the FY 2016 Finance component was nearly 100 percent: Of the 6,825 institutions and administrative offices that were expected to respond, 6,816 provided data. Data collection procedures for the FY 2016 component are discussed in Enrollment and Employees in Postsecondary Institutions, Fall 2016; and Financial Statistics and Academic Libraries, Fiscal Year 2016: First Look (Provisional Data) (NCES 2018-002).

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) found that only a small percentage (2.9 percent, or 168) of postsecondary institutions either revised 2002–03 data or submitted data for items they previously left unreported. Though relatively few institutions made changes, the changes made were relatively large—greater than 10 percent of the original data. With a few exceptions, these changes, large as they were, did not greatly affect the aggregate totals.

Further information on the IPEDS Finance component may be obtained from

Bao Le
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
bao.le@ed.gov
https://nces.ed.gov/ipeds

Spring (Human Resources)

The Human Resources component was part of the IPEDS winter data collection from data collection years 2000–01 to 2011–12. For the 2012–13 data collection year, the Human Resources component was moved to the spring 2013 data collection, in order to give institutions more time to prepare their survey responses (the spring and winter collections begin on the same date, but the reporting deadline for the spring collection is several weeks later than the reporting deadline for the winter collection).

IPEDS Collection Years 2012–13 and Later

In 2012–13, new occupational categories replaced the primary function/occupational activity categories previously used in the IPEDS Human Resources component. This change was required in order to align the IPEDS Human Resources categories with the 2010 Standard Occupational Classification (SOC) system. In tandem with this change, the sections making up the IPEDS Human Resources component (which previously had been Employees by Assigned Position, Fall Staff, and Salaries) were changed to Full-Time Instructional Staff, Full-Time Noninstructional Staff, Salaries, Part-Time Staff, and New Hires.

The webpage “Archived Changes—Changes to IPEDS Data Collections, 2012–13”  (https://nces.ed.gov/ipeds/InsidePages/ArchivedChanges?year=2012-13) provides information on the redesigned IPEDS Human Resources component. “Resources for Implementing Changes to the IPEDS Human Resources (HR) Survey Component Due to Updated 2010 Standard Occupational Classification (SOC) System”  (https://nces.ed.gov/ipeds/Section/resources_soc) is a webpage containing additional information, including notes comparing the new classifications with the old (“Comparison of New IPEDS Occupational Categories with Previous Categories”), a crosswalk from the new IPEDS occupational categories to the 2010 SOC occupational categories (“New IPEDS Occupational Categories and 2010 SOC”), answers to frequently asked questions, and a link to current IPEDS Human Resources survey screens.

Of the 6,819 institutions and administrative offices that were expected to respond to the spring 2017 Human Resources component, 6,811 provided data, for a response rate that rounded to 100 percent. Data collection procedures for this component are presented in Enrollment and Employees in Postsecondary Institutions, Fall 2016; and Financial Statistics and Academic Libraries, Fiscal Year 2016: First Look (Provisional Data) (NCES 2018-002).

IPEDS Collection Years Prior to 2012–13

In collection years before 2001–02, IPEDS conducted a Fall Staff survey and a Salaries survey; in the 2001–02 collection year, the Employees by Assigned Position survey was added to IPEDS. In the 2005–06 collection year, these three surveys became sections of the IPEDS “Human Resources” component.

Data gathered by the Employees by Assigned Position (EAP) section categorized all employees by full- or part-time status, faculty status, and primary function/occupational activity. Institutions with M.D. or D.O. programs were required to report their medical school employees separately. A response to the EAP was required of all 6,858 Title IV institutions and administrative offices in the United States and other jurisdictions for winter 2008–09, and 6,845, or 99.8 percent unweighted, responded. Of the 6,970 Title IV institutions and administrative offices required to respond to the winter 2009–10 EAP, 6,964, or 99.9 percent, responded. And of the 7,256 Title IV institutions and administrative offices required to respond to the EAP for winter 2010–11, 7,252, or 99.9 percent, responded.

The main functions/occupational activities of the EAP section were: primarily instruction; instruction combined with research and/or public service; primarily research; primarily public service; executive/administrative/managerial; other professionals (support/service); graduate assistants; technical and paraprofessionals; clerical and secretarial; skilled crafts; and service/maintenance.

All full-time instructional faculty classified in the EAP full-time nonmedical school part as either (1) primarily instruction or (2) instruction combined with research and/or public service were included in the Salaries section, unless they were exempt.

The Fall Staff section categorized all staff on the institution’s payroll as of November 1 of the collection year by employment status (full time or part time), primary function/occupational activity, gender, and race/ethnicity. These data elements were collected from degree-granting and non-degree-granting institutions; however, additional data elements were collected from degree-granting institutions and related administrative offices with 15 or more full-time staff. These additional elements included faculty status, contract length/teaching period, academic rank, salary class intervals, and newly hired full-time permanent staff.

The Fall Staff section, which was required only in odd-numbered reporting years, was not required during the 2008–09 Human Resources data collection. However, of the 6,858 Title IV institutions and administrative offices in the United States and other jurisdictions, 3,295, or 48.0 percent unweighted, did provide data in the Fall Staff section that year. During the 2009–10 Human Resources data collection, when all 6,970 Title IV institutions and administrative offices were required to respond to the Fall Staff section, 6,964, or 99.9 percent, did so. A response to the Fall Staff section of the 2010–11 Human Resources collection was optional, and 3,364 Title IV institutions and administrative offices responded that year (a response rate of 46.3 percent).

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2015-012) found that for 2003–04 employee data items, changes were made by 1.2 percent (77) of the institutions that responded. For all institutions making changes, the changes resulted in different employee counts. For both institutional and aggregate differences, however, the changes had little impact on the original employee count submissions. A large number of institutions reported different staff data to IPEDS and Thomson Peterson; however, the magnitude of the differences was small—usually no more than 17 faculty members for any faculty variable.

The Salaries section collected data for full-time instructional faculty (except those in medical schools in the EAP section, described above) on the institution’s payroll as of November 1 of the collection year by contract length/teaching period, gender, and academic rank. The reporting of data by faculty status in the Salaries section was required from 4-year degree-granting institutions and above only. Salary outlays and fringe benefits were also collected for full-time instructional staff on 9/10- and 11/12-month contracts/teaching periods. This section was applicable to degree-granting institutions unless exempt.

Between 1966–67 and 1985–86, this survey differed from other HEGIS surveys in that imputations were not made for nonrespondents. Thus, there is some possibility that the salary averages presented in this report may differ from the results of a complete enumeration of all colleges and universities. Beginning with the surveys for 1987–88, the IPEDS data tabulation procedures included imputations for survey nonrespondents. The unweighted response rate for the 2008–09 Salaries survey section was 99.9 percent. The response rate for the 2009–10 Salaries section was 100.0 percent (4,453 of the 4,455 required institutions responded), and the response rate for 2010–11 was 99.9 percent (4,561 of the 4,565 required institutions responded). Imputation methods for the 2010–11 Salaries survey section are discussed in Employees in Postsecondary Institutions, Fall 2010, and Salaries of Full-Time Instructional Staff, 2010–11 (NCES 2012-276).

Although data from this survey are not subject to sampling error, sources of nonsampling error may include computational errors and misclassification in reporting and processing. The electronic reporting system does allow corrections to prior-year reported or missing data, and this should help alleviate these problems. Also, NCES reviews individual institutions’ data for internal and longitudinal consistency and contacts institutions to check inconsistent data.

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2015-012) found that only 1.3 percent of the responding Title IV institutions in 2003–04 made changes to their salaries data. The differences between the imputed data and the revised data were small and found to have little impact on the published data.

Further information on the Human Resources component may be obtained from

Imani Stutely
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
imani.stutely@ed.gov
https://nces.ed.gov/ipeds

National Assessment of Educational Progress

The National Assessment of Educational Progress (NAEP) is a series of cross-sectional studies initially implemented in 1969 to assess the educational achievement of U.S. students and monitor changes in those achievements. In the main national NAEP, a nationally representative sample of students is assessed at grades 4, 8, and 12 in various academic subjects. The assessment is based on frameworks developed by the National Assessment Governing Board (NAGB). It includes both multiple-choice items and constructed-response items (those requiring written answers). Results are reported in two ways: by average score and by achievement level. Average scores are reported for the nation, for participating states and jurisdictions, and for subgroups of the population. Percentages of students performing at or above three achievement levels (Basic, Proficient, and Advanced) are also reported for these groups.

Main NAEP Assessments

From 1990 until 2001, main NAEP was conducted for states and other jurisdictions that chose to participate. In 2002, under the provisions of the No Child Left Behind Act of 2001, all states began to participate in main NAEP, and an aggregate of all state samples replaced the separate national sample. (School district-level assessments—under the Trial Urban District Assessment [TUDA] program—also began in 2002.)

Results are available for the mathematics assessments administered in 2000, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017. In 2005, NAGB called for the development of a new mathematics framework. The revisions made to the mathematics framework for the 2005 assessment were intended to reflect recent curricular emphases and better assess the specific objectives for students at each grade level.

The revised mathematics framework focuses on two dimensions: mathematical content and cognitive demand. By considering these two dimensions for each item in the assessment, the framework ensures that NAEP assesses an appropriate balance of content, as well as a variety of ways of knowing and doing mathematics.

Since the 2005 changes to the mathematics framework were minimal for grades 4 and 8, comparisons over time can be made between assessments conducted before and after the framework’s implementation for these grades. The changes that the 2005 framework made to the grade 12 assessment, however, were too drastic to allow grade 12 results from before and after implementation to be directly compared. These changes included adding more questions on algebra, data analysis, and probability to reflect changes in high school mathematics standards and coursework; merging the measurement and geometry content areas; and changing the reporting scale from 0–500 to 0–300. For more information regarding the 2005 mathematics framework revisions, see https://nces.ed.gov/nationsreportcard/mathematics/frameworkcomparison.asp.

Results are available for the reading assessments administered in 2000, 2002, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017. In 2009, a new framework was developed for the 4th-, 8th-, and 12th-grade NAEP reading assessments.

Both a content alignment study and a reading trend, or bridge, study were conducted to determine if the new reading assessment was comparable to the prior assessment. Overall, the results of the special analyses suggested that the assessments were similar in terms of their item and scale characteristics and the results they produced for important demographic groups of students. Thus, it was determined that the results of the 2009 reading assessment could still be compared to those from earlier assessment years, thereby maintaining the trend lines first established in 1992. For more information regarding the 2009 reading framework revisions, see https://nces.ed.gov/nationsreportcard/reading/whatmeasure.asp.

In spring 2013, NAEP released results from the NAEP 2012 economics assessment in The Nation’s Report Card: Economics 2012 (NCES 2013-453). First administered in 2006, the NAEP economics assessment measures 12th-graders’ understanding of a wide range of topics in three main content areas: market economy, national economy, and international economy. The 2012 assessment is based on a nationally representative sample of nearly 11,000 students in the 12th grade.

In The Nation’s Report Card: A First Look—2013 Mathematics and Reading (NCES 2014-451), NAEP released the results of the 2013 mathematics and reading assessments. Results can also be accessed using the interactive graphics and downloadable data available at the online Nation’s Report Card website (http://nationsreportcard.gov/reading_math_2013/#/).

The Nation’s Report Card: A First Look—2013 Mathematics and Reading Trial Urban District Assessment (NCES 2014-466) provides the results of the 2013 mathematics and reading TUDA, which measured the reading and mathematics progress of 4th- and 8th-graders from 21 urban school districts. Results from the 2013 mathematics and reading TUDA can also be accessed using the interactive graphics and downloadable data available at the online TUDA website (http://nationsreportcard.gov/reading_math_tuda_2013/#/).

The online interactive report The Nation’s Report Card: 2014 U.S. History, Geography, and Civics at Grade 8 (NCES 2015-112) provides grade 8 results for the 2014 NAEP U.S. history, geography, and civics assessments. Trend results for previous assessment years in these three subjects, as well as information on school and student participation rates and sample tasks and student responses, are also presented.

In 2014, the first administration of the NAEP Technology and Engineering Literacy (TEL) Assessment asked 8th-graders to respond to questions aimed at assessing their knowledge and skill in understanding technological principles, solving technology and engineering-related problems, and using technology to communicate and collaborate. The online report The Nation’s Report Card: Technology and Engineering Literacy (NCES 2016-119) presents national results for 8th-graders on the TEL assessment.

The Nation’s Report Card: 2015 Mathematics and Reading Assessments (NCES 2015-136) is an online interactive report that presents national and state results for 4th- and 8th-graders on the NAEP 2015 mathematics and reading assessments. The report also presents TUDA results in mathematics and reading for 4th- and 8th-graders. The online interactive report The Nation’s Report Card: 2015 Mathematics and Reading at Grade 12 (NCES 2016-018) presents grade 12 results from the NAEP 2015 mathematics and reading assessments.

Results from the 2015 NAEP science assessment are presented in the online report The Nation’s Report Card: 2015 Science at Grades 4, 8, and 12 (NCES 2016-162). The assessment measures the knowledge of 4th-, 8th-, and 12th-graders in the content areas of physical science, life science, and Earth and space sciences, as well as their understanding of four science practices (identifying science principles, using science principles, using scientific inquiry, and using technological design). National results are reported for grades 4, 8, and 12, and results from 46 participating states and one jurisdiction are reported for grades 4 and 8. Since a new NAEP science framework was introduced in 2009, results from the 2015 science assessment can be compared to results from the 2009 and 2011 science assessments, but cannot be compared to the science assessments conducted prior to 2009.

NAEP is in the process of transitioning from paper-based assessments to technology-based assessments; consequently, data are needed regarding students’ access to and familiarity with technology, at home and at school. The Computer Access and Familiarity Study (CAFS) is designed to fulfill this need. CAFS was conducted as part of the main administration of the 2015 NAEP. A subset of the grade 4, 8, and 12 students who took the main NAEP were chosen to take the additional CAFS questionnaire. The main 2015 NAEP was administered in a paper-and-pencil format to some students and a digital-based format to others, and CAFS participants were given questionnaires in the same format as their NAEP questionnaires.

The online Highlights report 2017 NAEP Mathematics and Reading Assessments: Highlighted Results at Grades 4 and 8 for the Nation, States, and Districts (NCES 2018-037) presents an overview of results from the NAEP 2017 mathematics and reading reports. Highlighted results include key findings for the nation, states/jurisdictions, and 27 districts that participated in the Trial Urban District Assessment (TUDA) in mathematics and reading at grades 4 and 8.

NAEP Long-Term Trend Assessments

In addition to conducting the main assessments, NAEP also conducts the long-term trend assessments. Long-term trend assessments provide an opportunity to observe educational progress in reading and mathematics of 9-, 13-, and 17-year-olds since the early 1970s. The long-term trend reading assessment measures students’ reading comprehension skills using an array of passages that vary by text types and length. The assessment was designed to measure students’ ability to locate specific information in the text provided; make inferences across a passage to provide an explanation; and identify the main idea in the text.

The NAEP long-term trend assessment in mathematics measures knowledge of mathematical facts; ability to carry out computations using paper and pencil; knowledge of basic formulas, such as those applied in geometric settings; and ability to apply mathematics to skills of daily life, such as those involving time and money.

The Nation’s Report Card: Trends in Academic Progress 2012 (NCES 2013-456) provides the results of 12 long-term trend reading assessments dating back to 1971 and 11 long-term trend mathematics assessments dating back to 1973.

Further information on NAEP may be obtained from

Daniel McGrath
Reporting and Dissemination Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
daniel.mcgrath@ed.gov
https://nces.ed.gov/nationsreportcard

National Household Education Surveys Program

The National Household Education Surveys Program (NHES) is a data collection system that is designed to address a wide range of education-related issues. Surveys have been conducted in 1991, 1993, 1995, 1996, 1999, 2001, 2003, 2005, 2007, 2012, and 2016. NHES targets specific populations for detailed data collection. It is intended to provide more detailed data on the topics and populations of interest than are collected through supplements to other household surveys.

The topics addressed by NHES:1991 were early childhood education and adult education. About 60,000 households were screened for NHES:1991. In the Early Childhood Education Survey, about 14,000 parents/guardians of 3- to 8-year-olds completed interviews about their children’s early educational experiences. Included in this component were participation in nonparental care/education; care arrangements and school; and family, household, and child characteristics. In the NHES:1991 Adult Education Survey, about 9,800 people 16 years of age and over, identified as having participated in an adult education activity in the previous 12 months, were questioned about their activities. Data were collected on programs and up to four courses, including the subject matter, duration, sponsorship, purpose, and cost. Information on the household and the adult’s background and current employment was also collected.

In NHES:1993, nearly 64,000 households were screened. Approximately 11,000 parents of 3- to 7-year-olds completed interviews for the School Readiness Survey. Topics included the developmental characteristics of preschoolers; school adjustment and teacher feedback to parents for kindergartners and primary students; center-based program participation; early school experiences; home activities with family members; and health status. In the School Safety and Discipline Survey, about 12,700 parents of children in grades 3 to 12 and about 6,500 youth in grades 6 to 12 were interviewed about their school experiences. Topics included the school learning environment, discipline policy, safety at school, victimization, the availability and use of alcohol/drugs, and alcohol/drug education. Peer norms for behavior in school and substance use were also included in this topical component. Extensive family and household background information was collected, as well as characteristics of the school attended by the child.

In NHES:1995, the Early Childhood Program Participation Survey and the Adult Education Survey were similar to those fielded in 1991. In the Early Childhood component, about 14,000 parents of children from birth to 3rd grade were interviewed out of 16,000 sampled, for a completion rate of 90.4 percent. In the Adult Education Survey, about 24,000 adults were sampled and 82.3 percent (20,000) completed the interview.

NHES:1996 covered parent and family involvement in education and civic involvement. Data on homeschooling and school choice also were collected. The 1996 survey screened about 56,000 households. For the Parent and Family Involvement in Education Survey, nearly 21,000 parents of children in grades 3 to 12 were interviewed. For the Civic Involvement Survey, about 8,000 youth in grades 6 to 12, about 9,000 parents, and about 2,000 adults were interviewed. The 1996 survey also addressed public library use. Adults in almost 55,000 households were interviewed to support state-level estimates of household public library use.

NHES:1999 collected end-of-decade estimates of key indicators from the surveys conducted throughout the 1990s. Approximately 60,000 households were screened for a total of about 31,000 interviews with parents of children from birth through grade 12 (including about 6,900 infants, toddlers, and preschoolers) and adults age 16 or older not enrolled in grade 12 or below. Key indicators included participation of children in nonparental care and early childhood programs, school experiences, parent/family involvement in education at home and at school, youth community service activities, plans for future education, and adult participation in educational activities and community service.

NHES:2001 included two surveys that were largely repeats of similar surveys included in earlier NHES collections. The Early Childhood Program Participation Survey was similar in content to the Early Childhood Program Participation Survey fielded as part of NHES:1995, and the Adult Education and Lifelong Learning Survey was similar in content to the Adult Education Survey of NHES:1995. The Before- and After-School Programs and Activities Survey, while containing items fielded in earlier NHES collections, had a number of new items that collected information about what school-age children were doing during the time they spent in child care or in other activities, what parents were looking for in care arrangements and activities, and parent evaluations of care arrangements and activities. Parents of approximately 6,700 children from birth through age 6 who were not yet in kindergarten completed Early Childhood Program Participation Survey interviews. Nearly 10,900 adults completed Adult Education and Lifelong Learning Survey interviews, and parents of nearly 9,600 children in kindergarten through grade 8 completed Before- and After-School Programs and Activities Survey interviews.

NHES:2003 included two surveys: the Parent and Family Involvement in Education Survey and the Adult Education for Work-Related Reasons Survey (the first administration). Whereas previous adult education surveys were more general in scope, this survey had a narrower focus on occupation-related adult education programs. It collected in-depth information about training and education in which adults participated specifically for work-related reasons, either to prepare for work or a career or to maintain or improve work-related skills and knowledge they already had. The Parent and Family Involvement Survey expanded on the first survey fielded on this topic in 1996. In 2003, screeners were completed with 32,050 households. About 12,700 of the 16,000 sampled adults completed the Adult Education for Work-Related Reasons Survey, for a weighted response rate of 76 percent. For the Parent and Family Involvement in Education Survey, interviews were completed by the parents of about 12,400 of the 14,900 sampled children in kindergarten through grade 12, yielding a weighted unit response rate of 83 percent.

NHES:2005 included surveys that covered adult education, early childhood program participation, and after-school programs and activities. Data were collected from about 8,900 adults for the Adult Education Survey, from parents of about 7,200 children for the Early Childhood Program Participation Survey, and from parents of nearly 11,700 children for the After-School Programs and Activities Survey. These surveys were substantially similar to the surveys conducted in 2001, with the exceptions that the Adult Education Survey addressed a new topic—informal learning activities for personal interest—and the Early Childhood Program Participation Survey and After-School Programs and Activities Survey did not collect information about before-school care for school-age children.

NHES:2007 fielded the Parent and Family Involvement in Education Survey and the School Readiness Survey. These surveys were similar in design and content to surveys included in the 2003 and 1993 collections, respectively. New features added to the Parent and Family Involvement Survey were questions about supplemental education services provided by schools and school districts (including use of and satisfaction with such services), as well as questions that would efficiently identify the school attended by the sampled students. New features added to the School Readiness Survey were questions that collected details about TV programs watched by the sampled children. For the Parent and Family Involvement Survey, interviews were completed with parents of 10,680 sampled children in kindergarten through grade 12, including 10,370 students enrolled in public or private schools and 310 homeschooled children. For the School Readiness Survey, interviews were completed with parents of 2,630 sampled children ages 3 to 6 and not yet in kindergarten. Parents who were interviewed about children in kindergarten through 2nd grade for the Parent and Family Involvement Survey were also asked some questions about these children’s school readiness.

The 2007 and earlier administrations of NHES used a random-digit-dial sample of landline phones and computer-assisted telephone interviewing to conduct interviews. However, due to declining response rates for all telephone surveys and the increase in households that only or mostly use a cell phone instead of a landline, the data collection method was changed to an address-based sample survey for NHES:2012. Because of this change in survey mode, readers should use caution when comparing NHES:2012 estimates to those of prior NHES administrations.

NHES:2012 included the Parent and Family Involvement in Education Survey and the Early Childhood Program Participation Survey. The Parent and Family Involvement in Education Survey gathered data on students age 20 or younger who were enrolled in kindergarten through grade 12 or who were homeschooled at equivalent grade levels. Survey questions that pertained to students enrolled in kindergarten through grade 12 requested information on various aspects of parent involvement in education (such as help with homework, family activities, and parent involvement at school), while survey questions pertaining to homeschooled students requested information on the student’s homeschooling experiences, the sources of the curriculum, and the reasons for homeschooling.

The 2012 Parent and Family Involvement in Education Survey questionnaires were completed for 17,563 (397 homeschooled and 17,166 enrolled) children, for a weighted unit response rate of 78.4 percent. The overall estimated unit response rate (the product of the screener unit response rate of 73.8 percent and the Parent and Family Involvement in Education Survey unit response rate) was 57.8 percent.

The 2012 Early Childhood Program Participation Survey collected data on the early care and education arrangements and early learning of children from birth through the age of 5 who were not yet enrolled in kindergarten. Questionnaires were completed for 7,893 children, for a weighted unit response rate of 78.7 percent. The overall estimated weighted unit response rate (the product of the screener weighted unit response rate of 73.8 percent and the Early Childhood Program Participation Survey unit weighted response rate) was 58.1 percent. 
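
The overall rates quoted in the two preceding paragraphs are simple products of the screener-stage rate and the topical-survey rate (the same arithmetic underlies the overall rates reported for NHES:2016 below). A minimal Python sketch of that calculation, using the figures quoted above, is given here; the function name is illustrative only, and small differences from the published figures reflect rounding of the component rates.

    # Overall unit response rate for a two-stage NHES collection:
    # the product of the screener-stage rate and the topical-survey rate.
    def overall_response_rate(screener_rate: float, survey_rate: float) -> float:
        """Both arguments are proportions (e.g., 0.738 for 73.8 percent)."""
        return screener_rate * survey_rate

    # NHES:2012 figures quoted above
    pfi = overall_response_rate(0.738, 0.784)   # Parent and Family Involvement
    ecpp = overall_response_rate(0.738, 0.787)  # Early Childhood Program Participation
    print(f"{pfi:.1%}")   # 57.9% (published as 57.8 percent, based on unrounded components)
    print(f"{ecpp:.1%}")  # 58.1%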

NHES:2016 used a nationally representative address-based sample covering the 50 states and the District of Columbia. The 2016 administration included a screener survey questionnaire that identified households with children or youth under age 20 and adults ages 16 to 65. A total of 206,000 households were selected to receive the screener, and the screener response rate was 66.4 percent. All sampled households received initial contact by mail. Although the majority of respondents completed paper questionnaires, a small sample of cases was part of a web experiment with mailed invitations to complete the survey online.

The 2016 Parent and Family Involvement in Education Survey, like its predecessor in 2012, gathered data about students age 20 or under who were enrolled in kindergarten through grade 12 or who were being homeschooled for the equivalent grades. The 2016 survey’s questions also covered aspects of parental involvement in education similar to those in the 2012 survey. The total number of completed questionnaires in the 2016 survey was 14,075 (13,523 enrolled and 552 homeschooled children), representing a population of 53.2 million students either homeschooled or enrolled in a public or private school in 2015–16. The survey’s weighted unit response rate was 74.3 percent, and the overall response rate was 49.3 percent.

The 2016 Early Childhood Program Participation Survey collected data about children from birth through age 6 who were not yet enrolled in kindergarten. The survey asked about children’s participation in relative care, nonrelative care, and center-based care arrangements. It also requested information such as the main reason for choosing care, factors that were important to parents when choosing a care arrangement, the primary barriers to finding satisfactory care, activities the family does with the child, and what the child is learning. Questionnaires were completed for 5,844 children, for a weighted unit response rate of 73.4 percent and an overall estimated weighted unit response rate of 48.7 percent.

Data for the 2016 Parent and Family Involvement in Education Survey are available in Parent and Family Involvement in Education: Results From the National Household Education Surveys Program of 2016 (NCES 2017-102); data for the 2016 Early Childhood Program Participation Survey are available in Early Childhood Program Participation: Results From the National Household Education Surveys Program of 2016 (NCES 2017-101).

Further information on NHES may be obtained from

Sarah Grady
Andrew Zukerberg
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
sarah.grady@ed.gov
andrew.zukerberg@ed.gov
https://nces.ed.gov/nhes

National Postsecondary Student Aid Study

The National Postsecondary Student Aid Study (NPSAS) is a comprehensive nationwide study of how students and their families pay for postsecondary education. Data gathered from the study are used to help guide future federal student financial aid policy. The study covers nationally representative samples of undergraduate, graduate, and first-professional students in the 50 states, the District of Columbia, and Puerto Rico, including students attending less-than-2-year institutions, community colleges, and 4-year colleges and universities. Participants include students who do not receive aid and those who do receive financial aid. Since NPSAS identifies nationally representative samples of student subpopulations of interest to policymakers and obtains baseline data for longitudinal study of these subpopulations, data from the study provide the base-year sample for the Beginning Postsecondary Students (BPS) longitudinal study and the Baccalaureate and Beyond (B&B) longitudinal study.

Originally, NPSAS was conducted every 3 years. Beginning with the 1999–2000 study (NPSAS:2000), NPSAS has been conducted every 4 years. NPSAS:08 included a new set of instrument items to obtain baseline measures of the awareness of two new federal grants introduced in 2006: the Academic Competitiveness Grant (ACG) and the National Science and Mathematics Access to Retain Talent (SMART) grant.

The first NPSAS (NPSAS:87) was conducted during the 1986–87 school year. Data were gathered from about 1,100 colleges, universities, and other postsecondary institutions; 60,000 students; and 14,000 parents. These data provided information on the cost of postsecondary education, the distribution of financial aid, and the characteristics of both aided and nonaided students and their families.

For NPSAS:93, information on 77,000 undergraduates and graduate students enrolled during the school year was collected at 1,000 postsecondary institutions. The sample included students who were enrolled at any time between July 1, 1992, and June 30, 1993. About 66,000 students and a subsample of their parents were interviewed by telephone. NPSAS:96 contained information on more than 48,000 undergraduate and graduate students from about 1,000 postsecondary institutions who were enrolled at any time during the 1995–96 school year. NPSAS:2000 included nearly 62,000 students (50,000 undergraduates and almost 12,000 graduate students) from 1,000 postsecondary institutions. NPSAS:04 collected data on about 80,000 undergraduates and 11,000 graduate students from 1,400 postsecondary institutions. For NPSAS:08, about 114,000 undergraduate students and 14,000 graduate students who were enrolled in postsecondary education during the 2007–08 school year were selected from more than 1,730 postsecondary institutions.

NPSAS:12 sampled about 95,000 undergraduates and 16,000 graduate students from approximately 1,500 postsecondary institutions. Public access to the data is available online through PowerStats (https://nces.ed.gov/datalab/).

NPSAS:16 sampled about 89,000 undergraduate and 24,000 graduate students attending approximately 1,800 Title IV eligible postsecondary institutions in the 50 states, the District of Columbia, and Puerto Rico. The sample represents approximately 20 million undergraduate and 4 million graduate students enrolled in postsecondary education at Title IV eligible institutions at any time between July 1, 2015, and June 30, 2016.

Further information on NPSAS may be obtained from

Aurora D’Amico
Tracy Hunt-White
Longitudinal Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
aurora.damico@ed.gov
tracy.hunt-white@ed.gov
https://nces.ed.gov/npsas

National Teacher and Principal Survey (NTPS)

The National Teacher and Principal Survey is a set of related questionnaires that collect descriptive data on the context of elementary and secondary education. Data reported by schools, principals, and teachers provide a variety of statistics on the condition of education in the United States that may be used by policymakers and the general public. The NTPS questionnaires cover a wide range of topics, including teacher demand, teacher and principal characteristics, teachers’ and principals’ perceptions of school climate and problems in their schools, teacher and principal compensation, district hiring and retention practices, general conditions in schools, and basic characteristics of the student population.

The NTPS was first conducted during the 2015–16 school year. The survey is a redesign of the Schools and Staffing Survey (SASS), which was conducted from the 1987–88 school year to the 2011–12 school year. Although the NTPS maintains the SASS survey’s focus on schools, teachers, and administrators, the NTPS has a different structure and sample than SASS. In addition, whereas SASS operated on a 4-year survey cycle, the NTPS operates on a 2-year survey cycle.

The school sample for the 2015–16 NTPS was based on an adjusted public school universe file from the 2013–14 Common Core of Data (CCD), a database of all the nation’s public school districts and public schools. The NTPS definition of a school is the same as the SASS definition of a school—an institution or part of an institution that provides classroom instruction to students, has one or more teachers to provide instruction, serves students in one or more of grades 1–12 or the ungraded equivalent, and is located in one or more buildings apart from a private home.

The 2015–16 NTPS universe of schools is confined to the 50 states plus the District of Columbia. It excludes the Department of Defense dependents schools overseas, schools in U.S. territories overseas, and CCD schools that do not offer teacher-provided classroom instruction in grades 1–12 or the ungraded equivalent. Bureau of Indian Education schools are included in the NTPS universe, but these schools were not oversampled and the data do not support separate BIE estimates.

The NTPS includes three key components: school questionnaires, principal questionnaires, and teacher questionnaires. NTPS data are collected by the U.S. Census Bureau through a mail questionnaire with telephone and in-person field follow-up. The school and principal questionnaires were sent to sampled schools, and the teacher questionnaire was sent to a sample of teachers working at sampled schools. The NTPS school sample consisted of about 8,300 public schools; the principal sample consisted of about 8,300 public school principals; and the teacher sample consisted of about 40,000 public school teachers.

The school questionnaire asks knowledgeable school staff members about grades offered, student attendance and enrollment, staffing patterns, teaching vacancies, programs and services offered, curriculum, and community service requirements. In addition, basic information is collected about the school year, including the beginning time of students’ school days and the length of the school year. The weighted unit response rate for the 2015–16 school survey was 72.5 percent.

The principal questionnaire collects information about principal/school head demographic characteristics, training, experience, salary, goals for the school, and judgments about school working conditions and climate. Information is also obtained on professional development opportunities for teachers and principals, teacher performance, barriers to dismissal of underperforming teachers, school climate and safety, parent/guardian participation in school events, and attitudes about educational goals and school governance. The weighted unit response rate for the 2015–16 principal survey was 71.8 percent.

The teacher questionnaire collects data from teachers about their current teaching assignment, workload, education history, and perceptions and attitudes about teaching. Questions are also asked about teacher preparation, induction, organization of classes, computers, and professional development. The weighted response rate for the 2015–16 teacher survey was 67.8 percent.

Further information about the NTPS is available in User’s Manual for the 2015–16 National Teacher and Principal Survey, Volumes 1–4 (NCES 2017-131 through NCES 2017-134).

For additional information about the NTPS program, please contact

Maura Spiegelman
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
maura.spiegelman@ed.gov
https://nces.ed.gov/surveys/ntps

Principal Follow-up Survey

The Principal Follow-up Survey (PFS), first conducted in school year 2008–09, is a component of the Schools and Staffing Survey (SASS). The 2012–13 PFS, a follow-up to the 2011–12 SASS, was administered in order to provide attrition rates for principals in K–12 public and private schools. The goal was to assess how many principals in the 2011–12 school year still worked as a principal in the same school in the 2012–13 school year, how many had moved to become a principal in another school, and how many no longer worked as a principal. The PFS sample included all schools whose principals had completed SASS principal questionnaires. Schools that had returned a completed 2011–12 SASS principal questionnaire were mailed the PFS form in March 2013.

Further information on the PFS may be obtained from

Isaiah O’Rear
Sample Surveys Division
Cross-Sectional Surveys Branch
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
isaiah.orear@ed.gov
https://nces.ed.gov/surveys/sass/

Private School Universe Survey

The purposes of the Private School Universe Survey (PSS) data collection activities are (1) to build an accurate and complete list of private schools to serve as a sampling frame for NCES sample surveys of private schools and (2) to report data on the total number of private schools, teachers, and students in the survey universe. Begun in 1989, the PSS has been conducted every 2 years, and data for the 1989–90, 1991–92, 1993–94, 1995–96, 1997–98, 1999–2000, 2001–02, 2003–04, 2005–06, 2007–08, 2009–10, 2011–12, 2013–14, and 2015–16 school years have been released. The First Look report Characteristics of Private Schools in the United States: Results From the 2015–16 Private School Universe Survey (NCES 2017-073) presents selected findings from the 2015–16 PSS.

The PSS produces data similar to those of the Common Core of Data for public schools, and can be used for public-private comparisons. The data are useful for a variety of policy- and research-relevant issues, such as the growth of religiously affiliated schools, the number of private high school graduates, the length of the school year for various private schools, and the number of private school students and teachers.

The target population for this universe survey is all private schools in the United States that meet the PSS criteria of a private school (i.e., the private school is an institution that provides instruction for any of grades K through 12, has one or more teachers to give instruction, is not administered by a public agency, and is not operated in a private home).

The survey universe is composed of schools identified from a variety of sources. The main source is a list frame initially developed for the 1989–90 PSS. The list is updated regularly by matching it with lists provided by nationwide private school associations, state departments of education, and other national guides and sources that list private schools. The other source is an area frame search in approximately 124 geographic areas, conducted by the U.S. Census Bureau.

Of the 40,302 schools included in the 2009–10 sample, 10,229 were found ineligible for the survey. Those not responding numbered 1,856, and those responding numbered 28,217. The unweighted response rate for the 2009–10 PSS survey was 93.8 percent.

Of the 39,325 schools included in the 2011–12 sample, 10,030 cases were considered as out-of-scope (not eligible for the PSS). A total of 26,983 private schools completed a PSS interview (15.8 percent completed online), while 2,312 schools refused to participate, resulting in an unweighted response rate of 92.1 percent.

There were 40,298 schools in the 2013–14 sample; of these, 10,659 were considered as out-of-scope (not eligible for the PSS). A total of 24,566 private schools completed a PSS interview (34.1 percent completed online), while 5,073 schools refused to participate, resulting in an unweighted response rate of 82.9 percent.

The 2015–16 PSS included 42,389 schools, of which 12,754 were considered as out-of-scope (not eligible for the PSS). A total of 22,428 private schools completed a PSS interview and 7,207 schools failed to respond, which resulted in an unweighted response rate of 75.7 percent.
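
In each PSS collection described above, the unweighted response rate is the number of completed interviews divided by the number of in-scope schools, that is, the sampled schools minus those found out of scope. The short Python sketch below applies that calculation to the 2015–16 figures quoted above; the function name is illustrative only.

    def unweighted_response_rate(sampled: int, out_of_scope: int, completed: int) -> float:
        """Completed interviews divided by in-scope (eligible) schools."""
        eligible = sampled - out_of_scope
        return completed / eligible

    # 2015-16 PSS figures quoted above
    rate = unweighted_response_rate(sampled=42_389, out_of_scope=12_754, completed=22_428)
    print(f"{rate:.1%}")  # 75.7%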

Further information on the PSS may be obtained from

Steve Broughman
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
stephen.broughman@ed.gov
https://nces.ed.gov/surveys/pss

Projections of Education Statistics

Since 1964, NCES has published projections of key statistics for elementary and secondary schools and higher education institutions. The latest report is Projections of Education Statistics to 2026 (NCES 2018-019). The Projections of Education Statistics series uses projection models for elementary and secondary enrollment, high school graduates, elementary and secondary teachers, expenditures for public elementary and secondary education, enrollment in postsecondary degree-granting institutions, and postsecondary degrees conferred to develop national and state projections. These models are described more fully in the report’s appendix on projection methodology.

Differences between the reported and projected values are, of course, almost inevitable. An evaluation of past projections revealed that, at the elementary and secondary level, projections of public school enrollments have been quite accurate: mean absolute percentage differences for enrollment in public schools ranged from 0.3 to 1.2 percent for projections from 1 to 5 years in the future, while those for teachers in public schools were 3.1 percent or less. At the higher education level, projections of enrollment have been fairly accurate: mean absolute percentage differences were 5.9 percent or less for projections from 1 to 5 years into the future.
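
The accuracy figures above are mean absolute percentage differences between projected values and the values later reported. A brief Python sketch of that measure follows; the enrollment numbers are invented for illustration and are not actual projection data.

    def mean_absolute_percentage_difference(projected, reported):
        """Average of |projected - reported| / reported, expressed in percent."""
        diffs = [abs(p - r) / r * 100 for p, r in zip(projected, reported)]
        return sum(diffs) / len(diffs)

    # Hypothetical public school enrollment projections (in millions)
    # compared with the values later reported for the same years
    projected = [50.1, 50.4, 50.8]
    reported = [50.3, 50.2, 50.5]
    print(round(mean_absolute_percentage_difference(projected, reported), 2))  # 0.46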

Further information on Projections of Education Statistics may be obtained from

William Hussar
Annual Reports and Information Staff
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
william.hussar@ed.gov
https://nces.ed.gov/pubs2018/2018019.pdf

School Survey on Crime and Safety (SSOCS)

The School Survey on Crime and Safety (SSOCS) is the only recurring federal survey that collects detailed information on the incidence, frequency, seriousness, and nature of violence affecting students and school personnel, as well as other indicators of school safety from the schools’ perspective. SSOCS is conducted by the National Center for Education Statistics (NCES) within the U.S. Department of Education and collected by the U.S. Census Bureau. Data from this collection can be used to examine the relationship between school characteristics and violent and serious violent crimes in primary, middle, high, and combined schools. In addition, data from SSOCS can be used to assess what crime prevention programs, practices, and policies are used by schools. SSOCS has been conducted in school years 1999–2000, 2003–04, 2005–06, 2007–08, 2009–10, and 2015–16.

The sampling frame for SSOCS:2016 was constructed from the 2013–14 Public Elementary/Secondary School Universe data file of the Common Core of Data (CCD), an annual collection of data on all public K–12 schools and school districts. The SSOCS sampling frame was restricted to regular public schools (including charter schools) in the United States and the District of Columbia. Other types of schools from the CCD Public Elementary/Secondary School Universe file were excluded from the SSOCS sampling frame. For instance, schools in Puerto Rico, American Samoa, the Commonwealth of the Northern Mariana Islands, Guam, and the U.S. Virgin Islands, as well as Department of Defense dependents schools and Bureau of Indian Education schools, were excluded. Also excluded were special education, alternative, vocational, virtual, newly closed, ungraded, and home schools, and schools with the highest grade of kindergarten or lower.

The SSOCS:2016 universe totaled 83,600 schools. From this total, 3,553 schools were selected for participation in the survey. The sample was stratified by instructional level, type of locale (urbanicity), and enrollment size. The sample of schools in each instructional level was allocated to each of the 16 cells formed by the cross-classification of the four categories of enrollment size and four types of locale. The target number of responding schools allocated to each of the 16 cells was proportional to the sum of the square roots of the total student enrollment over all schools in the cell. The target respondent count within each stratum was then inflated to account for anticipated nonresponse; this inflated count was the sample size for the stratum.
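
The allocation described above can be sketched as follows: within an instructional level, the target number of responding schools in each enrollment-by-locale cell is proportional to the sum of the square roots of enrollment over the schools in that cell, and each target is then divided by an expected response rate to yield the cell’s sample size. The Python sketch below illustrates this under stated assumptions; the cell labels, enrollments, and expected response rate are invented, and the rounding rule is only one plausible choice.

    import math

    def allocate_sample(cells, total_target, expected_response_rate):
        """cells maps a cell label to the list of school enrollments in that cell.
        Target respondents per cell are proportional to the sum of sqrt(enrollment);
        sample sizes then inflate the targets to allow for anticipated nonresponse."""
        measures = {label: sum(math.sqrt(e) for e in enrollments)
                    for label, enrollments in cells.items()}
        total_measure = sum(measures.values())
        allocation = {}
        for label, measure in measures.items():
            target = total_target * measure / total_measure
            allocation[label] = math.ceil(target / expected_response_rate)
        return allocation

    # Hypothetical locale-by-enrollment cells within one instructional level
    cells = {
        "city, under 300 students": [250, 280, 120],
        "rural, 300-999 students": [450, 800, 600, 350],
    }
    print(allocate_sample(cells, total_target=40, expected_response_rate=0.65))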

Data collection began in February 2016 and ended in early July 2016. Questionnaire packets were mailed to the principals of the sampled schools, who were asked to complete the survey or have it completed by the person at the school who is most knowledgeable about school crime and policies for providing a safe school environment. A total of 2,092 public schools submitted usable questionnaires, resulting in an overall weighted unit response rate of 62.9 percent.
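
The response rate just cited, like many of the rates in this guide, is weighted. In standard survey practice, a weighted unit response rate is the sum of the base (design) weights of the responding units divided by the sum of the base weights of all eligible sampled units. The Python sketch below illustrates that general definition, which is offered here as an assumption about typical practice rather than as the exact SSOCS procedure; the weights shown are invented.

    def weighted_unit_response_rate(base_weights, responded):
        """Sum of base weights of respondents divided by the sum of base weights
        of all eligible sampled units (a standard definition, assumed here)."""
        total = sum(base_weights)
        responding = sum(w for w, r in zip(base_weights, responded) if r)
        return responding / total

    # Hypothetical eligible sample of five schools
    base_weights = [24.0, 24.0, 31.5, 18.2, 40.3]
    responded = [True, True, False, True, True]
    print(f"{weighted_unit_response_rate(base_weights, responded):.1%}")  # 77.2%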

Further information about SSOCS may be obtained from

Rachel Hansen
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
(202) 245-7082
rachel.hansen@ed.gov
https://nces.ed.gov/surveys/ssocs/

Other Department of Education Agencies

Office for Civil Rights

Civil Rights Data Collection

The U.S. Department of Education’s Office for Civil Rights (OCR) has surveyed the nation’s public elementary and secondary schools since 1968. The survey was first known as the OCR Elementary and Secondary School (E&S) Survey; in 2004, it was renamed the Civil Rights Data Collection (CRDC). The survey collects data on school discipline, access to and participation in high-level mathematics and science courses, teacher characteristics, school finances, and other school characteristics. These data are reported by race/ethnicity, sex, and disability.

Data in the survey are collected pursuant to 34 C.F.R. Section 100.6(b) of the Department of Education regulation implementing Title VI of the Civil Rights Act of 1964. The requirements are also incorporated by reference in Department regulations implementing Title IX of the Education Amendments of 1972, Section 504 of the Rehabilitation Act of 1973, and the Age Discrimination Act of 1975. School, district, state, and national data are currently available. Data from individual public schools and districts are used to generate national and state data.

The CRDC has generally been conducted biennially in each of the 50 states plus the District of Columbia. The 2009–10 CRDC was collected from a sample of approximately 7,000 school districts and over 72,000 schools in those districts. It was made up of two parts: part 1 contained beginning-of-year "snapshot" data and part 2 contained cumulative, or end-of-year, data.

The 2011–12 CRDC survey, which collected data from approximately 16,500 school districts and 97,000 schools, was the first CRDC survey since 2000 that included data from every public school district and school in the nation. The 2013–14 CRDC survey also collected information from a universe of every public school district and school in the nation.

Further information on the Civil Rights Data Collection may be obtained from

Office for Civil Rights
U.S. Department of Education
400 Maryland Avenue SW
Washington, DC 20202
OCR@ed.gov
https://www.ed.gov/about/offices/list/ocr/data.html

Office of Special Education Programs

Annual Report to Congress on the Implementation of the Individuals with Disabilities Education Act

The Individuals with Disabilities Education Act (IDEA) is a law ensuring services to children with disabilities throughout the nation. IDEA governs how states and public agencies provide early intervention, special education, and related services to more than 6.8 million eligible infants, toddlers, children, and youth with disabilities.

IDEA, formerly the Education of the Handicapped Act (EHA), requires the Secretary of Education to transmit, on an annual basis, a report to Congress describing the progress made in serving the nation’s children with disabilities. This annual report contains information on children served by public schools under the provisions of Part B of IDEA and on children served in state-operated programs for persons with disabilities under Chapter I of the Elementary and Secondary Education Act.

Statistics on children receiving special education and related services in various settings, and school personnel providing such services, are reported in an annual submission of data to the Office of Special Education Programs (OSEP) by the 50 states, the District of Columbia, the Bureau of Indian Education schools, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, the U.S. Virgin Islands, the Federated States of Micronesia, the Republic of Palau, and the Republic of the Marshall Islands. The child count information is based on the number of children with disabilities receiving special education and related services on December 1 of each year. Count information is available from http://www.ideadata.org.

Since all participants in programs for persons with disabilities are reported to OSEP, the data are not subject to sampling error. However, nonsampling error can arise from a variety of sources. Some states produce counts of students receiving special education services by disability category only because Part B of the EHA requires them to do so. Among the states that produce such counts as part of their own practice, without regard to the EHA requirements, definitions and labeling practices vary.

Further information on this annual report to Congress may be obtained from

Office of Special Education Programs
Office of Special Education and Rehabilitative Services
U.S. Department of Education
400 Maryland Avenue SW
Washington, DC 20202-7100
https://www.ed.gov/about/reports/annual/osep/index.html
http://idea.ed.gov/
http://www.ideadata.org

Other Governmental Agencies and Programs

Bureau of Labor Statistics

Consumer Price Indexes

The Consumer Price Index (CPI) represents changes in prices of all goods and services purchased for consumption by urban households. Indexes are available for two population groups: a CPI for All Urban Consumers (CPI-U) and a CPI for Urban Wage Earners and Clerical Workers (CPI-W). Unless otherwise specified, data in this report are adjusted for inflation using the CPI-U. These values are generally adjusted to a school-year basis by averaging the July through June figures. Price indexes are available for the United States, the four Census regions, size of city, cross-classifications of regions and size-classes, and 23 local areas. The major uses of the CPI are as an economic indicator, as a deflator of other economic series, and as a means of adjusting income.
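
The school-year adjustment noted above is simply the average of the 12 monthly CPI-U values from July through the following June, and a constant-dollar figure is obtained by scaling a nominal amount by the ratio of two index values. The Python sketch below illustrates both steps; the monthly index values and dollar amount are invented for illustration.

    def school_year_cpi(monthly_cpi_july_to_june):
        """Average of the 12 monthly CPI-U values from July through June."""
        assert len(monthly_cpi_july_to_june) == 12
        return sum(monthly_cpi_july_to_june) / 12

    def to_constant_dollars(amount, cpi_base, cpi_target):
        """Adjust a nominal amount to the price level implied by cpi_target."""
        return amount * cpi_target / cpi_base

    # Hypothetical monthly CPI-U values for one school year (July-June)
    monthly = [238.6, 238.3, 238.0, 237.8, 237.3, 236.5,
               236.9, 237.1, 238.1, 239.3, 240.2, 241.0]
    sy_index = school_year_cpi(monthly)
    print(round(sy_index, 1))
    print(round(to_constant_dollars(10_000, cpi_base=sy_index, cpi_target=251.1), 2))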

Also available is the Consumer Price Index research series using current methods (CPI-U-RS), which presents an estimate of the CPI-U from 1978 to the present that incorporates most of the improvements that the Bureau of Labor Statistics has made over that time span into the entire series. The historical price index series of the CPI-U does not reflect these changes, though these changes do make the present and future CPI more accurate. The limitations of the CPI-U-RS include considerable uncertainty surrounding the magnitude of the adjustments, as well as the fact that several improvements in the CPI have not, for various reasons, been incorporated into the CPI-U-RS. Nonetheless, the CPI-U-RS can serve as a valuable proxy for researchers needing a historical estimate of inflation using current methods. This series has not been used in NCES tables.

Further information on consumer price indexes may be obtained from

Bureau of Labor Statistics
U.S. Department of Labor
2 Massachusetts Avenue NE
Washington, DC 20212
http://www.bls.gov/cpi

Employment and Unemployment Surveys

Statistics on the employment and unemployment status of the population and related data are compiled by the Bureau of Labor Statistics (BLS) using data from the Current Population Survey (CPS) (see below) and other surveys. The CPS, a monthly household survey conducted by the U.S. Census Bureau for the Bureau of Labor Statistics, provides a comprehensive body of information on the employment and unemployment experience of the nation’s population, classified by age, sex, race, and various other characteristics.

Further information on unemployment surveys may be obtained from

Bureau of Labor Statistics
U.S. Department of Labor
2 Massachusetts Avenue NE
Washington, DC 20212
cpsinfo@bls.gov
http://www.bls.gov/bls/employment.htm

Census Bureau

American Community Survey

The Census Bureau introduced the American Community Survey (ACS) in 1996. Fully implemented in 2005, it provides a large monthly sample of demographic, socioeconomic, and housing data comparable in content to the long forms of the decennial census up to and including the 2000 long form. Aggregated over time, these data serve as a replacement for the decennial census long form. The survey includes questions mandated by federal law, federal regulations, and court decisions.

Since 2011, the survey has been mailed to approximately 295,000 addresses in the United States and Puerto Rico each month, or about 3.5 million addresses annually. A larger proportion of addresses in small governmental units (e.g., American Indian reservations, small counties, and towns) also receive the survey. The monthly sample size is designed to approximate the ratio used in the 2000 Census, which requires more intensive distribution in these areas. The ACS covers the U.S. resident population, which includes the entire civilian, noninstitutionalized population; incarcerated persons; institutionalized persons; and the active duty military who are in the United States. In 2006, the ACS began interviewing residents in group quarter facilities. Institutionalized group quarters include adult and juvenile correctional facilities, nursing facilities, and other health care facilities. Noninstitutionalized group quarters include college and university housing, military barracks, and other noninstitutional facilities such as workers and religious group quarters and temporary shelters for the homeless.

National-level data from the ACS are available from 2000 onward. The ACS produces 1-year estimates for jurisdictions with populations of 65,000 and over and 5-year estimates for jurisdictions with smaller populations. The 1-year estimates for 2016 used data collected between January 1, 2016, and December 31, 2016, and the 5-year estimates for 2012–2016 used data collected between January 1, 2012, and December 31, 2016. The ACS produced 3-year estimates (for jurisdictions with populations of 20,000 or over) for the periods 2005–2007, 2006–2008, 2007–2009, 2008–2010, 2009–2011, 2010–2012, and 2011–2013. Three-year estimates for these periods will continue to be available to data users, but no further 3-year estimates will be produced.

Further information about the ACS is available at https://www.census.gov/acs/www/.

Census of Population—Education in the United States

Some NCES tables are based on a part of the decennial census that consisted of questions asked of a 1 in 6 sample of people and housing units in the United States. This sample was asked more detailed questions about income, occupation, and housing costs, as well as questions about general demographic information. This decennial census "long form" is no longer used; it has been replaced by the American Community Survey (ACS).

School enrollment. People classified as enrolled in school reported attending a "regular" public or private school or college. They were asked whether the institution they attended was public or private and what level of school they were enrolled in.

Educational attainment. Data for educational attainment were tabulated for people ages 15 and over and classified according to the highest grade completed or the highest degree received. For people currently enrolled in school, respondents were instructed to report the level of the previous grade attended or the highest degree received.

Poverty status. To determine poverty status, answers to income questions were used to make comparisons to the appropriate poverty threshold. All people except those who were institutionalized, people in military group quarters and college dormitories, and unrelated people under age 15 were considered. If the total income of each family or unrelated individual in the sample was below the corresponding cutoff, that family or individual was classified as "below the poverty level."

Further information on the 1990 and 2000 Census of Population may be obtained from

Population Division
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
https://www.census.gov/main/www/cen1990.html
https://www.census.gov/main/www/cen2000.html

Current Population Survey

The Current Population Survey (CPS) is a monthly survey of about 54,000 households conducted by the U.S. Census Bureau for the Bureau of Labor Statistics. The CPS is the primary source of labor force statistics on the U.S. population. In addition, supplemental questionnaires are used to provide further information about the U.S. population. The March supplement (also known as the Annual Social and Economic [ASEC] supplement) contains detailed questions on topics such as income, employment, and educational attainment; additional questions, such as items on disabilities, have also been included. The October supplement contains questions on school enrollment and school characteristics. Survey items on computer and internet use have been the principal focus of the July supplement and are also the principal focus of the November 2017 supplement.

CPS samples are initially selected based on results from the decennial census and are periodically updated to reflect new housing construction. The current sample design for the main CPS, last revised in July 2015, includes about 74,000 households. Each month, about 54,000 of the 74,000 households are interviewed. Information is obtained each month from those in the household who are 15 years of age and over, and demographic data are collected for children 0–14 years of age. In addition, supplemental questions regarding school enrollment are asked about eligible household members age 3 and over in the October CPS supplement.

In January 1992, the CPS educational attainment variable was changed. The “Highest grade attended” and “Year completed” questions were replaced by the question “What is the highest level of school ... has completed or the highest degree ... has received?” Thus, for example, while the old questions elicited data for those who completed more than 4 years of high school, the new question elicited data for those who were high school completers, i.e., those who graduated from high school with a diploma as well as those who completed high school through equivalency programs, such as a GED program.

A major redesign of the CPS was implemented in January 1994 to improve the quality of the data collected. Survey questions were revised, new questions were added, and computer-assisted interviewing methods were used for the survey data collection. Further information about the redesign is available in Current Population Survey, October 1995: (School Enrollment Supplement) Technical Documentation at https://www.census.gov/prod/techdoc/cps/cpsoct95.pdf.

Beginning in 2003, the race/ethnicity questions were expanded. Information on people of Two or more races was included, and the Asian and Pacific Islander race category was split into two categories—Asian and Native Hawaiian or Other Pacific Islander. In addition, questions were reworded to make it clear that self-reported data on race/ethnicity should reflect the race/ethnicity with which the responder identifies, rather than what may be written in official documentation.

The estimation procedure employed for monthly CPS data involves inflating weighted sample results to independent estimates of characteristics of the civilian noninstitutional population in the United States by age, sex, and race. These independent estimates are based on statistics from decennial censuses; statistics on births, deaths, immigration, and emigration; and statistics on the population in the armed services. Caution should be used when comparing population estimates (e.g., the number of 18- to 24-year-olds) from CPS data over long periods of time (e.g., 10 or more years) since CPS data reflect the latest available Census-based controls. For instance, 2012–2017 CPS data reflect Census 2010-based controls, while CPS data from 2003–2011 reflect Census 2000-based controls. Thus, the estimates of levels for data collected in 2012 and later years will differ from those for earlier years by more than what could be attributed to actual changes in the population. These differences could be disproportionately greater for certain population subgroups than for the total population. Nevertheless, the most recent change in population controls had relatively little impact on summary measures such as averages, medians, and percentage distributions.
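The inflation of weighted sample results to independent population controls is essentially a ratio adjustment within demographic cells. The Python sketch below illustrates that general idea under simplifying assumptions: a single adjustment step, hypothetical records, and hypothetical control totals. The actual CPS weighting involves additional stages not shown here.

    # Sketch: ratio-adjust base weights so that weighted totals match independent
    # population controls within age/sex/race cells. All values are hypothetical.
    from collections import defaultdict

    records = [                                  # (cell, base_weight)
        (("18-24", "F", "White"), 1500.0),
        (("18-24", "F", "White"), 1450.0),
        (("18-24", "M", "Black"), 1600.0),
    ]
    controls = {("18-24", "F", "White"): 3000.0, ("18-24", "M", "Black"): 1650.0}

    cell_totals = defaultdict(float)
    for cell, weight in records:
        cell_totals[cell] += weight

    # Each weight is scaled by the ratio of the control total to the weighted sample total in its cell.
    adjusted = [(cell, weight * controls[cell] / cell_totals[cell]) for cell, weight in records]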

The generalized variance function is a simple model that expresses the variance as a function of the expected value of a survey estimate. Methods for deriving standard errors and examples can be found within the CPS technical documentation at https://www.census.gov/programs-surveys/cps/technical-documentation/complete.html. Standard errors were estimated using replicate weight methodology beginning in 2005 for March CPS data and beginning in 2010 for October CPS data. Those interested in using CPS household-level supplement replicate weights to calculate variances may refer to Estimating Current Population Survey (CPS) Household-Level Supplement Variances Using Replicate Weights at https://thedataweb.rm.census.gov/pub/cps/supps/HH-level_Use_of_the_Public_Use_Replicate_Weight_File.doc.
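As context for the two approaches mentioned above: a generalized variance function predicts the variance of an estimate from its expected value using parameters published in the technical documentation, while replicate weights estimate the variance from the spread of estimates recomputed under each replicate weight. The Python sketch below shows the replicate-weight calculation in generic form; the number of replicates and the multiplier are placeholders that would be taken from the documentation for a particular file, not values asserted here.

    # Sketch: variance of a survey estimate from replicate weights.
    # theta_full is the estimate under the full-sample weight; theta_reps are the
    # estimates recomputed under each replicate weight. The multiplier depends on the
    # replication method and is documented with the data (the value used here is a placeholder).
    def replicate_variance(theta_full, theta_reps, multiplier):
        return multiplier * sum((t - theta_full) ** 2 for t in theta_reps)

    # Hypothetical example: 160 replicate estimates scattered around the full-sample estimate.
    theta_full = 0.42
    theta_reps = [0.42 + 0.001 * ((i % 7) - 3) for i in range(160)]
    se = replicate_variance(theta_full, theta_reps, multiplier=4 / 160) ** 0.5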

Further information on the CPS may be obtained from

Associate Directorate for Demographic Programs—Survey Operations
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
301-763-3806
dsd.cps@census.gov
https://www.census.gov/programs-surveys/cps.html

Computer and Internet Use

The Current Population Survey (CPS) has been conducting supplemental data collections regarding computer use since 1984. In 1997, these supplemental data collections were expanded to include data on internet access. More recently, data regarding computer and internet use were collected in October 2010, July 2011, October 2012, July 2013, and July 2015.

In the July 2011, 2013, and 2015 supplements, the sole focus was on computer and internet use. In the October 2010 and 2012 supplements questions on school enrollment were the principal focus, and questions on computer and internet use were less prominent. Measurable differences in estimates taken from these supplements across years could reflect actual changes in the population; however, differences could also reflect seasonal variations in data collection or differences between the content of the July and October supplements. Therefore, caution should be used when making year-to-year comparisons of CPS computer and internet use estimates.

The most recent computer and internet use supplement, conducted in July 2015, collected household information from all eligible CPS households, as well as information from individual household members age 3 and over. Information was collected about the household’s computer and internet use and about each household member’s use of the internet from any location in the past year. Additionally, information was gathered regarding a randomly selected household respondent’s use of the internet.

For the July 2015 basic CPS, the household-level nonresponse rate was 13.0 percent. The person-level nonresponse rate for the computer and internet use supplement was an additional 23.0 percent. Since one rate is a person-level rate and the other a household-level rate, the rates cannot be combined to derive an overall rate.

Further information on the CPS Computer and Internet Use Supplement may be obtained from

Education and Social Stratification Branch
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
http://census.gov/topics/population/computer-internet.html

Dropouts

Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population age 3 years and over as part of the monthly basic survey on labor force participation. In addition to gathering the information on school enrollment, with the limitations on accuracy as noted below under “School Enrollment,” the survey data permit calculations of dropout rates. Both status and event dropout rates are tabulated from the October CPS. Event rates describe the proportion of students who leave school each year without completing a high school program. Status rates provide cumulative data on dropouts among all young adults within a specified age range. Status rates are higher than event rates because they include all dropouts ages 16 through 24, regardless of when they last attended school.
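The distinction between the two rates can be made concrete with a small Python sketch. The records, variable layout, and age cutoffs below are hypothetical simplifications: the event rate conditions on students who were enrolled the previous year, while the status rate covers all 16- to 24-year-olds regardless of when they last attended school.

    # Sketch: event vs. status dropout rates from hypothetical person records.
    # Each record: (age, enrolled_last_october, enrolled_this_october, completed_high_school)
    people = [
        (17, True,  False, False),   # left school this year without completing (event dropout)
        (17, True,  True,  False),   # still enrolled
        (22, False, False, False),   # long out of school, no credential (status dropout)
        (23, False, False, True),    # completed high school
    ]

    at_risk = [p for p in people if p[1] and not p[3]]        # enrolled last year, no credential
    event_rate = sum(1 for p in at_risk if not p[2]) / len(at_risk)

    ages_16_24 = [p for p in people if 16 <= p[0] <= 24]
    status_rate = sum(1 for p in ages_16_24 if not p[2] and not p[3]) / len(ages_16_24)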

In addition to other survey limitations, dropout rates may be affected by survey coverage and exclusion of the institutionalized population. The incarcerated population has grown rapidly and has a high dropout rate. Dropout rates for the total population might be higher than those for the noninstitutionalized population if the prison and jail populations were included in the dropout rate calculations. On the other hand, if military personnel, who tend to be high school graduates, were included, it might offset some or all of the impact from the theoretical inclusion of the jail and prison populations.

Another area of concern with tabulations involving young people in household surveys is the relatively low coverage ratio compared to older age groups. CPS undercoverage results from missed housing units and missed people within sample households. Overall CPS undercoverage for October 2016 is estimated to be about 11 percent. CPS coverage varies with age, sex, and race. Generally, coverage is larger for females than for males and larger for non-Blacks than for Blacks. This differential coverage is a general problem for most household-based surveys. Further information on CPS methodology may be found in the technical documentation at https://www.census.gov/cps.

Further information on the calculation of dropouts and dropout rates may be obtained from the Trends in High School Dropout and Completion Rates in the United States report at https://nces.ed.gov/programs/dropout/index.asp or by contacting

Joel McFarland
Annual Reports and Information Staff
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
joel.mcfarland@ed.gov

Educational Attainment

Reports documenting educational attainment are produced by the Census Bureau using the March Current Population Survey (CPS) supplement (Annual Social and Economic supplement [ASEC]). Currently, the ASEC supplement consists of approximately 70,000 interviewed households. Both recent and earlier editions of Educational Attainment in the United States may be downloaded at https://www.census.gov/topics/education/educational-attainment/data/tables.All.html.

In addition to the general constraints of CPS, some data indicate that respondents tend to overestimate the educational level of members of their household. Some inaccuracy is due to respondents’ lack of knowledge of the exact educational attainment of each household member and their hesitancy to acknowledge anything less than a high school education.

Further information on educational attainment data from CPS may be obtained from

Associate Directorate for Demographic Programs—Survey Operations
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
301-763-3806
dsd.cps@census.gov
https://www.census.gov/programs-surveys/cps.html

School Enrollment

Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population age 3 years and over. Currently, the October supplement consists of approximately 54,000 interviewed households, the same households interviewed in the basic Current Population Survey. The main sources of nonsampling variability in the responses to the supplement are those inherent in the survey instrument. The question of current enrollment may not be answered accurately for various reasons. Some respondents may not know current grade information for every student in the household, a problem especially prevalent for households with members in college or in nursery school. Confusion over college credits or hours taken by a student may make it difficult to determine the year in which the student is enrolled. Problems may occur with the definition of nursery school (a group or class organized to provide educational experiences for children) where respondents’ interpretations of “educational experiences” vary.

For the October 2017 basic CPS, the household-level nonresponse rate was 13.8 percent. The person-level nonresponse rate for the school enrollment supplement was an additional 9.9 percent. Since the basic CPS nonresponse rate is a household-level rate and the school enrollment supplement nonresponse rate is a person-level rate, these rates cannot be combined to derive an overall nonresponse rate. Nonresponding households may have fewer persons than interviewed ones, so combining these rates may lead to an overestimate of the true overall nonresponse rate for persons for the school enrollment supplement.

Although the principal focus of the October supplement is school enrollment, in some years the supplement has included additional questions on other topics. In 2010 and 2012, for example, the October supplement included additional questions on computer and internet use.

Further information on CPS methodology may be obtained from https://www.census.gov/programs-surveys/cps.html.

Further information on the CPS School Enrollment Supplement may be obtained from

Associate Directorate for Demographic Programs—Survey Operations
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
301-763-3806
dsd.cps@census.gov
https://www.census.gov/programs-surveys/cps.html

Decennial Census, Population Estimates, and Population Projections

The decennial census is a universe survey mandated by the U.S. Constitution. It is a questionnaire sent to every household in the country, and it is composed of seven questions about the household and its members (name, sex, age, relationship, Hispanic origin, race, and whether the housing unit is owned or rented). The Census Bureau also produces annual estimates of the resident population by demographic characteristics (age, sex, race, and Hispanic origin) for the nation, states, and counties, as well as national and state projections for the resident population. The reference date for population estimates is July 1 of the given year. With each new issue of July 1 estimates, the Census Bureau revises estimates for each year back to the last census. Previously published estimates are superseded and archived.

Census respondents self-report race and ethnicity. The race questions on the 1990 and 2000 censuses differed in some significant ways. In 1990, the respondent was instructed to select the one race “that the respondent considers himself/herself to be,” whereas in 2000, the respondent could select one or more races that the person considered himself or herself to be. American Indian, Eskimo, and Aleut were three separate race categories in 1990; in 2000, the American Indian and Alaska Native categories were combined, with an option to write in a tribal affiliation. This write-in option was provided only for the American Indian category in 1990. There was a combined Asian and Pacific Islander race category in 1990, but the groups were separated into two categories in 2000.

The census question on ethnicity asks whether the respondent is of Hispanic origin, regardless of the race option(s) selected; thus, persons of Hispanic origin may be of any race. In the 2000 census, respondents were first asked, “Is this person Spanish/Hispanic/Latino?” and then given the following options: No, not Spanish/Hispanic/Latino; Yes, Puerto Rican; Yes, Mexican, Mexican American, Chicano; Yes, Cuban; and Yes, other Spanish/Hispanic/Latino (with space to print the specific group). In the 2010 census, respondents were asked “Is this person of Hispanic, Latino, or Spanish origin?” The options given were No, not of Hispanic, Latino, or Spanish origin; Yes, Mexican, Mexican Am., Chicano; Yes, Puerto Rican; Yes, Cuban; and Yes, another Hispanic, Latino, or Spanish origin—along with instructions to print “Argentinean, Colombian, Dominican, Nicaraguan, Salvadoran, Spaniard, and so on” in a specific box.

The 2000 and 2010 censuses each asked the respondent “What is this person’s race?” and allowed the respondent to select one or more options. The options provided were largely the same in both the 2000 and 2010 censuses: White; Black, African American, or Negro; American Indian or Alaska Native (with space to print the name of enrolled or principal tribe); Asian Indian; Japanese; Native Hawaiian; Chinese; Korean; Guamanian or Chamorro; Filipino; Vietnamese; Samoan; Other Asian; Other Pacific Islander; and Some other race. The last three options included space to print the specific race. Two significant differences between the 2000 and 2010 census questions on race were that no race examples were provided for the “Other Asian” and “Other Pacific Islander” responses in 2000, whereas the race examples of “Hmong, Laotian, Thai, Pakistani, Cambodian, and so on” and “Fijian, Tongan, and so on,” were provided for the “Other Asian” and “Other Pacific Islander” responses, respectively, in 2010.

The census population estimates program modified the enumerated population from the 2010 census to produce the population estimates base for 2010 and onward. As part of the modification, the Census Bureau recoded the “Some other race” responses from the 2010 census to one or more of the five OMB race categories used in the estimates program (for more information, see https://www.census.gov/programs-surveys/popest/technical-documentation/methodology.html).

Further information on the decennial census may be obtained from https://www.census.gov.

Department of Justice

Bureau of Justice Statistics

A division of the U.S. Department of Justice Office of Justice Programs, the Bureau of Justice Statistics (BJS) collects, analyzes, publishes, and disseminates statistical information on crime, criminal offenders, victims of crime, and the operations of the justice system at all levels of government and internationally. It also provides technical and financial support to state governments for development of criminal justice statistics and information systems on crime and justice.

For information on the BJS, see https://www.bjs.gov/.

National Crime Victimization Survey

The National Crime Victimization Survey (NCVS), administered for the U.S. Bureau of Justice Statistics (BJS) by the U.S. Census Bureau, is the nation’s primary source of information on crime and the victims of crime. Initiated in 1972 and redesigned in 1992 and 2016, the NCVS collects detailed information on the frequency and nature of the crimes of rape, sexual assault, robbery, aggravated and simple assault, theft, household burglary, and motor vehicle theft experienced by Americans and American households each year. The survey measures both crimes reported to the police and crimes not reported to the police.

NCVS estimates presented may differ from those in previously published reports because a small number of victimizations, referred to as series victimizations, are included using a new counting strategy. High-frequency repeat victimizations, or series victimizations, are six or more similar but separate victimizations that occur with such frequency that the victim is unable to recall each individual event or describe each event in detail. As part of ongoing research efforts associated with the redesign of the NCVS, BJS investigated ways to include series victimizations in estimates of criminal victimization. Including series victimizations results in more accurate estimates of victimization. BJS has decided to include series victimizations using the victim’s estimates of the number of times the victimizations occurred over the past 6 months, capping the number of victimizations within each series at a maximum of 10. This strategy for counting series victimizations balances the desire to estimate national rates and account for the experiences of persons who have been subjected to repeat victimizations against the desire to minimize the estimation errors that can occur when repeat victimizations are reported. Including series victimizations in national rates results in rather large increases in the level of violent victimization; however, trends in violence are generally similar regardless of whether series victimizations are included. For more information on the new counting strategy and supporting research, see Methods for Counting High-Frequency Repeat Victimizations in the National Crime Victimization Survey at https://www.bjs.gov/content/pub/pdf/mchfrv.pdf.
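The counting rule described above reduces to a simple cap: a series victimization contributes the victim’s reported number of occurrences for the 6-month reference period, up to a maximum of 10. A minimal Python sketch, using hypothetical reports, follows.

    # Sketch: count series victimizations at the victim's reported number, capped at 10.
    SERIES_CAP = 10

    def counted_victimizations(reported_count, is_series):
        # Ordinary incidents count once each; series reports are capped at SERIES_CAP.
        return min(reported_count, SERIES_CAP) if is_series else reported_count

    # Hypothetical reports: (reported_count, is_series)
    reports = [(1, False), (3, False), (25, True)]
    total = sum(counted_victimizations(n, s) for n, s in reports)   # 1 + 3 + 10 = 14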

Readers should note that in 2003, in accordance with changes to the Office of Management and Budget’s standards for the classification of federal data on race and ethnicity, the NCVS item on race/ethnicity was modified. A question on Hispanic origin is now followed by a new question on race. The new question about race allows the respondent to choose more than one race and delineates Asian as a separate category from Native Hawaiian or Other Pacific Islander. An analysis conducted by the Demographic Surveys Division at the U.S. Census Bureau showed that the new race question had very little impact on the aggregate racial distribution of the NCVS respondents, with one exception: There was a 1.6 percentage point decrease in the percentage of respondents who reported themselves as White. Due to changes in race/ethnicity categories, comparisons of race/ethnicity across years should be made with caution.

There were changes in the sample design and survey methodology in the 2006 NCVS that may have affected survey estimates. Caution should be used when comparing the 2006 estimates to estimates of other years. Data from 2007 onward are comparable to earlier years. Analyses of the 2007 estimates indicate that the program changes made in 2006 had relatively small effects on NCVS estimates. For more information on the 2006 NCVS data, see Criminal Victimization, 2006, at https://www.bjs.gov/content/pub/pdf/cv06.pdf; the NCVS 2006 technical notes, at https://www.bjs.gov/content/pub/pdf/cv06tn.pdf; and Criminal Victimization, 2007, at https://bjs.gov/content/pub/pdf/cv07.pdf.

The NCVS sample was redesigned in 2016 in order to account for changes in the U.S. population identified through the 2010 Decennial Census and to make it possible to produce state- and local-level victimization estimates for the largest 22 states and specific metropolitan areas within those states. Because of this redesign, 2016 victimization data are not comparable to data from 2015 and prior years. For more information on the 2016 NCVS data, see Criminal Victimization, 2016, at https://www.bjs.gov/content/pub/pdf/cv16.pdf, and the technical notes, at https://www.bjs.gov/content/pub/pdf/ncvstd16.pdf.

The number of NCVS-eligible households in the sample in 2016 was about 134,690. Households were selected using a stratified, multistage cluster design. In the first stage, the primary sampling units (PSUs), consisting of counties or groups of counties, were selected. In the second stage, smaller areas, called Enumeration Districts (EDs), were selected from each sampled PSU. Finally, from selected EDs, clusters of four households, called segments, were selected for interview. At each stage, the selection was done proportionate to population size in order to create a self-weighting sample. The final sample was augmented to account for households constructed after the decennial census. Within each sampled household, the U.S. Census Bureau interviewer attempts to interview all household members age 12 and over to determine whether they had been victimized by the measured crimes during the 6 months preceding the interview.

The first NCVS interview with a housing unit is conducted in person. Subsequent interviews are conducted by telephone, if possible. Households remain in the sample for 3 years and are interviewed seven times at 6-month intervals. Since the survey’s inception, the initial interview at each sample unit has been used only to bound future interviews, establishing a time frame that avoids duplicate reporting of crimes in subsequent interviews. Beginning in 2006, data from the initial interview have been adjusted to account for the effects of bounding and have been included in the survey estimates. After a household has been interviewed its seventh time, it is replaced by a new sample household. In 2016, the household response rate was about 78 percent and the completion rate for persons within households was about 84 percent. Weights were developed to permit estimates for the total U.S. population 12 years and older.

Further information on the NCVS may be obtained from

Rachel E. Morgan
Victimization Statistics Branch
Bureau of Justice Statistics
rachel.morgan@usdoj.gov
http://www.bjs.gov/

School Crime Supplement

Created as a supplement to the NCVS and co-designed by the National Center for Education Statistics and Bureau of Justice Statistics, the School Crime Supplement (SCS) survey has been conducted in 1989, 1995, and biennially since 1999 to collect additional information about school-related victimizations on a national level. This report includes data from the 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, and 2015 collections. The 1989 data are not included in this report as a result of methodological changes to the NCVS and SCS. The SCS was designed to assist policymakers, as well as academic researchers and practitioners at federal, state, and local levels, to make informed decisions concerning crime in schools. The survey asks students a number of key questions about their experiences with and perceptions of crime and violence that occurred inside their school, on school grounds, on the school bus, or on the way to or from school. Students are asked additional questions about security measures used by their school, students’ participation in after-school activities, students’ perceptions of school rules, the presence of weapons and gangs in school, the presence of hate-related words and graffiti in school, student reports of bullying and reports of rejection at school, and the availability of drugs and alcohol in school. Students are also asked attitudinal questions relating to fear of victimization and avoidance behavior at school.

The SCS survey was conducted for a 6-month period from January through June in all households selected for the NCVS (see discussion above for information about the NCVS sampling design and changes to the race/ethnicity variable beginning in 2003). Within these households, the eligible respondents for the SCS were those household members who had attended school at any time during the 6 months preceding the interview, were enrolled in grades 6–12, and were not home schooled. In 2007, the questionnaire was changed and household members who attended school sometime during the school year of the interview were included. The age range of students covered in this report is 12–18 years of age. Eligible respondents were asked the supplemental questions in the SCS only after completing their entire NCVS interview. It should be noted that the first or unbounded NCVS interview has always been included in analysis of the SCS data and may result in the reporting of events outside of the requested reference period.

The prevalence of victimization for 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, and 2015 was calculated by using NCVS incident variables appended to the SCS data files of the same year. The NCVS type of crime variable was used to classify victimizations of students in the SCS as serious violent, violent, or theft. The NCVS variables asking where the incident happened (at school) and what the victim was doing when it happened (attending school or on the way to or from school) were used to ascertain whether the incident happened at school. Only incidents that occurred inside the United States are included.

In 2001, the SCS survey instrument was modified from previous collections. First, in 1995 and 1999, "at school" was defined for respondents as in the school building, on the school grounds, or on a school bus. In 2001, the definition for "at school" was changed to mean in the school building, on school property, on a school bus, or going to and from school. This change was made to the 2001 questionnaire in order to be consistent with the definition of "at school" as it is constructed in the NCVS and was also used as the definition in subsequent SCS collections. Cognitive interviews conducted by the U.S. Census Bureau on the 1999 SCS suggested that modifications to the definition of "at school" would not have a substantial impact on the estimates.

In recent years, the numbers of students participating in the SCS were 6,300 in 2005; 6,500 in 2007; 5,000 in 2009; 6,500 in 2011; 5,700 in 2013; and 4,700 in 2015.

In the 2005, 2007, 2009, 2011, 2013, and 2015 SCS, the household completion rates were 91 percent, 90 percent, 92 percent, 91 percent, 86 percent, and 83 percent, respectively, and the student completion rates were 62 percent, 58 percent, 56 percent, 63 percent, 60 percent, and 58 percent, respectively. The overall SCS unit response rates (calculated by multiplying the household completion rate by the student completion rate) were about 56 percent in 2005, 53 percent in 2007, 51 percent in 2009, 57 percent in 2011, 51 percent in 2013, and 48 percent in 2015. (Starting in 2011, overall SCS unit response rates are weighted.)
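As a check on the arithmetic described above, multiplying the published household and student completion rates reproduces the overall unit response rates; small discrepancies can arise because the component rates are rounded. For example, for 2005:

    # Sketch: overall SCS unit response rate = household completion rate x student completion rate.
    # The 2005 rates below are the published (rounded) figures cited above.
    household_rate = 0.91
    student_rate = 0.62
    overall_rate = household_rate * student_rate   # about 0.56, i.e., 56 percent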

There are two types of nonresponse: unit and item nonresponse. NCES requires that any stage of data collection within a survey that has a unit base-weighted response rate of less than 85 percent be evaluated for the potential magnitude of unit nonresponse bias before the data or any analysis using the data may be released (NCES Statistical Standards, 2002, at https://nces.ed.gov/statprog/2002/std4_4.asp). Due to the low unit response rate in 2005, 2007, 2009, 2011, 2013, and 2015, a unit nonresponse bias analysis was done. Unit response rates indicate how many sampled units have completed interviews. Because interviews with students could only be completed after households had responded to the NCVS, the unit completion rate for the SCS reflects both the household interview completion rate and the student interview completion rate. Nonresponse can greatly affect the strength and application of survey data by leading to an increase in variance as a result of a reduction in the actual size of the sample and can produce bias if the nonrespondents have characteristics of interest that are different from the respondents.

Nonresponse bias occurs when subgroups have different response rates and also differ in their responses to particular survey variables. The magnitude of unit nonresponse bias is determined by the response rate and the differences between respondents and nonrespondents on key survey variables. Although the bias analysis cannot measure response bias directly, since the SCS is a sample survey and it is not known how the full population would have responded, the SCS sampling frame has four key student or school characteristic variables for which data are known for respondents and nonrespondents—sex, race/ethnicity, household income, and urbanicity—all of which are associated with student victimization. To the extent that response rates differ across these groups, nonresponse bias is a concern.

In 2005, the analysis of unit nonresponse bias found evidence of bias for the race, household income, and urbanicity variables. White (non-Hispanic) and Other (non-Hispanic) respondents had higher response rates than Black (non-Hispanic) and Hispanic respondents. Respondents from households with an income of $35,000–$49,999 and $50,000 or more had higher response rates than those from households with incomes of less than $7,500, $7,500–$14,999, $15,000–$24,999, and $25,000–$34,999. Respondents who live in urban areas had lower response rates than those who live in rural or suburban areas. Although the extent of nonresponse bias cannot be determined, weighting adjustments, which corrected for differential response rates, should have reduced the problem.

In 2007, the analysis of unit nonresponse bias found evidence of bias by the race/ethnicity and household income variables. Hispanic respondents had lower response rates than other races/ethnicities. Respondents from households with an income of $25,000 or more had higher response rates than those from households with incomes of less than $25,000. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

In 2009, the analysis of unit nonresponse bias found evidence of potential bias for the race/ethnicity and urbanicity variables. White students and students of other races/ethnicities had higher response rates than did Black and Hispanic respondents. Respondents from households located in rural areas had higher response rates than those from households located in urban areas. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

In 2011, the analysis of unit nonresponse bias found evidence of potential bias for the age variable. Respondents 12 to 17 years old had higher response rates than did 18-year-old respondents in the NCVS and SCS interviews. Weighting the data adjusts for unequal selection probabilities and for the effects of nonresponse. The weighting adjustments that correct for differential response rates are created by region, age, race, and sex, and should have reduced the effect of nonresponse.

In 2013, the analysis of unit nonresponse bias found evidence of potential bias for the age variable in the SCS respondent sample. Students age 14 and those from the western region showed percentage bias exceeding 5 percent; however, both subgroups had the highest response rate out of their respective categories. All other subgroups evaluated showed less than 1 percent nonresponse bias and had between 0.3 and 2.6 percent difference between the response population and the eligible population.

In the 2015 SCS, evidence of potential nonresponse bias was found in the race, urbanicity, region, and age subgroups. In addition, respondents in the age 14 and rural subgroups had significantly higher nonresponse bias estimates compared to other age and urbanicity subgroups, while respondents who were Asian and respondents who were from the Northeast had significantly lower response bias estimates compared to other race and region subgroups. Thus, the analysis indicates that there are significant nonresponse biases in the 2015 SCS data and that caution should be used when comparing responses among subgroups in the SCS.

For most survey items in most years of the SCS survey, however, response rates have been high—typically over 97 percent of all eligible respondents, meaning there is little potential for item nonresponse bias for most items in the survey. Weights have been developed to compensate for differential probabilities of selection and nonresponse. The weighted data permit inferences about the eligible student population who were enrolled in schools in all SCS data years.

Further information about the SCS may be obtained from

Rachel Hansen
Sample Surveys Division
Cross-Sectional Surveys Branch
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
(202) 245-7082
rachel.hansen@ed.gov
https://nces.ed.gov/programs/crime

Other Organization Sources

International Association for the Evaluation of Educational Achievement

The International Association for the Evaluation of Educational Achievement (IEA) is composed of governmental research centers and national research institutions around the world whose aim is to investigate education problems common among countries. Since its inception in 1958, the IEA has conducted more than 30 research studies of cross-national achievement. The regular cycle of studies encompasses learning in basic school subjects. Examples are the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS). IEA projects also include studies of particular interest to IEA members, such as the TIMSS 1999 Video Study of Mathematics and Science Teaching, the Civic Education Study, and studies on information technology in education.

The international bodies that coordinate international assessments vary in the labels they apply to participating education systems, most of which are countries. IEA differentiates between IEA members, which IEA refers to as “countries” in all cases, and “benchmarking participants.” IEA members include countries such as the United States and Ireland, as well as subnational entities such as England and Scotland (which are both part of the United Kingdom), the Flemish community of Belgium, and Hong Kong (a Special Administrative Region of China). IEA benchmarking participants are all subnational entities and include Canadian provinces, U.S. states, and Dubai in the United Arab Emirates (among others). Benchmarking participants, like the participating countries, are given the opportunity to assess the comparative international standing of their students’ achievement and to view their curriculum and instruction in an international context.

Some IEA studies, such as TIMSS and PIRLS, include an assessment portion, as well as contextual questionnaires for collecting information about students’ home and school experiences. The TIMSS and PIRLS scales, including the scale averages and standard deviations, are designed to remain constant from assessment to assessment so that education systems (including countries and subnational education systems) can compare their scores over time as well as compare their scores directly with the scores of other education systems. Although each scale was created to have a mean of 500 and a standard deviation of 100, the subject matter and the level of difficulty of items necessarily differ by grade, subject, and domain/dimension. Therefore, direct comparisons between scores across grades, subjects, and different domain/dimension types should not be made.

Further information on the International Association for the Evaluation of Educational Achievement may be obtained from http://www.iea.nl.

Trends in International Mathematics and Science Study

The Trends in International Mathematics and Science Study (TIMSS, formerly known as the Third International Mathematics and Science Study) provides data on the mathematics and science achievement of U.S. 4th- and 8th-graders compared with that of their peers in other countries. TIMSS collects information through mathematics and science assessments and questionnaires. The questionnaires request information to help provide a context for student performance. They focus on such topics as students’ attitudes and beliefs about learning mathematics and science, what students do as part of their mathematics and science lessons, students’ completion of homework, and their lives both in and outside of school; teachers’ perceptions of their preparedness for teaching mathematics and science, teaching assignments, class size and organization, instructional content and practices, collaboration with other teachers, and participation in professional development activities; and principals’ viewpoints on policy and budget responsibilities, curriculum and instruction issues, and student behavior. The questionnaires also elicit information on the organization of schools and courses. The assessments and questionnaires are designed to specifications in a guiding framework. The TIMSS framework describes the mathematics and science content to be assessed and provides grade-specific objectives, an overview of the assessment design, and guidelines for item development.

TIMSS is on a 4-year cycle. Data collections occurred in 1995, 1999 (8th grade only), 2003, 2007, 2011, and 2015. TIMSS 2015 consisted of assessments in 4th-grade mathematics; numeracy (a less difficult version of 4th-grade mathematics, newly developed for 2015); 8th-grade mathematics; 4th-grade science; and 8th-grade science. In addition, TIMSS 2015 included the third administration of TIMSS Advanced since 1995. TIMSS Advanced is an international comparative study that measures the advanced mathematics and physics achievement of students in their final year of secondary school (the equivalent of 12th grade in the United States) who are taking or have taken advanced courses. The TIMSS 2015 survey also collected policy-relevant information about students, curriculum emphasis, technology use, and teacher preparation and training.

Progress in International Reading Literacy Study

The Progress in International Reading Literacy Study (PIRLS) provides data on the reading literacy of U.S. 4th-graders compared with that of their peers in other countries. PIRLS is on a 5-year cycle: PIRLS data collections have been conducted in 2001, 2006, 2011, and 2016. In 2016, a total of 58 education systems, including both IEA members and IEA benchmarking participants, participated in the survey. Sixteen of the education systems participating in PIRLS also participated in ePIRLS, an innovative, computer-based assessment of online reading designed to measure students’ approaches to informational reading in an online environment.

PIRLS collects information through a reading literacy assessment and questionnaires that help to provide a context for student performance. Questionnaires are administered to collect information about students’ home and school experiences in learning to read. A student questionnaire addresses students’ attitudes toward reading and their reading habits. In addition, questionnaires are given to students’ teachers and school principals in order to gather information about students’ school experiences in developing reading literacy. In countries other than the United States, a parent questionnaire is also administered. The assessments and questionnaires are designed to specifications in a guiding framework. The PIRLS framework describes the reading content to be assessed and provides objectives specific to 4th grade, an overview of the assessment design, and guidelines for item development.

TIMSS and PIRLS Sampling and Response Rates

2016 PIRLS

As is done in all participating countries and other education systems, representative samples of students in the United States are selected. The sample design that was employed by PIRLS in 2016 is generally referred to as a two-stage stratified cluster sample. In the first stage of sampling, individual schools were selected with a probability proportionate to size (PPS) approach, which means that the probability is proportional to the estimated number of students enrolled in the target grade. In the second stage of sampling, intact classrooms were selected within sampled schools.
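A minimal Python sketch of the first-stage selection described above is given below, using systematic probability-proportionate-to-size sampling with a random start over hypothetical enrollment counts; the actual PIRLS design also involves stratification and other refinements not shown here.

    # Sketch: systematic PPS selection of schools, where the measure of size is the
    # estimated enrollment in the target grade. Enrollment figures are hypothetical.
    import random

    schools = {"A": 120, "B": 45, "C": 300, "D": 80, "E": 200}   # school: grade-4 enrollment
    n_to_select = 2

    total = sum(schools.values())
    interval = total / n_to_select
    start = random.uniform(0, interval)
    selection_points = [start + i * interval for i in range(n_to_select)]

    selected, running, points = [], 0.0, iter(selection_points)
    point = next(points)
    for school, size in schools.items():
        running += size                      # cumulative measure of size
        while point is not None and point <= running:
            selected.append(school)          # larger schools are more likely to be hit
            point = next(points, None)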

PIRLS guidelines call for a minimum of 150 schools to be sampled, with a minimum of 4,000 students assessed. The basic design of one classroom per school was intended to yield a total sample of approximately 4,500 students per population. About 4,400 U.S. students participated in PIRLS in 2016, joining 319,000 other student participants around the world. Accommodations were not provided for students with disabilities or students who were unable to read or speak the language of the test; these students were excluded from the sample. The IEA requirement is that the overall exclusion rate, which includes exclusions of schools and students, should not exceed 5 percent of the national desired target population.

In order to minimize the potential for response biases, the IEA developed participation or response rate standards that apply to all participating education systems and govern whether or not an education system’s data are included in the TIMSS or PIRLS international datasets and the way in which its statistics are presented in the international reports. These standards were set using composites of response rates at the school, classroom, and student and teacher levels. Response rates were calculated with and without the inclusion of substitute schools that were selected to replace schools refusing to participate. In the 2016 PIRLS administered in the United States, the unweighted school response rate was 76 percent, and the weighted school response rate was 75 percent. All schools selected for PIRLS were also asked to participate in ePIRLS. The unweighted school response rate for ePIRLS in the final sample with replacement schools was 89.0 percent and the weighted response rate was 89.1 percent. The weighted and unweighted student response rates for PIRLS were both 94 percent. The weighted and unweighted student response rates for ePIRLS were both 90 percent. 
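The weighted and unweighted school response rates reported above differ only in whether each sampled school counts equally or in proportion to the students it represents. A minimal Python sketch, using hypothetical school base weights, is shown below.

    # Sketch: unweighted vs. weighted school response rates.
    # Each tuple is (participated, school_base_weight); values are hypothetical.
    sampled_schools = [(True, 900.0), (True, 450.0), (False, 600.0), (True, 750.0)]

    unweighted_rate = sum(1 for p, _ in sampled_schools if p) / len(sampled_schools)
    weighted_rate = sum(w for p, w in sampled_schools if p) / sum(w for _, w in sampled_schools)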

2015 TIMSS and TIMSS Advanced

TIMSS 2015 was administered between March and May of 2015 in the United States. The U.S. sample was randomly selected and weighted to be representative of the nation. In order to reliably and accurately represent the performance of each country, international guidelines required that countries sample at least 150 schools and at least 4,000 students per grade (countries with small class sizes of fewer than 30 students per school were directed to consider sampling more schools, more classrooms per school, or both, to meet the minimum target of 4,000 tested students). In the United States, a total of 250 schools and 10,029 students participated in the grade 4 TIMSS survey, and 246 schools and 10,221 students participated in the grade 8 TIMSS (these figures do not include the participation of the state of Florida as a subnational education system, which was separate from and additional to its participation in the U.S. national sample).

TIMSS Advanced, also administered between March and May of 2015 in the United States, required participating countries and other education systems to draw probability samples of students in their final year of secondary school—ISCED Level 3—who were taking or had taken courses in advanced mathematics or who were taking or had taken courses in physics. International guidelines for TIMSS Advanced called for a minimum of 120 schools to be sampled, with a minimum of 3,600 students assessed per subject. In the United States, a total of 241 schools and 2,954 students participated in advanced mathematics, and 165 schools and 2,932 students participated in physics.

In TIMSS 2015, the weighted school response rate for the United States was 77 percent for grade 4 before the use of substitute schools (schools substituted for originally sampled schools that refused to participate) and 85 percent with the inclusion of substitute schools. For grade 8, the weighted school response rate before the use of substitute schools was 78 percent, and it was 84 percent with the inclusion of substitute schools. The weighted student response rate was 96 percent for grade 4 and 94 percent for grade 8.

In TIMSS Advanced 2015, the weighted school response rate for the United States for advanced mathematics was 72 percent before the use of substitute schools and 76 percent with the inclusion of substitute schools. The weighted school response rate for the United States for physics was 65 percent before the use of substitute schools and 68 percent with the inclusion of substitute schools. The weighted student response rate was 87 percent for advanced mathematics and 85 percent for physics. Student response rates are based on a combined total of students from both sampled and substitute schools.

Further information on the TIMSS study may be obtained from

Stephen Provasnik
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
(202) 245-6442
stephen.provasnik@ed.gov
https://nces.ed.gov/timss
http://www.iea.nl/timss

Further information on the PIRLS study may be obtained from

Sheila Thompson
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
(202) 245-8330
sheila.thompson@ed.gov
https://nces.ed.gov/surveys/pirls/
http://www.iea.nl/pirls

Organization for Economic Cooperation and Development

The Organization for Economic Cooperation and Development (OECD) publishes analyses of national policies and survey data in education, training, and economics in OECD and partner countries. Newer studies include student survey data on financial literacy and on digital literacy.

Education at a Glance

To highlight current education issues and create a set of comparative education indicators that represent key features of education systems, OECD initiated the Indicators of Education Systems (INES) project and charged the Centre for Educational Research and Innovation (CERI) with developing the cross-national indicators for it. The development of these indicators involved representatives of the OECD countries and the OECD Secretariat. Improvements in data quality and comparability among OECD countries have resulted from the country-to-country interaction sponsored through the INES project. The most recent publication in this series is Education at a Glance 2017: OECD Indicators.

Education at a Glance 2017 features data on the 35 OECD countries (Australia, Austria, Belgium, Canada, Chile, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, the Republic of Korea, Latvia, Luxembourg, Mexico, the Netherlands, New Zealand, Norway, Poland, Portugal, the Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, the United Kingdom, and the United States) and a number of partner countries, including Argentina, Brazil, China, Colombia, Costa Rica, India, Indonesia, Lithuania, the Russian Federation, Saudi Arabia, and South Africa.

The OECD Handbook for Internationally Comparative Education Statistics: Concepts, Standards, Definitions, and Classifications provides countries with specific guidance on how to prepare information for OECD education surveys; facilitates countries’ understanding of OECD indicators and their use in policy analysis; and provides a reference for collecting and assimilating educational data. Chapter 6 of the OECD Handbook for Internationally Comparative Education Statistics contains a discussion of data quality issues. Users should examine footnotes carefully to recognize some of the data limitations.

Further information on international education statistics may be obtained from

Andreas Schleicher
Director for the Directorate of Education and Skills
   and Special Advisor on Education Policy
   to the OECD’s Secretary General
OECD Directorate for Education and Skills
2, rue André Pascal
75775 Paris CEDEX 16
France
andreas.schleicher@oecd.org
http://www.oecd.org

Online Education Database (OECD.Stat)

OECD.Stat is the statistical online platform of the OECD; it allows users to access OECD’s databases for OECD member countries and selected nonmember countries. A user can build tables using selected variables and customizable table layouts, extract and download data, and view metadata on methodology and sources.

Data for educational attainment, as published in the International Educational Attainment indicator, are pulled directly from OECD.Stat. (Information on these data can be found in chapter A, indicator A1 of annex 3 in Education at a Glance 2017 and accessed at http://www.oecd.org/education/skills-beyond-school/EAG2017-Annex-3.pdf.) However, to support statistical testing, standard errors for some countries had to be estimated and therefore may differ from those published on OECD.Stat. NCES calculated standard errors for all data years for the United States. Standard errors for 2016 for Canada, the Republic of Korea, the Netherlands, Poland, Slovenia, and Turkey, as well as standard errors for the 2016 postsecondary educational attainment data for Japan, were estimated by NCES using a simple random sample assumption. These standard errors are likely to be lower than standard errors that take into account complex sample designs. Lastly, NCES estimated the standard errors for the OECD average using the sum of squares technique.
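
To illustrate the two estimation approaches described above, the sketch below computes a standard error for a proportion under a simple random sample assumption and combines independent country standard errors into the standard error of an unweighted average using the sum-of-squares technique. The function names and all numeric values are hypothetical and are not NCES or OECD figures.

import math

def se_simple_random_sample(p, n):
    # Standard error of a proportion under a simple random sample
    # assumption: sqrt(p * (1 - p) / n). This ignores complex sample
    # designs, so it tends to understate the true standard error.
    return math.sqrt(p * (1 - p) / n)

def se_of_unweighted_average(country_ses):
    # Standard error of an unweighted average of independent country
    # estimates, combined with the sum-of-squares technique:
    # sqrt(se_1**2 + ... + se_k**2) / k.
    k = len(country_ses)
    return math.sqrt(sum(se ** 2 for se in country_ses)) / k

# Hypothetical illustration only: an attainment rate of 48 percent
# estimated from 2,000 sampled respondents.
print(round(100 * se_simple_random_sample(0.48, 2000), 2))  # SE in percentage points
# Hypothetical country standard errors combined into the SE of their average.
print(round(se_of_unweighted_average([0.6, 0.9, 1.1, 0.7]), 3))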

OECD.Stat can be accessed at http://stats.oecd.org. A user’s guide for OECD.Stat can be accessed at https://stats.oecd.org/Content/themes/OECD/static/help/WBOS%20User%20Guide%20(EN).pdf.
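
For users who prefer to retrieve data programmatically rather than through the table builder, a minimal sketch is shown below. It assumes the SDMX-JSON web service behind stats.oecd.org; the dataset code and dimension filter are placeholders, and the exact URL pattern should be verified against the user’s guide linked above.

import requests

# Placeholder dataset identifier and dimension filter; real values are
# dataset specific and can be taken from the OECD.Stat table builder.
DATASET = "EAG_NEAC"
DIMENSIONS = "USA+CAN"

# Assumed SDMX-JSON endpoint behind stats.oecd.org; confirm the URL
# pattern in the OECD.Stat user's guide before relying on it.
url = "https://stats.oecd.org/SDMX-JSON/data/{}/{}/all".format(DATASET, DIMENSIONS)

response = requests.get(url, timeout=30)
response.raise_for_status()
payload = response.json()

# The SDMX-JSON payload separates observation values from dimension metadata.
print(sorted(payload.keys()))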

Program for International Student Assessment

The Program for International Student Assessment (PISA) is a system of international assessments organized by the Organization for Economic Cooperation and Development (OECD), an intergovernmental organization of industrialized countries. PISA focuses on 15-year-olds’ capabilities in reading literacy, mathematics literacy, and science literacy and also includes measures of general, or cross-curricular, competencies such as learning strategies. PISA emphasizes functional skills that students have acquired as they near the end of compulsory schooling.

PISA is a 2-hour exam. Assessment items include a combination of multiple-choice questions and open-ended questions that require students to develop their own responses. PISA scores are reported on a scale that ranges from 0 to 1,000, with the OECD mean set at 500 and the standard deviation set at 100. In 2015, literacy in science, reading, and mathematics was assessed through a computer-based assessment in the majority of countries, including the United States. Education systems could also participate in optional pencil-and-paper financial literacy assessments and computer-based mathematics and reading assessments. In each education system, the assessment is translated into the primary language of instruction; in the United States, all materials are written in English.
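
To make the reporting metric concrete, the sketch below linearly maps a hypothetical proficiency estimate onto a scale with mean 500 and standard deviation 100. The function and values are illustrative assumptions only; operational PISA scaling relies on item response theory and is considerably more involved.

def to_reporting_scale(theta, theta_mean, theta_sd, scale_mean=500.0, scale_sd=100.0):
    # Linearly map a proficiency estimate onto a reporting scale with the
    # given mean and standard deviation. This is a schematic stand-in for
    # the item-response-theory scaling PISA actually uses.
    return scale_mean + scale_sd * (theta - theta_mean) / theta_sd

# A hypothetical proficiency estimate of 0.35 on a metric with mean 0 and
# standard deviation 1 maps to 535 on the reporting scale.
print(to_reporting_scale(0.35, theta_mean=0.0, theta_sd=1.0))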

Forty-three education systems participated in the 2000 PISA; 41 education systems participated in 2003; 57 (30 OECD member countries and 27 nonmember countries or education systems) participated in 2006; and 65 (34 OECD member countries and 31 nonmember countries or education systems) participated in 2009. (An additional nine education systems administered the 2009 PISA in 2010.) In PISA 2012, 65 education systems (34 OECD member countries and 31 nonmember countries or education systems), as well as the U.S. states of Connecticut, Florida, and Massachusetts, participated. In the 2015 PISA, 73 education systems (35 OECD member countries and 31 nonmember countries or education systems), as well as the states of Massachusetts and North Carolina and the territory of Puerto Rico, participated.

To implement PISA, each of the participating education systems scientifically draws a nationally representative sample of 15-year-olds, regardless of grade level. The PISA 2015 national sample for the United States included about 5,700 students from 177 public and private schools. Massachusetts, North Carolina, and Puerto Rico also participated in PISA 2015 as separate education systems. In Massachusetts, about 1,400 students from 48 public schools participated; in North Carolina, about 1,900 students from 54 public schools participated; and in Puerto Rico, about 1,400 students from 47 public and private schools participated.

The intent of PISA reporting is to provide an overall description of performance in reading literacy, mathematics literacy, and science literacy every 3 years, and to provide a more detailed look at each domain in the years when it is the major focus. These cycles allow education systems to track trends in each of the three subject areas over time. In the first cycle, PISA 2000, reading literacy was the major focus, occupying roughly two-thirds of assessment time. In 2003, PISA focused on mathematics literacy as well as students’ ability to solve problems in real-life settings. In 2006, PISA focused on science literacy; in 2009, it focused on reading literacy again; and in 2012, it focused on mathematics literacy. PISA 2015 focused on science, as it did in 2006.

Further information on PISA may be obtained from

Patrick Gonzales
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
patrick.gonzales@ed.gov
https://nces.ed.gov/surveys/pisa