School Choice in the United States: 2019
Common Core of Data
The Common Core of Data (CCD) is NCES’s primary database on public elementary and secondary education in the United States. It is a comprehensive, annual, national statistical database of all public elementary and secondary schools and school districts, containing data designed to be comparable across all states. This database can be used to select samples for other NCES surveys and to provide basic information and descriptive statistics on public elementary and secondary schools and schooling in general.
The CCD collects statistical information annually from approximately 100,000 public elementary and secondary schools and approximately 18,000 public school districts (including supervisory unions and regional education service agencies) in the 50 states, the District of Columbia, the Department of Defense Education Activity (DoDEA), the Bureau of Indian Education (BIE), Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. Three categories of information are collected in the CCD survey: general descriptive information on schools and school districts, data on students and staff, and fiscal data. The general school and district descriptive information includes name, address, phone number, and type of locale; the data on students and staff include selected demographic characteristics; and the fiscal data pertain to revenues and current expenditures.
The EDFacts data collection system is the primary collection tool for the CCD. NCES works collaboratively with the Department of Education’s Performance Information Management Service to develop the CCD collection procedures and data definitions. Coordinators from state education agencies (SEAs) submit the CCD data at different levels (school, agency, and state) to the EDFacts collection system. Prior to submitting CCD files to EDFacts, SEAs must collect and compile information from their respective local education agencies (LEAs) through established administrative records systems within their state or jurisdiction.
Once SEAs have completed their submissions, the CCD survey staff analyzes and verifies the data for quality assurance. Even though the CCD is a universe collection and thus not subject to sampling errors, nonsampling errors can occur. The two potential sources of nonsampling errors are nonresponse and inaccurate reporting. NCES attempts to minimize nonsampling errors through the use of annual training of SEA coordinators, extensive quality reviews, and survey editing procedures. In addition, each year SEAs are given the opportunity to revise their state-level aggregates from the previous survey cycle.
The CCD survey consists of five components: the Public Elementary/Secondary School Universe Survey, the Local Education Agency (School District) Universe Survey, the State Nonfiscal Survey of Public Elementary/Secondary Education, the National Public Education Financial Survey (NPEFS), and the School District Finance Survey (F-33). This report uses data from the Public Elementary/Secondary School Universe Survey.
Public Elementary/Secondary School Universe Survey
The Public Elementary/Secondary School Universe Survey includes all public schools providing education services to prekindergarten, kindergarten, grades 1–13, and ungraded students. For school year 2016–17, the survey included records for each public elementary and secondary school in the 50 states, the District of Columbia, the DoDEA, the BIE, Puerto Rico, American Samoa, the Northern Mariana Islands, Guam, and the U.S. Virgin Islands.
The Public Elementary/Secondary School Universe Survey includes data for the following variables: NCES school ID number, state school ID number, name of the school, name of the agency that operates the school, mailing address, physical location address, phone number, school type, operational status, locale code, latitude, longitude, county number, county name, full-time-equivalent (FTE) classroom teacher count, low/high grade span offered, congressional district code, school level, students eligible for free lunch, students eligible for reduced-price lunch, total students eligible for free and reduced-price lunch, and student totals and detail (by grade, by race/ethnicity, and by sex). The survey also contains flags indicating whether a school is Title I eligible, schoolwide Title I eligible, a magnet school, a charter school, a shared-time school, or a BIE school, as well as which grades are offered at the school.
Further information on the nonfiscal CCD data may be obtained from:
Patrick Keaton
Elementary and Secondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
patrick.keaton@ed.gov
https://nces.ed.gov/ccd
National Assessment of Educational Progress
The National Assessment of Educational Progress (NAEP) is a series of cross-sectional studies initially implemented in 1969 to assess the educational achievement of U.S. students and monitor changes in those achievements. In the main national NAEP, a nationally representative sample of students is assessed at grades 4, 8, and 12 in various academic subjects. The assessment is based on frameworks developed by the National Assessment Governing Board (NAGB). It includes both multiple-choice items and constructed-response items (those requiring written answers). Results are reported in two ways: by average score and by achievement level. Average scores are reported for the nation, for participating states and jurisdictions, and for subgroups of the population. Percentages of students performing at or above three achievement levels (Basic, Proficient, and Advanced) are also reported for these groups.
From 1990 until 2001, main NAEP was conducted for states and other jurisdictions that chose to participate. In 2002, under the provisions of the No Child Left Behind Act of 2001, all states began to participate in main NAEP, and an aggregate of all state samples replaced the separate national sample. (School district-level assessments—under the Trial Urban District Assessment [TUDA] program—also began in 2002.)
Results are available for the mathematics assessments administered in 2000, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017. In 2005, NAGB called for the development of a new mathematics framework. The revisions made to the mathematics framework for the 2005 assessment were intended to reflect recent curricular emphases and better assess the specific objectives for students at each grade level.
The revised mathematics framework focuses on two dimensions: mathematical content and cognitive demand. By considering these two dimensions for each item in the assessment, the framework ensures that NAEP assesses an appropriate balance of content, as well as a variety of ways of knowing and doing mathematics.
Since the 2005 changes to the mathematics framework were minimal for grades 4 and 8, comparisons over time can be made between assessments conducted before and after the framework’s implementation for these grades. The changes that the 2005 framework made to the grade 12 assessment, however, were too drastic to allow grade 12 results from before and after implementation to be directly compared. These changes included adding more questions on algebra, data analysis, and probability to reflect changes in high school mathematics standards and coursework; merging the measurement and geometry content areas; and changing the reporting scale from 0–500 to 0–300. For more information regarding the 2005 mathematics framework revisions, see https://nces.ed.gov/nationsreportcard/mathematics/frameworkcomparison.asp.
Results are available for the reading assessments administered in 2000, 2002, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017. In 2009, a new framework was developed for the 4th-, 8th-, and 12th-grade NAEP reading assessments.
Both a content alignment study and a reading trend, or bridge, study were conducted to determine if the new reading assessment was comparable to the prior assessment. Overall, the results of the special analyses suggested that the assessments were similar in terms of their item and scale characteristics and the results they produced for important demographic groups of students. Thus, it was determined that the results of the 2009 reading assessment could still be compared to those from earlier assessment years, thereby maintaining the trend lines first established in 1992. For more information regarding the 2009 reading framework revisions, see https://nces.ed.gov/nationsreportcard/reading/whatmeasure.asp.
The online Highlights report 2017 NAEP Mathematics and Reading Assessments: Highlighted Results at Grades 4 and 8 for the Nation, States, and Districts (NCES 2018-037) presents an overview of results from the NAEP 2017 mathematics and reading reports. Highlighted results include key findings for the nation, states/jurisdictions, and 27 districts that participated in the Trial Urban District Assessment (TUDA) in mathematics and reading at grades 4 and 8.
Further information on NAEP may be obtained from:
Daniel McGrath
Reporting and Dissemination Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
daniel.mcgrath@ed.gov
https://nces.ed.gov/nationsreportcard
National Household Education Surveys Program
The National Household Education Surveys Program (NHES) is a data collection system that is designed to address a wide range of education-related issues. Surveys have been conducted in 1991, 1993, 1995, 1996, 1999, 2001, 2003, 2005, 2007, 2012, and 2016. NHES targets specific populations for detailed data collection. It is intended to provide more detailed data on the topics and populations of interest than are collected through supplements to other household surveys.
NHES:1999 collected end-of-decade estimates of key indicators from the surveys conducted throughout the 1990s. Approximately 60,000 households were screened for a total of about 31,000 interviews with parents of children from birth through grade 12 (including about 6,900 infants, toddlers, and preschoolers) and adults age 16 or older not enrolled in grade 12 or below. Key indicators included participation of children in nonparental care and early childhood programs, school experiences, parent/family involvement in education at home and at school, youth community service activities, plans for future education, and adult participation in educational activities and community service.
NHES:2003 included two surveys: the Parent and Family Involvement in Education Survey and the Adult Education for Work-Related Reasons Survey (the first administration). The Parent and Family Involvement Survey expanded on the first survey fielded on this topic in 1996. For this survey, interviews were completed by the parents of about 12,400 of the 14,900 sampled children in kindergarten through grade 12, yielding a weighted unit response rate of 83 percent.
NHES:2007 fielded the Parent and Family Involvement in Education Survey and the School Readiness Survey. The Parent and Family Involvement in Education Survey was similar in design and content to the one included in the 2003 collection. New features added to this survey included questions about supplemental education services provided by schools and school districts (including use of and satisfaction with such services), as well as questions that would efficiently identify the school attended by the sampled students. For the Parent and Family Involvement Survey, interviews were completed with parents of 10,680 sampled children in kindergarten through grade 12, including 10,370 students enrolled in public or private schools and 310 homeschooled children. Parents who were interviewed about children in kindergarten through grade 2 were also asked some questions about these children’s school readiness.
The 2007 and earlier administrations of NHES used a random-digit-dial sample of landline phones and computer-assisted telephone interviewing to conduct interviews. However, due to declining response rates for all telephone surveys and the increase in households that only or mostly use a cell phone instead of a landline, the data collection method was changed to an address-based sample survey for NHES:2012. Because of this change in survey mode, readers should use caution when comparing NHES:2012 estimates to those of prior NHES administrations.
NHES:2012 included the Parent and Family Involvement in Education Survey and the Early Childhood Program Participation Survey. The Parent and Family Involvement in Education Survey gathered data on students age 20 or younger who were enrolled in kindergarten through grade 12 or who were homeschooled at equivalent grade levels. Survey questions that pertained to students enrolled in kindergarten through grade 12 requested information on various aspects of parent involvement in education (such as help with homework, family activities, and parent involvement at school), while survey questions pertaining to homeschooled students requested information on the student’s homeschooling experiences, the sources of the curriculum, and the reasons for homeschooling.
The 2012 Parent and Family Involvement in Education Survey questionnaires were completed for 17,563 (397 homeschooled and 17,166 enrolled) children, for a weighted unit response rate of 78.4 percent. The overall estimated unit response rate (the product of the screener unit response rate of 73.8 percent and the Parent and Family Involvement in Education Survey unit response rate) was 57.8 percent.
NHES:2016 used a nationally representative address-based sample covering the 50 states and the District of Columbia. The 2016 administration included a screener survey questionnaire that identified households with children or youth under age 20 and adults ages 16 to 65. A total of 206,000 households were selected for this screener, and the screener response rate was 66.4 percent. All sampled households received initial contact by mail. Although the majority of respondents completed paper questionnaires, a small sample of cases was part of a web experiment with mailed invitations to complete the survey online.
The 2016 Parent and Family Involvement in Education Survey, like its predecessor in 2012, gathered data about students age 20 or under who were enrolled in kindergarten through grade 12 or who were being homeschooled for the equivalent grades. The 2016 survey’s questions also covered aspects of parental involvement in education similar to those in the 2012 survey. The total number of completed questionnaires in the 2016 survey was 14,075 (13,523 enrolled and 552 homeschooled children), representing a population of 53.2 million students either homeschooled or enrolled in a public or private school in 2015–16. The survey’s weighted unit response rate was 74.3 percent, and the overall response rate was 49.3 percent.
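The overall response rates quoted above are the product of the screener unit response rate and the topical survey unit response rate. A minimal sketch of that arithmetic, using the published (rounded) rates for the Parent and Family Involvement in Education Survey; because the inputs are rounded, the product can differ from the published overall rate by about a tenth of a percentage point:

```python
# Overall NHES unit response rate = screener unit response rate
# multiplied by the topical survey unit response rate.
# Rates below are the published (rounded) figures for the Parent
# and Family Involvement in Education Survey.
administrations = {
    2012: (0.738, 0.784),  # (screener rate, survey rate)
    2016: (0.664, 0.743),
}

for year, (screener_rate, survey_rate) in administrations.items():
    overall_rate = screener_rate * survey_rate
    print(f"{year}: {overall_rate:.1%}")
```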
Data for the 2016 Parent and Family Involvement in Education Survey are available in Parent and Family Involvement in Education: Results From the National Household Education Surveys Program of 2016 (NCES 2017-102).
Further information on NHES may be obtained from:
Sarah Grady
Andrew Zukerberg
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
sarah.grady@ed.gov
andrew.zukerberg@ed.gov
http://nces.ed.gov/nhes
Private School Universe Survey
The purposes of the Private School Universe Survey (PSS) data collection activities are (1) to build an accurate and complete list of private schools to serve as a sampling frame for NCES sample surveys of private schools and (2) to report data on the total number of private schools, teachers, and students in the survey universe. Begun in 1989, the PSS has been conducted every 2 years, and data for the 1989–90, 1991–92, 1993–94, 1995–96, 1997–98, 1999–2000, 2001–02, 2003–04, 2005–06, 2007–08, 2009–10, 2011–12, 2013–14, and 2015–16 school years have been released. The First Look report Characteristics of Private Schools in the United States: Results From the 2015–16 Private School Universe Survey (NCES 2017-073) presents selected findings from the 2015–16 PSS.
The PSS produces data similar to those of the Common Core of Data for public schools and can be used for public-private comparisons. The data are useful for a variety of policy- and research-relevant issues, such as the growth of religiously affiliated schools, the number of private high school graduates, the length of the school year for various private schools, and the number of private school students and teachers.
The target population for this universe survey is all private schools in the United States that meet the PSS criteria of a private school (i.e., the private school is an institution that provides instruction for any of grades K through 12, has one or more teachers to give instruction, is not administered by a public agency, and is not operated in a private home).
The survey universe is composed of schools identified from a variety of sources. The main source is a list frame initially developed for the 1989–90 PSS. The list is updated regularly by matching it with lists provided by nationwide private school associations, state departments of education, and other national guides and sources that list private schools. The other source is an area frame search in approximately 124 geographic areas, conducted by the U.S. Census Bureau.
Of the 40,302 schools included in the 2009–10 sample, 10,229 were found ineligible for the survey. Of the remaining eligible schools, 28,217 responded and 1,856 did not, resulting in an unweighted response rate of 93.8 percent for the 2009–10 PSS.
Of the 39,325 schools included in the 2011–12 sample, 10,030 were considered out-of-scope (not eligible for the PSS). A total of 26,983 private schools completed a PSS interview (15.8 percent completed online), while 2,312 schools refused to participate, resulting in an unweighted response rate of 92.1 percent.
There were 40,298 schools in the 2013–14 sample; of these, 10,659 were considered out-of-scope (not eligible for the PSS). A total of 24,566 private schools completed a PSS interview (34.1 percent completed online), while 5,073 schools refused to participate, resulting in an unweighted response rate of 82.9 percent.
The 2015–16 PSS included 42,389 schools, of which 12,754 were considered out-of-scope (not eligible for the PSS). A total of 22,428 private schools completed a PSS interview and 7,207 schools failed to respond, which resulted in an unweighted response rate of 75.7 percent.
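Each unweighted PSS response rate above is the number of completed interviews divided by the number of in-scope schools (sampled schools minus out-of-scope cases). A short sketch of that calculation using the published counts:

```python
# Unweighted PSS response rate: completed interviews divided by
# in-scope schools (sampled minus out-of-scope cases).
# Counts are the published figures for each collection.
collections = {
    "2009-10": {"sampled": 40302, "out_of_scope": 10229, "completed": 28217},
    "2011-12": {"sampled": 39325, "out_of_scope": 10030, "completed": 26983},
    "2013-14": {"sampled": 40298, "out_of_scope": 10659, "completed": 24566},
    "2015-16": {"sampled": 42389, "out_of_scope": 12754, "completed": 22428},
}

for year, c in collections.items():
    in_scope = c["sampled"] - c["out_of_scope"]
    rate = c["completed"] / in_scope
    print(f"{year}: {rate:.1%}")  # 93.8%, 92.1%, 82.9%, 75.7%
```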
Further information on the PSS may be obtained from:
Steve Broughman
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
stephen.broughman@ed.gov
http://nces.ed.gov/surveys/pss
School Crime Supplement (SCS) to the National Crime Victimization Survey (NCVS)
Created as a supplement to the National Crime Victimization Survey (NCVS) and co-designed by the National Center for Education Statistics and Bureau of Justice Statistics, the School Crime Supplement (SCS) survey has been conducted in 1989, 1995, and biennially since 1999 to collect additional information about school-related victimizations on a national level. This report includes data from the 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017 collections. The SCS was designed to assist policymakers, as well as academic researchers and practitioners at federal, state, and local levels, to make informed decisions concerning crime in schools. The survey asks students a number of key questions about their experiences with and perceptions of crime and violence that occurred inside their school, on school grounds, on the school bus, or on the way to or from school. Students are asked additional questions about security measures used by their school, students’ participation in after-school activities, students’ perceptions of school rules, the presence of weapons and gangs in school, the presence of hate-related words and graffiti in school, student reports of bullying and reports of rejection at school, and the availability of drugs and alcohol in school. Students are also asked attitudinal questions relating to fear of victimization and avoidance behavior at school.
The SCS survey was conducted for a 6-month period from January through June in all households selected for the NCVS.1 Within these households, the eligible respondents for the SCS were those household members who had attended school at any time during the 6 months preceding the interview, were enrolled in grades 6–12, and were not homeschooled. In 2007, the questionnaire was changed and household members who attended school sometime during the school year of the interview were included. The age range of students covered in this report is 12–18 years of age. Eligible respondents were asked the supplemental questions in the SCS only after completing their entire NCVS interview.
In 2001, the SCS survey instrument was modified. In 1995 and 1999, “at school” had been defined for respondents as meaning in the school building, on the school grounds, or on a school bus. In 2001, the definition of “at school” was changed to mean in the school building, on school property, on a school bus, or going to and from school. This change was made to align the SCS definition with the one used in the NCVS, and it has been retained in subsequent SCS collections. Cognitive interviews conducted by the U.S. Census Bureau on the 1999 SCS suggested that modifying the definition of “at school” would not have a substantial impact on the estimates.
A total of about 9,700 students participated in the 1995 SCS, and 8,400 students participated in both the 1999 and 2001 SCS. In 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017, the numbers of students participating were 7,200, 6,300, 5,600, 5,000, 6,500, 5,700, 5,500, and 7,100, respectively.
In the 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017 SCS collections, the household completion rates were 95 percent, 94 percent, 93 percent, 92 percent, 91 percent, 90 percent, 92 percent, 91 percent, 86 percent, 82 percent, and 76 percent, respectively, and the student completion rates were 78 percent, 78 percent, 77 percent, 70 percent, 62 percent, 58 percent, 56 percent, 63 percent, 60 percent, 58 percent, and 52 percent, respectively. The overall SCS unit response rate (calculated by multiplying the household completion rate by the student completion rate) was about 74 percent in 1995, 73 percent in 1999, 72 percent in 2001, 64 percent in 2003, 56 percent in 2005, 53 percent in 2007, 51 percent in 2009, 57 percent in 2011, 51 percent in 2013, 48 percent in 2015, and 40 percent in 2017. (Prior to 2011, overall SCS unit response rates were unweighted; starting in 2011, overall SCS unit response rates are weighted.)
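As a check on the figures above, the overall SCS unit response rate is the household completion rate multiplied by the student completion rate. A minimal sketch using the published (rounded) rates for two collections; because the inputs are rounded, the product can differ from the published overall rate by about a percentage point in some years:

```python
# Overall SCS unit response rate = household completion rate
# multiplied by the student completion rate. Rates are the
# published (rounded) percentages for the selected years.
rates = {
    1995: (0.95, 0.78),  # (household rate, student rate)
    2017: (0.76, 0.52),
}

for year, (household_rate, student_rate) in rates.items():
    overall_rate = household_rate * student_rate
    print(f"{year}: {overall_rate:.0%}")
```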
There are two types of nonresponse: unit and item nonresponse. NCES requires that any stage of data collection within a survey that has a unit base-weighted response rate of less than 85 percent be evaluated for the potential magnitude of unit nonresponse bias before the data or any analysis using the data may be released (U.S. Department of Education 2003). Because of the low unit response rates in 2005, 2007, 2009, 2011, 2013, 2015, and 2017, a unit nonresponse bias analysis was conducted for each of those years.
Unit response rates indicate how many sampled units completed interviews. Because interviews with students could be completed only after households had responded to the NCVS, the unit completion rate for the SCS reflects both the household interview completion rate and the student interview completion rate. Nonresponse can greatly affect the strength and application of survey data: it increases variance by reducing the effective sample size, and it can produce bias if the nonrespondents differ from the respondents on characteristics of interest.
For nonresponse bias to occur, subgroups must differ both in their response rates and in their responses to particular survey variables; the magnitude of unit nonresponse bias is determined by the response rate and by the differences between respondents and nonrespondents on key survey variables. Although the bias analysis cannot measure response bias directly, since the SCS is a sample survey and it is not known how the full population would have responded, the SCS sampling frame includes several key student and school characteristics that are known for both respondents and nonrespondents: sex, age, race/ethnicity, household income, region, and urbanicity, all of which are associated with student victimization. To the extent that these groups respond at different rates, nonresponse bias is a concern.
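The magnitude of unit nonresponse bias described above is conventionally estimated, for a variable known from the sampling frame, as the nonresponse rate multiplied by the difference between respondent and nonrespondent means. A minimal sketch of that estimate; the figures used are illustrative only, not published SCS values:

```python
# Conventional estimate of unit nonresponse bias for a frame
# variable: the nonresponse rate multiplied by the difference
# between respondent and nonrespondent means.
def nonresponse_bias(response_rate, respondent_mean, nonrespondent_mean):
    return (1 - response_rate) * (respondent_mean - nonrespondent_mean)

# Hypothetical example: share of sampled students in urban areas,
# known from the frame for both respondents and nonrespondents.
bias = nonresponse_bias(0.56, 0.27, 0.33)
print(f"{bias:+.4f}")  # negative: respondents underrepresent urban students
```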
In 2005, the analysis of unit nonresponse bias found evidence of bias for the race, household income, and urbanicity variables. White (non-Hispanic) and Other (non-Hispanic) respondents had higher response rates than Black (non-Hispanic) and Hispanic respondents. Respondents from households with an income of $35,000–$49,999 and $50,000 or more had higher response rates than those from households with incomes of less than $7,500, $7,500–$14,999, $15,000–$24,999, and $25,000–$34,999. Respondents who live in urban areas had lower response rates than those who live in rural or suburban areas. Although the extent of nonresponse bias cannot be determined, weighting adjustments, which corrected for differential response rates, should have reduced the problem.
In 2007, the analysis of unit nonresponse bias found evidence of bias by the race/ethnicity and household income variables. Hispanic respondents had lower response rates than respondents of other races/ethnicities. Respondents from households with an income of $25,000 or more had higher response rates than those from households with incomes of less than $25,000. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.
In 2009, the analysis of unit nonresponse bias found evidence of potential bias for the race/ethnicity and urbanicity variables. White students and students of other races/ethnicities had higher response rates than did Black and Hispanic respondents. Respondents from households located in rural areas had higher response rates than those from households located in urban areas. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.
In 2011, the analysis of unit nonresponse bias found evidence of potential bias for the age variable. Respondents 12 to 17 years old had higher response rates than did 18-year-old respondents in the NCVS and SCS interviews. Weighting the data adjusts for unequal selection probabilities and for the effects of nonresponse. The weighting adjustments that correct for differential response rates are created by region, age, race, and sex, and should have reduced the effect of nonresponse.
In 2013, the analysis of unit nonresponse bias found evidence of potential bias for the age, region, and Hispanic origin variables in the NCVS interview response. Within the SCS portion of the data, only the age and region variables showed significant unit nonresponse bias. Further analysis indicated only the age 14 and the west region categories showed positive response biases that were significantly different from some of the other categories within the age and region variables. Based on the analysis, nonresponse bias seems to have little impact on the SCS results.
In 2015, the analysis of unit nonresponse bias found evidence of potential bias for age, race, Hispanic origin, urbanicity, and region in the NCVS interview response. For the SCS interview, the age, race, urbanicity, and region variables showed significant unit nonresponse bias. The age 14 group and rural areas showed positive response biases that were significantly different from other categories within the age and urbanicity variables. The northeast region and Asian race group showed negative response biases that were significantly different from other categories within the region and race variables. These results provide evidence that these subgroups may have a nonresponse bias associated with them.
In 2017, the analysis of unit nonresponse bias found that the race/ethnicity and census region variables showed significant differences in response rates between different race/ethnicity and census region subgroups. Respondent and nonrespondent distributions were significantly different for the race/ethnicity subgroup only. However, after using weights adjusted for person nonresponse, there was no evidence that these response differences introduced nonresponse bias in the final victimization estimates.
Response rates for SCS survey items in all survey years were high—typically over 95 percent of all eligible respondents, meaning there is little potential for item nonresponse bias for most items in the survey. The weighted data permit inferences about the eligible student population who were enrolled in schools in all SCS data years.
For more information about SCS, contact:
Rachel Hansen
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
(202) 245-7082
rachel.hansen@ed.gov
http://nces.ed.gov/programs/crime
1 For the NCVS, households were selected using a stratified, multistage cluster design. In the first stage, the primary sampling units (PSUs), consisting of counties or groups of counties, were selected. In the second stage, smaller areas, called Enumeration Districts (EDs), were selected from each sampled PSU. Finally, from selected EDs, clusters of four households, called segments, were selected for interviews. At each stage, the selection was done proportionate to population size in order to create a self-weighting sample. The final sample was augmented to account for households constructed after the decennial census. The number of NCVS-eligible households in the 2017 sample was 192,111.