
Appendix A. Guide to Sources

The indicators in this report present data from a variety of sources. Brief descriptions of these sources and their data collections and methods are presented below, grouped by sponsoring organization. Most of these sources are federal surveys, and many are conducted by the National Center for Education Statistics (NCES).

The data were collected using a variety of methods, including surveys of a universe (such as all colleges) or of a sample, as well as compilations of administrative records.

National Center for Education Statistics (NCES)

Common Core of Data

The Common Core of Data (CCD) is NCES's primary database on public elementary and secondary education in the United States. It is a comprehensive, annual, national statistical database of all public elementary and secondary schools and school districts containing data designed to be comparable across all states. This database can be used to select samples for other NCES surveys and provide basic information and descriptive statistics on public elementary and secondary schools and schooling in general.

The CCD collects statistical information annually from approximately 100,000 public elementary and secondary schools and approximately 18,000 public school districts (including supervisory unions and regional education service agencies) in the 50 states, the District of Columbia, Department of Defense (DoD) dependents schools, the Bureau of Indian Education (BIE), Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. Three categories of information are collected in the CCD survey: general descriptive information on schools and school districts; data on students and staff; and fiscal data. The general school and district descriptive information includes name, address, phone number, and type of locale; the data on students and staff include selected demographic characteristics; and the fiscal data pertain to revenues and current expenditures.

The CCD survey consists of five components: The Public Elementary/Secondary School Universe Survey, the Local Education Agency (School District) Universe Survey, the State Nonfiscal Survey of Public Elementary/Secondary Education, the National Public Education Financial Survey (NPEFS), and the School District Finance Survey (F-33). Indicators 6 (Elementary and Secondary Enrollment) and 7 (English Language Learners) report data from the State Nonfiscal Survey of Public Elementary/Secondary Education.

State Nonfiscal Survey of Public Elementary/Secondary Education

The State Nonfiscal Survey of Public Elementary/Secondary Education for the 2012–13 school year provides state-level, aggregate information about students and staff in public elementary and secondary education. It includes data from the 50 states, the District of Columbia, Puerto Rico, the U.S. Virgin Islands, the Northern Mariana Islands, Guam, and American Samoa. The DoD dependents schools (overseas and domestic) and the BIE are also included in the survey universe. This survey covers public school student membership by grade, race/ethnicity, and state or jurisdiction, as well as the number of staff in public schools by category and state or jurisdiction. Beginning with the 2006–07 school year, the numbers of diploma recipients and other high school completers are no longer included in the State Nonfiscal Survey of Public Elementary/Secondary Education file. These data are now published in the public-use CCD State Dropout and Completion Data File.

For more information on the nonfiscal CCD data, contact:

Patrick Keaton
Administrative Data Division
Elementary and Secondary Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
patrick.keaton@ed.gov
http://nces.ed.gov/ccd

EDFacts

EDFacts is a centralized data collection through which state education agencies submit K–12 education data to the U.S. Department of Education (ED). All data in EDFacts are organized into "data groups" and reported to ED using defined file specifications. Depending on the data group, state education agencies may submit aggregate counts for the state as a whole or detailed counts for individual schools or school districts. EDFacts does not collect student-level records. The entities that are required to report EDFacts data vary by data group but may include the 50 states, the District of Columbia, the Department of Defense (DoD) dependents schools, the Bureau of Indian Education, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. More information about EDFacts file specifications and data groups can be found at http://www.ed.gov/EDFacts.

EDFacts is a universe collection and is not subject to sampling error, but nonsampling errors such as nonresponse and inaccurate reporting may occur. The U.S. Department of Education attempts to minimize nonsampling errors by training data submission coordinators and reviewing the quality of state data submissions. However, anomalies may still be present in the data.

Differences in state data collection systems may limit the comparability of EDFacts data across states and across time. To build EDFacts files, state education agencies rely on data that were reported by their schools and school districts. The systems used to collect these data are evolving rapidly and differ from state to state.

In some cases, EDFacts data may not align with data reported on state education agency websites. States may update their websites on schedules different from those they use to report data to ED. Furthermore, ED may use methods for protecting the privacy of individuals represented within the data that could be different from the methods used by an individual state.

Indicator 7 (English Language Learners) reports EDFacts data on public school students participating in programs for English language learners. Data on Limited English Proficient (LEP) students enrolled in LEP programs are collected in data group 123 within file 046; EDFacts collects this data group on behalf of the National Center for Education Statistics (NCES). The definition for this data group is "The unduplicated number of limited English proficient (LEP) students enrolled in English language instruction educational programs designed for LEP students." The reporting period is October 1 or the closest school day to October 1. For more information about this data group, please see file specification 046 for the relevant school year, available at http://www2.ed.gov/about/inits/ed/edfacts/file-specifications.html.

For more information about EDFacts, contact:

EDFacts

Administrative Data Division
Elementary/Secondary Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
EDFacts@ed.gov
http://www2.ed.gov/about/inits/ed/edfacts/index.html

High School Longitudinal Study of 2009

The High School Longitudinal Study of 2009 (HSLS:09) is a nationally representative, longitudinal study of approximately 21,000 9th-grade students in 944 schools who will be followed through their secondary and postsecondary years. The study focuses on understanding students' trajectories from the beginning of high school into postsecondary education, the workforce, and beyond. The HSLS:09 questionnaire is focused on, but not limited to, information on science, technology, engineering, and mathematics (STEM) education and careers. It is designed to provide data on mathematics and science education, the changing high school environment, and postsecondary education. This study features a new student assessment in algebra skills, reasoning, and problem solving and includes surveys of students, their parents, math and science teachers, and school administrators, as well as a new survey of school counselors.

The HSLS:09 base year took place in the 2009–10 school year, with a randomly selected sample of fall-term 9th-graders in more than 900 public and private high schools that had both a 9th and an 11th grade. Students took a mathematics assessment and survey online. Students' parents, principals, and mathematics and science teachers and the school's lead counselor completed surveys on the phone or online.

The HSLS:09 student questionnaire includes interest and motivation items for measuring key factors predicting choice of postsecondary paths, including majors and eventual careers. This study explores the roles of different factors in the development of a student's commitment to attend college and then take the steps necessary to succeed in college (the right courses, courses in specific sequences, etc.). Questionnaires in this study have asked questions of students and parents regarding reasons for selecting specific colleges (e.g., academic programs, financial aid and access prices, and campus environment).

The first follow-up of HSLS:09 occurred in the spring of 2012, when most sample members were in the 11th grade. Data files and documentation for the first follow-up were released in fall 2013 and are available on the NCES website.

A between-round postsecondary status update survey took place in the spring of students' expected graduation year (2013). It asked respondents about college applications, acceptances, and rejections, as well as their actual college choices. In the fall of 2013 and the spring of 2014, high school transcripts were collected and coded. Indicators 12 (High School Coursetaking) and 13 (Advanced Placement and International Baccalaureate Coursetaking) in this report use data from the First Follow-up and High School Transcript Study of HSLS:09.

A full second follow-up was conducted in 2016, when most sample members were 3 years beyond high school graduation. Additional follow-ups are planned, to at least age 30.

For more information on HSLS:09, contact:

Elise Christopher
Sample Surveys Division
Longitudinal Surveys Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
hsls09@ed.gov
http://nces.ed.gov/surveys/hsls09

Integrated Postsecondary Education Data System

The Integrated Postsecondary Education Data System (IPEDS) surveys approximately 7,500 postsecondary institutions, including universities and colleges, as well as institutions offering technical and vocational education beyond the high school level. IPEDS, an annual universe collection that began in 1986, replaced the Higher Education General Information Survey (HEGIS). In order to present data in a timely manner, this report uses "provisional" IPEDS data for the most recent years. These data have been fully reviewed, edited, and imputed, but do not incorporate data revisions submitted by institutions after the close of data collection.

IPEDS consists of interrelated survey components that provide information on postsecondary institutions, student enrollment, programs offered, degrees and certificates conferred, and both the human and financial resources involved in the provision of institutionally based postsecondary education. Prior to 2000, the IPEDS survey had the following subject-matter components: Graduation Rates; Fall Enrollment; Institutional Characteristics; Completions; Salaries, Tenure, and Fringe Benefits of Full-Time Faculty; Fall Staff; Finance; and Academic Libraries (in 2000, the Academic Libraries component became a survey separate from IPEDS). Since 2000, IPEDS survey components occurring in a particular collection year have been organized into three seasonal collection periods: fall, winter, and spring. The Institutional Characteristics and Completions components first took place during the fall 2000 collection; the Employees by Assigned Position (EAP), Salaries, and Fall Staff components first took place during the winter 2001–02 collection; and the Enrollment, Student Financial Aid, Finance, and Graduation Rates components first took place during the spring 2001 collection. In the winter 2005–06 data collection, the EAP, Fall Staff, and Salaries components were merged into the Human Resources component. During the 2007–08 collection year, the Enrollment component was broken into two separate components: 12-Month Enrollment (taking place in the fall collection) and Fall Enrollment (taking place in the spring collection). In the 2011–12 IPEDS data collection year, the Student Financial Aid component was moved to the winter data collection to aid in the timing of the net price of attendance calculations displayed on the College Navigator (http://nces.ed.gov/collegenavigator). 
In the 2012–13 IPEDS data collection year, the Human Resources component was moved from the winter data collection to the spring data collection, and in the 2013–14 data collection year, the Graduation Rates and Graduation Rates 200% components were moved from the spring data collection to the winter data collection. In this report, Indicators 22 (Degrees Awarded), 23 (Undergraduate and Graduate Degree Fields), and 24 (STEM Degrees) present data from the Completions component; Indicator 21 (Postsecondary Graduation Rates) presents data from the Graduation Rates component; and Indicator 19 (Undergraduate and Graduate Enrollment) presents data from the Fall Enrollment component.

Beginning in 2008–09, the first-professional degree category was combined with the doctor's degree category. However, some degrees formerly identified as first-professional that take more than two full-time-equivalent academic years to complete, such as those in Theology (M.Div., M.H.L./Rav), are included in the master's degree category. Doctor's degrees were broken out into three distinct categories: research/scholarship, professional practice, and other doctor's degrees.

IPEDS race/ethnicity data collection also changed in 2008–09. The "Asian" race category is now separate from a "Native Hawaiian or Other Pacific Islander" category, and a new category of "Two or more races" was added.

The degree-granting institutions portion of IPEDS is a census of colleges that award associate's or higher degrees and are eligible to participate in Title IV financial aid programs. Prior to 1993, data from technical and vocational institutions were collected through a sample survey. Beginning in 1993, all data have been gathered in a census of all postsecondary institutions. Beginning in 1997, the survey was restricted to institutions participating in Title IV programs. The data presented in this report from 1993 forward are based on lists of all institutions and are not subject to sampling errors.

The classification of institutions offering college and university education changed as of 1996. Prior to 1996, institutions that had courses leading to an associate's or higher degree or that had courses accepted for credit toward those degrees were considered higher education institutions. Higher education institutions were accredited by an agency or association that was recognized by the U.S. Department of Education or were recognized directly by the Secretary of Education. The newer standard includes institutions that award associate's or higher degrees and that are eligible to participate in Title IV federal financial aid programs. The impact of this change on data collected in 1996 was not large.

For more information on IPEDS, contact:

Richard Reeves
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
richard.reeves@ed.gov
http://nces.ed.gov/ipeds

Fall (Completions)

This survey was part of the HEGIS series throughout its existence. However, the degree classification taxonomy was revised in 1970–71, 1982–83, 1991–92, 2002–03, and 2009–10. Collection of degree data has been maintained through IPEDS.

Degrees-conferred trend tables arranged by the 2009–10 classification are included in the Digest of Education Statistics to provide consistent data from 1970–71 through the most recent year. Data in this edition on associate's and other formal awards below the baccalaureate degree, by field of study, cannot be made comparable with figures from years prior to 1982–83. The nonresponse rate does not appear to be a significant source of nonsampling error for this survey. The response rate over the years has been high; for the fall 2014 Completions component, it rounded to 100.0 percent. Because of the high response rate, there was no need to conduct a nonresponse bias analysis. Imputation methods for the fall 2014 Completions component are discussed in the 2014–15 Integrated Postsecondary Education Data System (IPEDS) Methodology Report (NCES 2015-098).

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) indicated that most Title IV institutions supplying revised data on completions in 2003–04 were able to supply missing data for the prior year. The small differences between imputed data for the prior year and the revised actual data supplied by the institution indicated that the imputed values produced by NCES were acceptable.

For more information on the IPEDS Completions component, contact:

Imani Stutely
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
imani.stutely@ed.gov
http://nces.ed.gov/ipeds

Winter (Graduation Rates)

In IPEDS data collection years 2012–13 and earlier, the Graduation Rates component was collected during the spring collection. In the IPEDS 2013–14 data collection year, however, the Graduation Rates collection was moved to the winter data collection.

The 2014–15 Graduation Rates component collected counts of full-time, first-time degree/certificate-seeking undergraduate students beginning their postsecondary education in the specified cohort year and their completion status as of 150 percent of normal program completion time at the same institution where the students started. If 150 percent of normal program completion time extended beyond August 31, 2014, the counts as of that date were collected. Four-year institutions used 2008 as the cohort year, while less-than-4-year institutions used 2011 as the cohort year. Of the 6,433 institutions that were expected to respond to the Graduation Rates component, 6,430 institutions responded, resulting in a response rate that rounded to 100 percent.

The 2014–15 Graduation Rates 200 Percent component was designed to combine information reported in a prior collection via the Graduation Rates component with current information about the same cohort of students. From previously collected data, the following elements were obtained: the number of students entering the institution as full-time, first-time degree/certificate-seeking students in a cohort year; the number of students in this cohort completing within 100 and 150 percent of normal program completion time; and the number of cohort exclusions (such as students who left for military service). Then the count of additional cohort exclusions and additional program completers between 151 and 200 percent of normal program completion time was collected. Four-year institutions reported on bachelor's or equivalent degree-seeking students and used cohort year 2006 as the reference period, while less-than-4-year institutions reported on all students in the cohort and used cohort year 2010 as the reference period. Of the 5,928 institutions that were expected to respond to the Graduation Rates 200 Percent component, 5,926 institutions responded, resulting in a response rate that rounded to 100 percent.
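The rounded response rates reported above follow directly from the counts of responding and expected institutions. As an illustrative check (not part of the report's own processing), the arithmetic can be sketched as:

```python
# Illustrative sketch: a response rate is the share of expected
# institutions that actually responded, expressed as a percentage.
def response_rate(responded: int, expected: int) -> float:
    """Return the response rate as a percentage."""
    return 100 * responded / expected

# Graduation Rates component: 6,430 of 6,433 institutions responded.
gr_rate = response_rate(6_430, 6_433)     # about 99.95 percent

# Graduation Rates 200 Percent component: 5,926 of 5,928 responded.
gr200_rate = response_rate(5_926, 5_928)  # about 99.97 percent

# Both figures round to 100 percent, as stated in the text.
print(round(gr_rate), round(gr200_rate))
```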

For more information on the IPEDS Graduation Rates component, contact:

Andrew Mary
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
andrew.mary@ed.gov
http://nces.ed.gov/ipeds/

Spring (Fall Enrollment)

This survey has been part of the HEGIS and IPEDS series since 1966. Response rates for this survey have been relatively high, generally exceeding 85 percent. Beginning in 2000, when web-based data collection was introduced, higher response rates have been attained. In the spring 2015 data collection, the Fall Enrollment component covered fall 2014. Of the 7,292 institutions that were expected to respond, 7,284 responded, for a response rate that rounded to 100 percent. Data collection procedures for the Fall Enrollment component of the spring 2015 data collection are presented in Enrollment and Employees in Postsecondary Institutions, Fall 2014; and Financial Statistics and Academic Libraries, Fiscal Year 2014: First Look (Provisional Data) (NCES 2016-005).

Beginning with the fall 1986 survey and the introduction of IPEDS (see above), the survey was redesigned. The survey allows (in alternating years) for the collection of age and residence data. Beginning in 2000, the survey collected instructional activity and unduplicated headcount data, which are needed to compute a standardized, full-time-equivalent (FTE) enrollment statistic for the entire academic year. As of 2007–08, the timeliness of the instructional activity data has been improved by collecting these data in the fall as part of the 12-Month Enrollment component instead of in the spring as part of the Fall Enrollment component.

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) showed that public institutions made the majority of changes to enrollment data during the 2004 revision period. The majority of changes were made to unduplicated headcount data, with the net differences between the original data and the revised data at about 1 percent. Part-time students in general and enrollment in private not-for-profit institutions were often underestimated. The fewest changes by institutions were to Classification of Instructional Programs (CIP) code data. (The CIP is a taxonomic coding scheme that contains titles and descriptions of primarily postsecondary instructional programs.)

For more information on the IPEDS Fall Enrollment component, contact:

Chris Cody
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
christopher.cody@ed.gov
http://nces.ed.gov/ipeds

National Assessment of Educational Progress

The National Assessment of Educational Progress (NAEP) is a series of cross-sectional studies initially implemented in 1969 to assess the educational achievement of U.S. students and monitor changes in those achievements. In the main national NAEP, a nationally representative sample of students is assessed at grades 4, 8, and 12 in various academic subjects. The assessments are based on frameworks developed by the National Assessment Governing Board (NAGB). Assessment items include both multiple-choice and constructed-response (requiring written answers) items. Results are reported in two ways: by average score and by achievement level. Average scores are reported for the nation, for participating states and jurisdictions, and for subgroups of the population. Percentages of students performing at or above three achievement levels (Basic, Proficient, and Advanced) are also reported for these groups.

From 1990 until 2001, main NAEP was conducted for states and other jurisdictions that chose to participate. In 2002, under the provisions of the No Child Left Behind Act of 2001, all states began to participate in main NAEP, and an aggregate of all state samples replaced the separate national sample.

Results are available for the mathematics assessments administered in 2000, 2003, 2005, 2007, 2009, 2011, 2013, and 2015. In 2005, NAGB called for the development of a new mathematics framework. The revisions made to the mathematics framework for the 2005 assessment were intended to reflect recent curricular emphases and better assess the specific objectives for students at each grade level.

The revised mathematics framework focuses on two dimensions: mathematical content and cognitive demand. By considering these two dimensions for each item in the assessment, the framework ensures that NAEP assesses an appropriate balance of content, as well as a variety of ways of knowing and doing mathematics.

Since the 2005 changes to the mathematics framework were minimal for grades 4 and 8, comparisons over time can be made between assessments conducted before and after the framework's implementation for these grades. The changes that the 2005 framework made to the grade 12 assessment, however, were too drastic to allow grade 12 results from before and after implementation to be directly compared. These changes included adding more questions on algebra, data analysis, and probability to reflect changes in high school mathematics standards and coursework; merging the measurement and geometry content areas; and changing the reporting scale from 0–500 to 0–300. For more information regarding the 2005 mathematics framework revisions, see https://nces.ed.gov/nationsreportcard/mathematics/frameworkcomparison.asp.

Results are available for the reading assessments administered in 2000, 2002, 2003, 2005, 2007, 2009, 2011, 2013, and 2015. In 2009, a new framework was developed for the 4th-, 8th-, and 12th-grade NAEP reading assessments.

Both a content alignment study and a reading trend or bridge study were conducted to determine if the new assessment was comparable to the prior assessment. Overall, the results of the special analyses suggested that the assessments were similar in terms of their item and scale characteristics and the results they produced for important demographic groups of students. Thus, it was determined that the results of the 2009 reading assessment could still be compared to those from earlier assessment years, thereby maintaining the trend lines first established in 1992. For more information regarding the 2009 reading framework revisions, see http://nces.ed.gov/nationsreportcard/reading/whatmeasure.asp.

For more information on NAEP, contact:

Daniel McGrath
Assessments Division
Reporting and Dissemination Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
daniel.mcgrath@ed.gov
http://nces.ed.gov/nationsreportcard

National Household Education Surveys Program

The National Household Education Surveys Program (NHES) is a data collection system that is designed to address a wide range of education-related issues. Surveys have been conducted in 1991, 1993, 1995, 1996, 1999, 2001, 2003, 2005, 2007, and 2012. NHES targets specific populations for detailed data collection. It is intended to provide more detailed data on the topics and populations of interest than are collected through supplements to other household surveys. Indicator 5 (Early Child Care and Education Arrangements) reports data from the 2012 NHES (Early Childhood Program Participation Survey).

The 2012 Early Childhood Program Participation Survey collected data on the early care and education arrangements and early learning of children from birth through the age of 5 who were not yet enrolled in kindergarten. Questionnaires were completed for 7,893 children, for a weighted unit response rate of 78.7 percent. The overall estimated weighted unit response rate (the product of the screener weighted unit response rate of 73.8 percent and the Early Childhood Program Participation Survey unit weighted response rate) was 58.1 percent.
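The overall weighted unit response rate described above is simply the product of the two stage-level rates. A minimal sketch of that arithmetic (illustrative only, not taken from NHES documentation):

```python
# Illustrative arithmetic: the overall weighted unit response rate for
# a two-stage household survey is the product of the screener rate and
# the topical survey rate.
screener_rate = 0.738  # screener weighted unit response rate (73.8%)
ecpp_rate = 0.787      # ECPP survey weighted unit response rate (78.7%)

overall = screener_rate * ecpp_rate
print(round(overall * 100, 1))  # → 58.1, matching the reported figure
```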

For more information on NHES, contact:

Sarah Grady
Sample Surveys Division
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
sarah.grady@ed.gov
http://nces.ed.gov/nhes

National Postsecondary Student Aid Study

The National Postsecondary Student Aid Study (NPSAS) is a comprehensive nationwide study of how students and their families pay for postsecondary education. Data gathered from the study are used to help guide future federal student financial aid policy. The study covers nationally representative samples of undergraduate, graduate, and first-professional students in the 50 states, the District of Columbia, and Puerto Rico, including students attending less-than-2-year institutions, community colleges, 4-year colleges, and universities. Participants include students who do not receive aid and those who do receive financial aid. Since NPSAS identifies nationally representative samples of student subpopulations of interest to policymakers and obtains baseline data for longitudinal study of these subpopulations, data from the study provide the base-year sample for the Beginning Postsecondary Students (BPS) longitudinal study and the Baccalaureate and Beyond (B&B) longitudinal study.

Originally, NPSAS was conducted every 3 years. Beginning with the 1999–2000 study (NPSAS:2000), NPSAS has been conducted every 4 years. Indicator 20 (Financial Aid) reports data from the 1999–2000, 2003–04, 2007–08, and 2011–12 NPSAS studies.

NPSAS:2000 included nearly 62,000 students (50,000 undergraduates and almost 12,000 graduate students) from 1,000 postsecondary institutions. NPSAS:04 collected data on about 80,000 undergraduates and 11,000 graduate students from 1,400 postsecondary institutions. For NPSAS:08, about 114,000 undergraduate students and 14,000 graduate students who were enrolled in postsecondary education during the 2007–08 school year were selected from more than 1,730 postsecondary institutions.

NPSAS:12 sampled about 95,000 undergraduates and 16,000 graduate students from approximately 1,500 postsecondary institutions. Public access to the data is available online through PowerStats (http://nces.ed.gov/datalab/).

For more information on NPSAS, contact:

Aurora D'Amico
Tracy Hunt-White
Sample Surveys Division
Longitudinal Surveys Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
aurora.damico@ed.gov
tracy.hunt-white@ed.gov
http://nces.ed.gov/npsas

Private School Universe Survey

The purposes of the Private School Universe Survey (PSS) data collection activities are (1) to build an accurate and complete list of private schools to serve as a sampling frame for NCES sample surveys of private schools and (2) to report data on the total number of private schools, teachers, and students in the survey universe. Begun in 1989 under the U.S. Census Bureau, the PSS has been conducted every 2 years, and data for the 1989–90, 1991–92, 1993–94, 1995–96, 1997–98, 1999–2000, 2001–02, 2003–04, 2005–06, 2007–08, 2009–10, and 2011–12 school years have been released.

The PSS produces data similar to those of the Common Core of Data for public schools and can be used for public-private comparisons. The data are useful for a variety of policy- and research-relevant issues, such as the growth of religiously affiliated schools, the number of private high school graduates, the length of the school year for various private schools, and the number of private school students and teachers. In this report, Indicator 6 (Elementary and Secondary Enrollment) uses PSS data for private school student enrollment.

The target population for this universe survey is all private schools in the United States that meet the PSS criteria of a private school (i.e., the private school is an institution that provides instruction for any of grades K through 12, has one or more teachers to give instruction, is not administered by a public agency, and is not operated in a private home).

The survey universe is composed of schools identified from a variety of sources. The main source is a list frame initially developed for the 1989–90 PSS. The list is updated regularly by matching it with lists provided by nationwide private school associations, state departments of education, and other national guides and sources that list private schools. The other source is an area frame search in approximately 124 geographic areas, conducted by the U.S. Census Bureau.

Of the 39,325 schools included in the 2011–12 sample, 10,030 cases were considered out of scope (not eligible for the PSS). A total of 26,983 private schools completed a PSS interview (15.8 percent completed online), while 2,312 schools refused to participate, resulting in an unweighted response rate of 92.1 percent.
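The unweighted response rate reported above follows directly from the counts in the paragraph: completed interviews divided by in-scope (eligible) cases. A minimal sketch of that arithmetic, using only the figures given in the text:

```python
# Unweighted PSS response-rate arithmetic for 2011-12 (figures from the text above).
sampled = 39_325        # schools in the 2011-12 sample
out_of_scope = 10_030   # cases not eligible for the PSS
completed = 26_983      # schools that completed a PSS interview

in_scope = sampled - out_of_scope          # eligible cases: 29,295
response_rate = completed / in_scope * 100
print(f"{response_rate:.1f}%")             # prints 92.1%
```

Note that refusals (2,312) and other noninterviews remain in the denominator; only out-of-scope cases are removed.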

For more information on the PSS, contact:

Steve Broughman
Sample Surveys Division
Cross-Sectional Surveys Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
stephen.broughman@ed.gov

http://nces.ed.gov/surveys/pss

Other Department of Education Agencies

Office for Civil Rights

Civil Rights Data Collection

The U.S. Department of Education's Office for Civil Rights (OCR) has surveyed the nation's public elementary and secondary schools since 1968. The survey was first known as the OCR Elementary and Secondary School (E&S) Survey; in 2004, it was renamed the Civil Rights Data Collection (CRDC). The survey provides information about the enrollment of students in public schools in every state and about some education services provided to those students. These data are reported by race/ethnicity, sex, and disability.

Data in the survey are collected pursuant to 34 C.F.R. Section 100.6(b) of the Department of Education regulation implementing Title VI of the Civil Rights Act of 1964. The requirements are also incorporated by reference in Department regulations implementing Title IX of the Education Amendments of 1972, Section 504 of the Rehabilitation Act of 1973, and the Age Discrimination Act of 1975. School, district, state, and national data are currently available. Data from individual public schools and districts are used to generate national and state data.

The CRDC has generally been conducted biennially in each of the 50 states plus the District of Columbia. The 2009–10 CRDC was collected from a sample of approximately 7,000 school districts and over 72,000 schools in those districts. It was made up of two parts: part 1 contained beginning-of-year "snapshot" data and part 2 contained cumulative, or end-of-year, data.

The 2011–12 CRDC survey, which collected data from approximately 16,500 school districts and 97,000 schools, was the first CRDC survey since 2000 that included data from every public school district and school in the nation. The 2013–14 CRDC survey also collected information from a universe of every public school district and school in the nation.

For more information on the CRDC, contact:
Office for Civil Rights
U.S. Department of Education
400 Maryland Avenue SW
Washington, DC 20202
OCR@ed.gov
http://www.ed.gov/about/offices/list/ocr/data.html

Office of Special Education Programs

Annual Report to Congress on the Implementation of the Individuals with Disabilities Education Act

The Individuals with Disabilities Education Act (IDEA) is a law ensuring services to children with disabilities throughout the nation. IDEA governs how states and public agencies provide early intervention, special education, and related services to more than 6.5 million eligible infants, toddlers, children, and youth with disabilities.

IDEA, formerly the Education of the Handicapped Act (EHA), requires the Secretary of Education to transmit to Congress annually a report describing the progress made in serving the nation's children with disabilities. This annual report contains information on children served by public schools under the provisions of Part B of IDEA and on children served in state-operated programs for persons with disabilities under Chapter I of the Elementary and Secondary Education Act. Indicator 8 (Children with Disabilities) reports data on children served under Part B of IDEA.

Statistics on children receiving special education and related services in various settings and school personnel providing such services are reported in an annual submission of data to the Office of Special Education Programs (OSEP) by the 50 states, the District of Columbia, the Bureau of Indian Education schools, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, the U.S. Virgin Islands, the Federated States of Micronesia, Palau, and the Marshall Islands.

The child count information is based on the number of children with disabilities receiving special education and related services on December 1 of each year. Count information is available from http://www.ideadata.org.

Since all participants in programs for persons with disabilities are reported to OSEP, the data are not subject to sampling error. However, nonsampling error can arise from a variety of sources. Some states produce counts of students receiving special education services by disability category only because Part B of the EHA requires them to do so; in states that produce such counts for their own purposes as well, definitions and labeling practices vary.

Further information on this annual report to Congress may be obtained from:

Office of Special Education Programs
Office of Special Education and Rehabilitative Services
U.S. Department of Education
400 Maryland Avenue SW
Washington, DC 20202-7100
http://www.ed.gov/about/reports/annual/osep/index.html
http://idea.ed.gov/
http://www.ideadata.org

Other Governmental Agencies and Programs

Bureau of Labor Statistics

Consumer Price Indexes

The Consumer Price Index (CPI) represents changes in prices of all goods and services purchased for consumption by urban households. Indexes are available for two population groups: a CPI for All Urban Consumers (CPI-U) and a CPI for Urban Wage Earners and Clerical Workers (CPI-W). Unless otherwise specified, data are adjusted for inflation using the CPI-U. These values are generally adjusted to a school-year basis by averaging the July through June figures. Price indexes are available for the United States, the four Census regions, size of city, cross-classifications of regions and size classes, and 26 local areas. The CPI is principally used as an economic indicator, as a deflator of other economic series, and as a means of adjusting income. In this report, Indicators 20 (Financial Aid) and 28 (Employment and Earnings) use the CPI.
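The school-year adjustment described above is a simple average of the 12 monthly index values from July of one year through June of the next. A minimal sketch, using illustrative placeholder values rather than actual BLS data:

```python
# Convert monthly CPI-U values to a school-year (July-June) average.
# The monthly index values below are illustrative placeholders, not actual BLS data.
monthly_cpi = {(2014, m): 236.0 + 0.2 * m for m in range(7, 13)}          # Jul-Dec 2014
monthly_cpi.update({(2015, m): 237.5 + 0.2 * m for m in range(1, 7)})     # Jan-Jun 2015

# The 2014-15 school-year CPI is the mean of the 12 monthly figures.
school_year = [monthly_cpi[(2014, m)] for m in range(7, 13)] + \
              [monthly_cpi[(2015, m)] for m in range(1, 7)]
sy_average = sum(school_year) / len(school_year)
```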

Further information on consumer price indexes may be obtained from:

Bureau of Labor Statistics
U.S. Department of Labor
2 Massachusetts Avenue NE
Washington, DC 20212
http://www.bls.gov/cpi

Census Bureau

American Community Survey

The Census Bureau introduced the American Community Survey (ACS) in 1996. Fully implemented in 2005, it provides a large monthly sample of demographic, socioeconomic, and housing data comparable in content to the Long Forms of the Decennial Census up to and including the 2000 long form. Aggregated over time, these data serve as a replacement for the Long Form of the Decennial Census. The survey includes questions mandated by federal law, federal regulations, and court decisions.

Since 2011, the survey has been mailed to approximately 295,000 addresses in the United States and Puerto Rico each month, or about 3.5 million addresses annually. A larger proportion of addresses in small governmental units (e.g., American Indian reservations, small counties, and towns) also receive the survey. The monthly sample size is designed to approximate the ratio used in the 2000 Census, which requires more intensive distribution in these areas. The ACS covers the U.S. resident population, which includes the entire civilian, noninstitutionalized population; incarcerated persons; institutionalized persons; and the active duty military who are in the United States. In 2006, the ACS began interviewing residents in group quarter facilities. Institutionalized group quarters include adult and juvenile correctional facilities, nursing facilities, and other health care facilities. Noninstitutionalized group quarters include college and university housing, military barracks, and other noninstitutional facilities such as workers and religious group quarters and temporary shelters for the homeless.

National-level data from the ACS are available from 2000 onward. The ACS produces 1-year estimates for jurisdictions with populations of 65,000 and over, 3-year estimates for jurisdictions with populations of 20,000 or over, and 5-year estimates for jurisdictions with smaller populations. The 2014 1-year estimates used data collected between January 1, 2014, and December 31, 2014, and the 2010–14 5-year estimates used data collected between January 1, 2010, and December 31, 2014. The ACS produced 3-year estimates (for jurisdictions with populations of 20,000 or over) for the periods 2005–07, 2006–08, 2007–09, 2008–10, 2009–11, 2010–12, and 2011–13. Three-year estimates for these periods will continue to be available to data users, but no further 3-year estimates will be produced.

Further information about the ACS is available at http://www.census.gov/acs/www/.

Current Population Survey

The Current Population Survey (CPS) is a monthly survey of about 60,000 households conducted by the U.S. Census Bureau for the Bureau of Labor Statistics. The CPS is the primary source of labor force statistics for the U.S. noninstitutionalized population (e.g., it excludes military personnel and their families living on bases and inmates of correctional institutions). In addition, supplemental questionnaires are used to provide further information about the U.S. population. Specifically, in October, detailed questions regarding school enrollment and school characteristics are asked. In March, detailed questions regarding income are asked.

The current sample design, introduced in July 2001, includes about 72,000 households. Each month about 58,900 of the 72,000 households are eligible for interview, and of those, 7 to 10 percent are not interviewed because of temporary absence or unavailability. Information is obtained each month from those in the household who are 15 years of age and older, and demographic data are collected for children 0–14 years of age. In addition, supplemental questions regarding school enrollment are asked about eligible household members ages 3 and older in the October survey. Prior to July 2001, data were collected in the CPS from about 50,000 dwelling units. The samples are initially selected based on the decennial census files and are periodically updated to reflect new housing construction.

A major redesign of the CPS was implemented in January 1994 to improve the quality of the data collected. Survey questions were revised, new questions were added, and computer-assisted interviewing methods were used for the survey data collection. Further information about the redesign is available in Current Population Survey, October 1995: (School Enrollment Supplement) Technical Documentation at http://www.census.gov/prod/techdoc/cps/cpsoct95.pdf.

Caution should be used when comparing data from 1994 through 2001 with data from 1993 and earlier. Data from 1994 through 2001 reflect 1990 census-based population controls, while data from 1993 and earlier reflect 1980 or earlier census-based population controls. Changes in population controls generally have relatively little impact on summary measures such as means, medians, and percentage distributions. They can have a significant impact on population counts. For example, use of the 1990 census-based population controls resulted in about a 1 percent increase in the civilian noninstitutional population and in the number of families and households. Thus, estimates of levels for data collected in 1994 and later years will differ from those for earlier years by more than what could be attributed to actual changes in the population. These differences could be disproportionately greater for certain subpopulation groups than for the total population.

Beginning in 2003, the race/ethnicity questions were expanded to collect information on people of two or more races, and Native Hawaiian/Pacific Islander data are collected separately from Asian data. The questions were also reworded to make clear that self-reported data on race/ethnicity should reflect the race/ethnicity with which the respondent identifies, rather than what may be written in official documentation.

The estimation procedure employed for monthly CPS data involves inflating weighted sample results to independent estimates of characteristics of the civilian noninstitutional population in the United States by age, sex, and race. These independent estimates are based on statistics from decennial censuses; statistics on births, deaths, immigration, and emigration; and statistics on the population in the armed services. Generalized standard error tables are provided in the Current Population Reports; methods for deriving standard errors can be found within the CPS technical documentation at http://www.census.gov/programs-surveys/cps/technical-documentation/complete.html. The CPS data are subject to both nonsampling and sampling errors.
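The step of "inflating weighted sample results to independent estimates" is, in essence, a ratio adjustment: within each demographic cell, base weights are scaled so their sum matches the independent population control for that cell. A minimal sketch under assumed toy data (cell names, weights, and control totals are hypothetical, not CPS figures):

```python
# Ratio adjustment to independent population controls, as described above:
# within each demographic cell, scale base weights so the weighted total
# matches the cell's population control. All numbers are hypothetical.
base_weights = {"men_20_24": [1500.0, 1480.0], "women_20_24": [1520.0]}
controls = {"men_20_24": 6_000_000.0, "women_20_24": 3_100_000.0}

adjusted = {}
for cell, weights in base_weights.items():
    factor = controls[cell] / sum(weights)        # ratio adjustment factor
    adjusted[cell] = [w * factor for w in weights]

# Each cell's adjusted weights now sum exactly to its population control.
```

The actual CPS procedure involves several such adjustment stages (noninterview, first-stage, and second-stage adjustments); this sketch shows only the basic ratio step.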

Prior to 2009, standard errors were estimated using the generalized variance function. The generalized variance function is a simple model that expresses the variance as a function of the expected value of a survey estimate. Beginning with March 2009 CPS data, standard errors were estimated using replicate weight methodology. Those interested in using CPS household-level supplement replicate weights to calculate variances may refer to Estimating Current Population Survey (CPS) Household-Level Supplement Variances Using Replicate Weights at http://thedataweb.rm.census.gov/pub/cps/supps/HH-level_Use_of_the_Public_Use_Replicate_Weight_File.doc.

Further information on the CPS may be obtained from:

Education and Social Stratification Branch
Population Division
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
http://www.census.gov/cps

Dropouts

Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population ages 3 years and over as part of the monthly basic survey on labor force participation. In addition to gathering the information on school enrollment, with the limitations on accuracy as noted below under "School Enrollment," the survey data permit calculations of dropout rates. Both status and event dropout rates are tabulated from the October CPS. Event rates describe the proportion of students who leave school each year without completing a high school program. Status rates provide cumulative data on dropouts among all young adults within a specified age range. Status rates are higher than event rates because they include all dropouts ages 16 through 24, regardless of when they last attended school.
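The distinction between the two rates can be made concrete with toy microdata: the event rate conditions on having been enrolled the prior year, while the status rate counts everyone in the age range who lacks a credential and is not enrolled, whenever they left school. A hypothetical sketch (all records and field positions are invented for illustration):

```python
# Toy illustration of event vs. status dropout rates; all records are invented.
# Each person: (age, enrolled_last_year, enrolled_now, has_hs_credential)
people = [
    (17, True,  True,  False),   # still enrolled
    (17, True,  True,  False),   # still enrolled
    (17, True,  False, False),   # left school this year (event dropout)
    (19, False, False, False),   # left school in an earlier year (status dropout)
    (20, False, False, True),    # high school graduate
]

# Event rate: of students enrolled the prior year without a credential,
# the share who left school during the year.
at_risk = [p for p in people if p[1] and not p[3]]
event_rate = sum(not p[2] for p in at_risk) / len(at_risk)                # 1/3

# Status rate: of all 16- to 24-year-olds, the share not currently enrolled
# and holding no high school credential, regardless of when they left.
cohort = [p for p in people if 16 <= p[0] <= 24]
status_rate = sum(not p[2] and not p[3] for p in cohort) / len(cohort)    # 2/5
```

In this toy example the status rate (0.40) exceeds the event rate (0.33) because it also counts the earlier dropout, mirroring the relationship described in the text.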

In addition to other survey limitations, dropout rates may be affected by survey coverage and exclusion of the institutionalized population. The incarcerated population has grown more rapidly and has a higher dropout rate than the general population. Dropout rates for the total population might be higher than those for the noninstitutionalized population if the prison and jail populations were included in the dropout rate calculations. On the other hand, if military personnel, who tend to be high school graduates, were included, it might offset some or all of the impact from the theoretical inclusion of the jail and prison populations.

Another area of concern with tabulations involving young people in household surveys is the relatively low coverage ratio compared to older age groups. CPS undercoverage results from missed housing units and missed people within sample households. Overall CPS undercoverage for October 2015 is estimated to be about 11 percent. CPS coverage varies with age, sex, and race. Generally, coverage is larger for females than for males and larger for non-Blacks than for Blacks. This differential coverage is a general problem for most household-based surveys. Further information on CPS methodology may be found in the technical documentation at http://www.census.gov/cps.

Further information on the calculation of dropouts and dropout rates may be obtained from Trends in High School Dropout and Completion Rates in the United States: 2013 (NCES 2016-117) at https://nces.ed.gov/pubs2016/2016117rev.pdf or by contacting:

Joel McFarland
Annual Reports and Information Staff
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
joel.mcfarland@ed.gov

School Enrollment

Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population ages 3 years and over. Prior to 2001, the October supplement consisted of approximately 47,000 interviewed households. Beginning with the October 2001 supplement, the sample was expanded by 9,000 to a total of approximately 56,000 interviewed households. The main sources of nonsampling variability in the responses to the supplement are those inherent in the survey instrument. The question of current enrollment may not be answered accurately for various reasons. Some respondents may not know current grade information for every student in the household, a problem especially prevalent for households with members in college or in nursery school. Confusion over college credits or hours taken by a student may make it difficult to determine the year in which the student is enrolled. Problems may occur with the definition of nursery school (a group or class organized to provide educational experiences for children) where respondents' interpretations of "educational experiences" vary.

For the October 2015 basic CPS, the household-level nonresponse rate was 12.9 percent. The person-level nonresponse rate for the school enrollment supplement was an additional 8.9 percent. Since the basic CPS nonresponse rate is a household-level rate and the school enrollment supplement nonresponse rate is a person-level rate, these rates cannot be combined to derive an overall nonresponse rate. Nonresponding households may have fewer persons than interviewed ones, so combining these rates may lead to an overestimate of the true overall nonresponse rate for persons for the school enrollment supplement.

Further information on CPS methodology may be obtained from http://www.census.gov/cps.

Further information on the CPS School Enrollment Supplement may be obtained from:

Education and Social Stratification Branch
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
https://www.census.gov/topics/education/school-enrollment.html

Decennial Census, Population Estimates, and Population Projections

The decennial census is a universe survey mandated by the U.S. Constitution. It is a questionnaire sent to every household in the country, and it is composed of seven questions about the household and its members (name, sex, age, relationship, Hispanic origin, race, and whether the housing unit is owned or rented). The Census Bureau also produces annual estimates of the resident population by demographic characteristics (age, sex, race, and Hispanic origin) for the nation, states, and counties, as well as national and state projections for the resident population. The reference date for population estimates is July 1 of the given year. With each new issue of July 1 estimates, the Census Bureau revises estimates for each year back to the last census. Previously published estimates are superseded and archived.

Census respondents self-report race and ethnicity. The race questions on the 1990 and 2000 censuses differed in some significant ways. In 1990, the respondent was instructed to select the one race "that the respondent considers himself/herself to be," whereas in 2000, the respondent could select one or more races that the person considered himself or herself to be. American Indian, Eskimo, and Aleut were three separate race categories in 1990; in 2000, the American Indian and Alaska Native categories were combined, with an option to write in a tribal affiliation. This write-in option was provided only for the American Indian category in 1990. There was a combined Asian and Pacific Islander race category in 1990, but the groups were separated into two categories in 2000.

The census question on ethnicity asks whether the respondent is of Hispanic origin, regardless of the race option(s) selected; thus, persons of Hispanic origin may be of any race. In the 2000 census, respondents were first asked, "Is this person Spanish/Hispanic/Latino?" and then given the following options: No, not Spanish/Hispanic/Latino; Yes, Puerto Rican; Yes, Mexican, Mexican American, Chicano; Yes, Cuban; and Yes, other Spanish/Hispanic/Latino (with space to print the specific group). In the 2010 census, respondents were asked "Is this person of Hispanic, Latino, or Spanish origin?" The options given were No, not of Hispanic, Latino, or Spanish origin; Yes, Mexican, Mexican Am., Chicano; Yes, Puerto Rican; Yes, Cuban; and Yes, another Hispanic, Latino, or Spanish origin—along with instructions to print "Argentinean, Colombian, Dominican, Nicaraguan, Salvadoran, Spaniard, and so on" in a specific box.

The 2000 and 2010 censuses each asked the respondent "What is this person's race?" and allowed the respondent to select one or more options. The options provided were largely the same in both the 2000 and 2010 censuses: White; Black, African American, or Negro; American Indian or Alaska Native (with space to print the name of enrolled or principal tribe); Asian Indian; Japanese; Native Hawaiian; Chinese; Korean; Guamanian or Chamorro; Filipino; Vietnamese; Samoan; Other Asian; Other Pacific Islander; and Some other race. The last three options included space to print the specific race. Two significant differences between the 2000 and 2010 census questions on race were that no race examples were provided for the "Other Asian" and "Other Pacific Islander" responses in 2000, whereas the race examples of "Hmong, Laotian, Thai, Pakistani, Cambodian, and so on" and "Fijian, Tongan, and so on," were provided for the "Other Asian" and "Other Pacific Islander" responses, respectively, in 2010.

The census population estimates program modified the enumerated population from the 2010 census to produce the population estimates base for 2010 and onward. As part of the modification, the Census Bureau recoded the "Some other race" responses from the 2010 census to one or more of the five OMB race categories used in the estimates program (for more information, see http://www.census.gov/programs-surveys/popest/technical-documentation/methodology.html).

Further information on the decennial census may be obtained from http://www.census.gov.

Centers for Disease Control and Prevention

Youth Risk Behavior Surveillance System

The Youth Risk Behavior Surveillance System (YRBSS) is an epidemiological surveillance system developed by the Centers for Disease Control and Prevention (CDC) to monitor the prevalence of youth behaviors that most influence health. The YRBSS focuses on priority health-risk behaviors established during youth that result in the most significant mortality, morbidity, disability, and social problems during both youth and adulthood. The YRBSS includes a national school-based Youth Risk Behavior Survey (YRBS), as well as surveys conducted in states and large urban school districts. Indicator 15 (Safety at School) in this report uses 2013 YRBSS data.

The national YRBS uses a three-stage cluster sampling design to produce a nationally representative sample of students in grades 9–12 in the United States. The target population consisted of all public and private school students in grades 9–12 in the 50 states and the District of Columbia. The first-stage sampling frame included selecting primary sampling units (PSUs) from strata formed on the basis of urbanization and the relative percentage of Black and Hispanic students in the PSU. These PSUs are counties; subareas of large counties; or groups of smaller, adjacent counties. At the second stage, schools were selected with probability proportional to school enrollment size.

The final stage of sampling consisted of randomly selecting classes within each chosen school and grade. For YRBS data, a full nonresponse bias analysis has not been done because the data necessary to do the analysis are not available. The weights were developed to adjust for nonresponse and the oversampling of Black and Hispanic students in the sample. The final weights were constructed so that only weighted proportions of students (not weighted counts of students) in each grade matched national population projections.

In the 2013 national survey, race/ethnicity was computed from two questions: (1) "Are you Hispanic or Latino?" (response options were "yes" and "no"), and (2) "What is your race?" (response options were "American Indian or Alaska Native," "Asian," "Black or African American," "Native Hawaiian or Other Pacific Islander," or "White"). For the second question, students could select more than one response option. For this report, students were classified as "Hispanic" if they answered "yes" to the first question, regardless of how they answered the second question. Students who answered "no" to the first question and selected more than one race/ethnicity in the second category were classified as "More than one race." Students who answered "no" to the first question and selected only one race/ethnicity were classified as that race/ethnicity. Race/ethnicity was classified as missing for students who did not answer the first question and for students who answered "no" to the first question but did not answer the second question.

Further information on the YRBSS may be obtained from:

Laura Kann
Division of Adolescent and School Health
National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention
Centers for Disease Control and Prevention
Mailstop E-75
1600 Clifton Road NE
Atlanta, GA 30329
(404) 718-8132
lkk1@cdc.gov
www.cdc.gov/yrbs

Bureau of Justice Statistics

A division of the U.S. Department of Justice Office of Justice Programs, the Bureau of Justice Statistics (BJS) collects, analyzes, publishes, and disseminates statistical information on crime, criminal offenders, victims of crime, and the operations of the justice system at all levels of government and internationally. It also provides technical and financial support to state governments for development of criminal justice statistics and information systems on crime and justice.

For information on the BJS, see www.ojp.usdoj.gov/bjs/.

National Crime Victimization Survey (NCVS)

The National Crime Victimization Survey (NCVS), administered for the U.S. Bureau of Justice Statistics (BJS) by the U.S. Census Bureau, is the nation's primary source of information on crime and the victims of crime. Initiated in 1972 and redesigned in 1992, the NCVS collects detailed information on the frequency and nature of the crimes of rape, sexual assault, robbery, aggravated and simple assault, theft, household burglary, and motor vehicle theft experienced by Americans and American households each year.

Readers should note that in 2003, in accordance with changes to the Office of Management and Budget's standards for the classification of federal data on race and ethnicity, the NCVS item on race/ethnicity was modified. A question on Hispanic origin is now followed by a new question on race. The new question about race allows the respondent to choose more than one race and delineates Asian as a separate category from Native Hawaiian or Other Pacific Islander.

NCVS-eligible households were selected using a stratified, multistage cluster design. In the first stage, the primary sampling units (PSUs), consisting of counties or groups of counties, were selected. In the second stage, smaller areas, called Enumeration Districts (EDs), were selected from each sampled PSU. Finally, from selected EDs, clusters of four households, called segments, were selected for interview. At each stage, the selection was done proportionate to population size in order to create a self-weighting sample. The final sample was augmented to account for households constructed after the decennial Census.

The first NCVS interview with a housing unit is conducted in person. Subsequent interviews are conducted by telephone, if possible. Households remain in the sample for 3 years and are interviewed seven times at 6-month intervals. After a household has been interviewed its seventh time, it is replaced by a new sample household.

Further information on the NCVS may be obtained from:

Barbara A. Oudekerk
Victimization Statistics Branch
Bureau of Justice Statistics
barbara.a.oudekerk@usdoj.gov
http://www.bjs.gov/

School Crime Supplement (SCS)

Created as a supplement to the NCVS and co-designed by the National Center for Education Statistics and Bureau of Justice Statistics, the School Crime Supplement (SCS) survey has been conducted in 1989, 1995, and biennially since 1999 to collect additional information about school-related victimizations on a national level. The SCS was designed to assist policymakers, as well as academic researchers and practitioners at federal, state, and local levels, to make informed decisions concerning crime in schools. The survey asks students a number of key questions about their experiences with and perceptions of crime and violence that occurred inside their school, on school grounds, on the school bus, or on the way to or from school. Indicator 15 (Safety at School) reports data from the 2013 SCS.

The SCS survey was conducted for a 6-month period from January through June in all households selected for the NCVS (see discussion above for information about the NCVS sampling design and changes to the race/ethnicity variable beginning in 2003). Within these households, the eligible respondents for the SCS were those household members who had attended school at any time during the 6 months preceding the interview, were enrolled in grades 6–12, and were not home schooled. In 2007, the questionnaire was changed and household members who attended school sometime during the school year of the interview were included. The age range of students covered in this report is 12–18 years of age. Eligible respondents were asked the supplemental questions in the SCS only after completing their entire NCVS interview. It should be noted that the first or unbounded NCVS interview has always been included in analysis of the SCS data and may result in the reporting of events outside of the requested reference period.

A total of about 5,700 students participated in the 2013 SCS. In the 2013 SCS, the household completion rate was 86 percent and the student completion rate was 60 percent. The overall unweighted SCS unit response rate (calculated by multiplying the household completion rate by the student completion rate) was about 51 percent in 2013.
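The overall unweighted unit response rate above is simply the product of the two stage-level completion rates, since a student could be interviewed only if the household had first completed the NCVS. A minimal sketch of that arithmetic, using the figures from the paragraph:

```python
# SCS 2013 overall unweighted unit response rate (figures from the text above).
household_completion = 0.86   # NCVS household completion rate
student_completion = 0.60     # SCS student completion rate

# Overall rate = product of the two stage-level rates.
overall = household_completion * student_completion
print(f"{overall * 100:.1f}%")   # prints 51.6%
```

The exact product, 51.6 percent, rounds to the "about 51 percent" figure cited in the text (the published completion rates are themselves rounded).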

There are two types of nonresponse: unit and item nonresponse. NCES requires that any stage of data collection within a survey that has a unit base-weighted response rate of less than 85 percent be evaluated for the potential magnitude of unit nonresponse bias before the data or any analysis using the data may be released (U.S. Department of Education 2003). Due to the low unit response rate in 2005, 2007, 2009, 2011, and 2013, a unit nonresponse bias analysis was done. Unit response rates indicate how many sampled units have completed interviews. Because interviews with students could only be completed after households had responded to the NCVS, the unit completion rate for the SCS reflects both the household interview completion rate and the student interview completion rate. Nonresponse can greatly affect the strength and application of survey data by leading to an increase in variance as a result of a reduction in the actual size of the sample and can produce bias if the nonrespondents have characteristics of interest that are different from the respondents.

For nonresponse bias to occur, subgroups must differ both in their response rates and in their responses to particular survey variables. The magnitude of unit nonresponse bias is therefore determined by the response rate and by the differences between respondents and nonrespondents on key survey variables. Although the bias analysis cannot measure nonresponse bias directly, since the SCS is a sample survey and it is not known how nonrespondents would have answered, the SCS sampling frame includes four key student or school characteristic variables for which data are known for both respondents and nonrespondents: sex, race/ethnicity, household income, and urbanicity, all of which are associated with student victimization. To the extent that response rates differ across these groups, nonresponse bias is a concern.
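The relationship described above is often written as the standard unit nonresponse bias identity: the bias of the respondent mean equals the nonresponse proportion times the difference between the respondent and nonrespondent means. The following is an illustrative sketch with invented numbers, not an official SCS computation:

```python
# Illustrative sketch of the standard unit nonresponse bias identity:
#   bias(ybar_r) = (1 - r) * (ybar_r - ybar_nr)
# where r is the response rate, ybar_r the respondent mean, and
# ybar_nr the nonrespondent mean. All values below are hypothetical.
def nonresponse_bias(response_rate, respondent_mean, nonrespondent_mean):
    """Bias of the respondent mean relative to the full-sample mean."""
    return (1 - response_rate) * (respondent_mean - nonrespondent_mean)

# If nonrespondents do not differ on the variable, there is no bias,
# no matter how low the response rate is.
print(nonresponse_bias(0.51, 0.10, 0.10))
# A group difference combined with nonresponse produces bias.
print(nonresponse_bias(0.51, 0.12, 0.08))
```

This is why the text stresses that a low response rate alone does not establish bias; it must be paired with a respondent/nonrespondent difference on the survey variable.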

In 2013, the analysis of unit nonresponse bias found evidence of potential bias for the age and region variables in the SCS respondent sample. Students age 14 and students from the West showed percentage bias exceeding 5 percent; however, both subgroups had the highest response rates within their respective categories. All other subgroups evaluated showed less than 1 percent nonresponse bias, with differences of between 0.3 and 2.6 percent between the responding population and the eligible population.

Response rates for most SCS survey items were high in all survey years, with typically over 97 percent of all eligible respondents answering a given item, so there is little potential for item nonresponse bias for most items in the survey. Weights were developed to compensate for differential probabilities of selection and for nonresponse. The weighted data permit inferences about the eligible population of students enrolled in school in each SCS data year.
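The weighting described above typically involves two steps: a base weight equal to the inverse of each unit's selection probability, and a nonresponse adjustment that inflates respondent weights so they also represent nonrespondents in the same weighting class. A hypothetical sketch of those two steps (the probabilities and class structure are invented for illustration, not the actual SCS weighting scheme):

```python
# Hypothetical sketch of two common weighting steps: a base weight from
# the selection probability, then a nonresponse adjustment within a
# single weighting class. All numbers are invented for illustration.
def base_weight(selection_probability):
    """Inverse-probability base weight."""
    return 1.0 / selection_probability

def nonresponse_adjusted(weights, responded):
    """Inflate respondent weights so they represent nonrespondents too."""
    total = sum(weights)
    respondent_total = sum(w for w, r in zip(weights, responded) if r)
    factor = total / respondent_total
    return [w * factor for w, r in zip(weights, responded) if r]

weights = [base_weight(p) for p in (0.01, 0.01, 0.02)]  # [100.0, 100.0, 50.0]
responded = [True, False, True]
adjusted = nonresponse_adjusted(weights, responded)
print(adjusted)  # respondent weights now sum to the full-sample total
```

Note that the adjustment preserves the total weight of the class, so weighted estimates still refer to the full eligible population rather than to respondents alone.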

Further information about the SCS may be obtained from:

Rachel Hansen
Sample Surveys Division
Cross-Sectional Surveys Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
(202) 502-7486
rachel.hansen@ed.gov
http://nces.ed.gov/programs/crime