Appendix A: Technical Notes

Sources of Data

This section briefly describes each of the datasets used in this report: the School-Associated Violent Death Surveillance System, the Supplementary Homicide Reports, the Web-based Injury Statistics Query and Reporting System Fatal, the National Crime Victimization Survey, the School Crime Supplement to the National Crime Victimization Survey, the Youth Risk Behavior Surveillance System, the Schools and Staffing Survey, the National Teacher and Principal Survey, the School Survey on Crime and Safety, the Fast Response Survey System survey of school safety and discipline, EDFacts, and the Program for International Student Assessment. Directions for obtaining more information are provided at the end of each description.


School-Associated Violent Death Surveillance System (SAVD-SS)

The School-Associated Violent Death Surveillance System (SAVD-SS) was developed by the Centers for Disease Control and Prevention in conjunction with the U.S. Department of Education and the U.S. Department of Justice. The system contains descriptive data on all school-associated violent deaths in the United States, including all homicides, suicides, or legal intervention deaths for which the fatal injury occurred on the campus of a functioning elementary or secondary school; while the victim was on the way to or from regular sessions at such a school; or while attending or on the way to or from an official school-sponsored event. Victims of such incidents include students, as well as nonstudents (e.g., students' parents, community residents, and school staff). SAVD-SS includes data on the school, event, victim(s), and offender(s). SAVD-SS uses these data to describe the epidemiology of school-associated violent deaths, identify common features of these deaths, estimate the rate of school-associated violent deaths in the United States, and identify potential risk factors for these deaths. The SAVD-SS has collected data from July 1, 1992 through the present.

The SAVD-SS uses a four-step process to identify and collect data on school-associated violent deaths. Cases are initially identified through a systematic search of the LexisNexis newspaper and media database. Then law enforcement officials from the office that investigated the deaths are contacted to confirm the details of the case and to determine if the event meets the case definition. Once a case is confirmed, a law enforcement official and a school official are interviewed regarding details about the school, event, victim(s), and offender(s). A copy of the full law enforcement report is also sought for each case. The information obtained on schools includes school demographics, attendance/absentee rates, suspensions/expulsions and mobility, school history of weapon-carrying incidents, security measures, violence prevention activities, school response to the event, and school policies about weapon carrying. Event information includes the location of injury, the context of injury (while classes were being held, during break, etc.), motives for injury, method of injury, and school and community events happening around the time period. Information obtained on victim(s) and offender(s) includes demographics, circumstances of the event (date/time, alcohol or drug use, number of persons involved), types and origins of weapons, criminal history, psychological risk factors, school-related problems, extracurricular activities, and family history, including structure and stressors.

One hundred five school-associated violent deaths were identified from July 1, 1992, to June 30, 1994 (Kachur et al. 1996). A more recent SAVD-SS study identified 253 school-associated violent deaths between July 1, 1994, and June 30, 1999 (Anderson et al. 2001). Other publications using SAVD-SS data have described how the number of events changes during the school year (Centers for Disease Control and Prevention 2001), the source of the firearms used in these events (Reza et al. 2003), suicides that were associated with schools (Kauffman et al. 2004), and trends in school-associated homicide from July 1, 1992, to June 30, 2006 (Centers for Disease Control and Prevention 2008). For several reasons, all data for years from 1999 to the present are flagged as preliminary. For some recent cases, the interviews with school and law enforcement officials to verify case details have not been completed, or law enforcement reports have not been received; the details learned during these interviews and the data abstracted from law enforcement reports can occasionally change the classification of a case. New cases may also be identified as the scope of the media files used for case identification expands, through independent case-finding efforts that focus on nonmedia sources of information and that sometimes uncover cases missed in earlier data years, or while the law enforcement and school interviews for known cases are being conducted. For additional information about SAVD-SS, contact:

Kristin Holland, Ph.D., M.P.H.
Principal Investigator & Behavioral Scientist
School-Associated Violent Death Surveillance System
Division of Violence Prevention
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
(770) 488-3954
KHolland@cdc.gov

Supplementary Homicide Reports (SHR)

Supplementary Homicide Reports (SHR) are a part of the Uniform Crime Reporting (UCR) program of the Federal Bureau of Investigation (FBI). These reports provide incident-level information on criminal homicides, including situation type (e.g., number of victims, number of offenders, and whether offenders are known); the age, sex, and race of victims and offenders; weapon used; circumstances of the incident; and the relationship of the victim to the offender. The data are provided monthly to the FBI by local law enforcement agencies participating in the UCR program. The data include murders and nonnegligent manslaughters in the United States from January 1980 to December 2015; that is, negligent manslaughters and justifiable homicides have been eliminated from the data. Based on law enforcement agency reports, the FBI estimates that 670,137 murders (including nonnegligent manslaughters) were committed from 1980 to 2015. Agencies provided detailed information on 599,678 of these homicide victims. SHR estimates in this report have been revised from those in previously published reports.

About 90 percent of homicides are included in the SHR program; weights can be applied to the data to correct for the missing victim reports. Estimates from the SHR program used in this report were generated by the Bureau of Justice Statistics (BJS), which developed weights to compensate for the roughly 10 percent of homicides per year, on average, that were not reported to the SHR data file. The development of the set of annual weights is a three-step process.

Each year the FBI's annual Crime in the United States report presents a national estimate of murder victims in the United States and estimates of the number of murder victims in each of the 50 states and the District of Columbia. The first-stage weight uses the FBI's annual estimates of murder victims in each state and the number of murder victims from that state found in the annual SHR database.

Specifically, the first-stage weight for victims in state S in year Y is—

(FBI's estimate of murder victims in state S in year Y) ÷ (Number of murder victims in the SHR file from state S in year Y)

For complete reporting states, this first-stage weight is equal to 1. For partial reporting states, this weight is greater than 1. For states with a first-stage weight greater than 2—that is, the state reported SHR data for less than half of the FBI's estimated number of murder victims in the state—the first-stage weight is set to 1.

The second-stage weight uses the FBI's annual national estimates of murder victims in the United States and the sum of the first-stage weights for each state. The second-stage weight for victims in all states in year Y is—

(FBI's estimate of murder victims in the United States in year Y) ÷ (Sum of the first-stage weights of all states in year Y)

The third step in the process is to calculate the final annual victim-level SHR weight. This weight, which is used to develop national estimates of the attributes of murder victims, is—

SHR weight(year Y) = First-stage weight(year Y) × Second-stage weight(year Y)

Conceptually, the first-stage weight uses a state's own reported SHR records to represent all murder victims in that state, as long as at least 50 percent of the estimated number of murder victims in that state has a record in the SHR. The sum of the first-stage weights then equals the sum of the total number of all murder victims in states with at least 50 percent SHR coverage and the simple count of those victims from the other reporting states. The second-stage weight is used to inflate the first-stage weights so that the weight derived from the product of the first- and second-stage weights represents all murder victims in that year in the United States. The difference between the sum of the first-stage weights and the FBI's annual national estimate of murder victims is the unreported murder victims in states with less than 50 percent SHR coverage and the murder victims in states that report no data to the SHR in that year. The second-stage weight compensates for this difference by assuming that the attributes of the nonreported victims are similar to the attributes of weighted murder victims in that year's SHR database.
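To make the three steps concrete, the following is a minimal sketch in Python of the weighting logic described above. The state counts, national estimate, and variable names are hypothetical and used only for illustration; this is not BJS's production procedure or actual FBI/SHR data.

```python
# Minimal sketch of the three-step SHR weighting described above.
# All counts below are hypothetical; they are not actual FBI or SHR figures.

fbi_state_estimates = {"A": 500, "B": 400, "C": 300}  # FBI estimate of murder victims, by state
shr_state_counts = {"A": 500, "B": 300, "C": 100}     # victims with records in the SHR file, by state
fbi_national_estimate = 1250                          # FBI national estimate of murder victims

# Step 1: first-stage weight for each state; set to 1 when SHR coverage is below 50 percent
# (i.e., when the ratio of the FBI estimate to the SHR count exceeds 2).
first_stage = {}
for state, shr_count in shr_state_counts.items():
    ratio = fbi_state_estimates[state] / shr_count
    first_stage[state] = ratio if ratio <= 2 else 1.0

# Step 2: second-stage weight inflates the sum of the first-stage weights (summed over
# victim records, each record carrying its state's first-stage weight) to the national estimate.
sum_first_stage = sum(first_stage[s] * shr_state_counts[s] for s in shr_state_counts)
second_stage = fbi_national_estimate / sum_first_stage

# Step 3: the final victim-level SHR weight is the product of the two stage weights.
shr_weights = {s: first_stage[s] * second_stage for s in shr_state_counts}
print(shr_weights)
```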

The weighting procedure outlined above assumes that the characteristics of unreported homicide incidents are similar to those of reported incidents. There is no comprehensive way to assess the validity of this assumption. There is also one exception to this weighting process: some states did not report any data in some years. For example, Florida reported no incidents to the SHR program for the years 1988 through 1991 or from 1997 through 2015. The annual national weights, however, attempt to compensate for those few instances in which entire states did not report any data. For additional information about the SHR program, contact:

Communications Unit
Criminal Justice Information Services Division
Federal Bureau of Investigation
Module D3
1000 Custer Hollow Road
Clarksburg, WV 26306
(304) 625-4995
cjis_comm@leo.gov

Web-based Injury Statistics Query and Reporting System Fatal (WISQARS™ Fatal)

WISQARS™ Fatal provides mortality data related to injury. The mortality data reported in WISQARS™ Fatal come from death certificate data reported to the National Center for Health Statistics (NCHS), Centers for Disease Control and Prevention. Data include causes of death reported by attending physicians, medical examiners, and coroners and demographic information about decedents reported by funeral directors, who obtain that information from family members and other informants. NCHS collects, compiles, verifies, and prepares these data for release to the public. The data provide information about unintentional injuries, homicide, and suicide as leading causes of death, how common they are, and whom they affect. These data are intended for a broad audience—the public, the media, public health practitioners and researchers, and public health officials—to increase their knowledge of injury.

WISQARS™ Fatal mortality reports provide tables of the total numbers of injury-related deaths and the death rates per 100,000 U.S. population. The reports list deaths according to cause (mechanism) and intent (manner) of injury by state, race, Hispanic origin, sex, and age groupings. For more information on WISQARS™ Fatal, contact:

National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
Mailstop K65
4770 Buford Highway NE
Atlanta, GA 30341-3724
(770) 488-1506
ohcinfo@cdc.gov
http://www.cdc.gov/injury/wisqars/index.html

National Crime Victimization Survey (NCVS)

The National Crime Victimization Survey (NCVS), administered for the U.S. Bureau of Justice Statistics (BJS) by the U.S. Census Bureau, is the nation's primary source of information on crime and the victims of crime. Initiated in 1972 and redesigned in 1992, the NCVS collects detailed information on the frequency and nature of the crimes of rape, sexual assault, robbery, aggravated and simple assault, theft, household burglary, and motor vehicle theft experienced by Americans and American households each year. The survey measures both crimes reported to police and crimes not reported to the police.

NCVS estimates reported in Indicators of School Crime and Safety: 2013 and beyond may differ from those in previously published reports because a small number of victimizations, referred to as series victimizations, are counted using a new strategy. High-frequency repeat victimizations, or series victimizations, are situations in which six or more similar but separate victimizations occur with such frequency that the victim is unable to recall each individual event or describe each event in detail. As part of ongoing research efforts associated with the redesign of the NCVS, BJS investigated ways to include series victimizations in estimates of criminal victimization, which would result in more accurate estimates of victimization. BJS has decided to include series victimizations using the victim's estimate of the number of times the victimization occurred over the past 6 months, capping the number of victimizations within each series at 10. This strategy balances the desire to estimate national rates and account for the experiences of persons who have been subjected to repeat victimizations against the desire to minimize the estimation errors that can occur when repeat victimizations are reported. Including series victimizations in national rates results in rather large increases in the level of violent victimization; however, trends in violence are generally similar regardless of whether series victimizations are included. For more information on the new counting strategy and supporting research, see Methods for Counting High Frequency Repeat Victimizations in the National Crime Victimization Survey (Lauritsen et al. 2012) at https://www.bjs.gov/content/pub/pdf/mchfrv.pdf.
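As a rough illustration of the capping rule described above, the short sketch below counts a single series report using the victim's estimate of how many times it occurred, capped at 10. The function name and example value are invented for this illustration.

```python
SERIES_CAP = 10  # maximum number of victimizations counted for a single series report

def count_series_victimizations(reported_occurrences):
    """Count one series report using the victim's estimate of how many times the
    victimization occurred during the 6-month reference period, capped at 10."""
    return min(reported_occurrences, SERIES_CAP)

# Example: a victim reports a series that occurred about 15 times; 10 are counted.
print(count_series_victimizations(15))  # prints 10
```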

Readers should note that in 2003, in accordance with changes to the U.S. Office of Management and Budget's standards for classifying federal data on race and ethnicity, the NCVS item on race/ethnicity was modified. A question on Hispanic origin is now followed by a new question about race. The new question about race allows the respondent to choose more than one race and delineates Asian as a separate category from Native Hawaiian or Other Pacific Islander. An analysis conducted by the Demographic Surveys Division at the U.S. Census Bureau showed that the new race question had very little impact on the aggregate racial distribution of NCVS respondents, with one exception: There was a 1.6 percentage point decrease in the percentage of respondents who reported themselves as White. Due to changes in race/ethnicity categories, comparisons of race/ethnicity across years should be made with caution.

Every 10 years, the NCVS sample is redesigned to reflect changes in the population. In the 2006 NCVS, changes in the sample design and survey methodology affected the survey's estimates. Caution should be used when comparing 2006 estimates to estimates of other years. For more information on the 2006 NCVS data, see Criminal Victimization, 2006 (Rand and Catalano 2007) at https://bjs.gov/content/pub/pdf/cv06.pdf, the technical notes at http://www.bjs.gov/content/pub/pdf/cv06tn.pdf, and Criminal Victimization, 2007 (Rand 2008) at https://www.bjs.gov/content/pub/pdf/cv07.pdf. The sample redesign also impacted the comparability of 2016 victimization estimates to estimates for earlier years. Caution should be used when making comparisons to earlier years. For more information, see Criminal Victimization, 2016 (available at https://www.bjs.gov/content/pub/pdf/cv16.pdf).

The number of NCVS-eligible households in the 2016 sample was approximately 173,289. Households were selected using a stratified, multistage cluster design. In the first stage, the primary sampling units (PSUs), consisting of counties or groups of counties, were selected. In the second stage, smaller areas, called Enumeration Districts (EDs), were selected from each sampled PSU. Finally, from selected EDs, clusters of four households, called segments, were selected for interviews. At each stage, the selection was done proportionate to population size in order to create a self-weighting sample. The final sample was augmented to account for households constructed after the decennial Census. Within each sampled household, the U.S. Census Bureau interviewer attempts to interview all household members age 12 and older to determine whether they have been victimized by the measured crimes during the 6 months preceding the interview.
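The paragraph above describes selection proportionate to population size at each stage. The sketch below is a simplified, hypothetical illustration of probability-proportional-to-size selection; the actual NCVS design uses more elaborate stratified, without-replacement procedures.

```python
import random

# Hypothetical PSUs (counties or groups of counties) with population sizes.
psus = {"PSU_A": 120_000, "PSU_B": 80_000, "PSU_C": 50_000, "PSU_D": 250_000}

def select_pps(units, k):
    """Select k units with probability proportional to size (with replacement, for
    simplicity); real multistage designs use without-replacement PPS schemes."""
    names = list(units)
    sizes = [units[name] for name in names]
    return random.choices(names, weights=sizes, k=k)

print(select_pps(psus, 2))
```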

The first NCVS interview with a housing unit is conducted in person. Subsequent interviews are conducted by telephone, if possible. All persons age 12 and older are interviewed every 6 months. Households remain in the sample for 3 years and are interviewed seven times at 6-month intervals. From the survey's inception through 2005, the initial interview at each sample unit was used only to bound future interviews (that is, to establish a time frame and avoid duplicate counting of crimes in subsequent interviews). Beginning in 2006, data from the initial interview have been adjusted to account for the effects of bounding and have been included in the survey estimates. After a household has been interviewed for the seventh time, it is replaced by a new sample household. In 2016, the household response rate was about 78 percent, and the completion rate for persons within households was about 84 percent. Weights were developed to permit estimates for the total U.S. population 12 years and older. For more information about the NCVS, contact:

Barbara A. Oudekerk
Victimization Statistics Branch
Bureau of Justice Statistics
Barbara.A.Oudekerk@usdoj.gov
http://www.bjs.gov/

School Crime Supplement (SCS)

Created as a supplement to the NCVS and co-designed by the National Center for Education Statistics and Bureau of Justice Statistics, the School Crime Supplement (SCS) survey has been conducted in 1989, 1995, and biennially since 1999 to collect additional information about school-related victimizations on a national level. This report includes data from the 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, and 2015 collections. The 1989 data are not included in this report as a result of methodological changes to the NCVS and SCS. The SCS was designed to assist policymakers, as well as academic researchers and practitioners at federal, state, and local levels, to make informed decisions concerning crime in schools. The survey asks students a number of key questions about their experiences with and perceptions of crime and violence that occurred inside their school, on school grounds, on the school bus, or on the way to or from school. Students are asked additional questions about security measures used by their school, students' participation in afterschool activities, students' perceptions of school rules, the presence of weapons and gangs in school, the presence of hate-related words and graffiti in school, student reports of bullying and reports of rejection at school, and the availability of drugs and alcohol in school. Students are also asked attitudinal questions relating to fear of victimization and avoidance behavior at school.

The SCS survey was conducted for a 6-month period from January through June in all households selected for the NCVS (see discussion above for information about the NCVS sampling design and changes to the race/ethnicity variable beginning in 2003). Within these households, the eligible respondents for the SCS were those household members who had attended school at any time during the 6 months preceding the interview, were enrolled in grades 6–12, and were not homeschooled. In 2007, the questionnaire was changed and household members who attended school sometime during the school year of the interview were included. The age range of students covered in this report is 12–18 years of age. Eligible respondents were asked the supplemental questions in the SCS only after completing their entire NCVS interview. It should be noted that the first or unbounded NCVS interview has always been included in analysis of the SCS data and may result in the reporting of events outside of the requested reference period.

The prevalence of victimization for 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, and 2015 was calculated by using NCVS incident variables appended to the SCS data files of the same year. The NCVS type of crime variable was used to classify victimizations of students in the SCS as serious violent, violent, or theft. The NCVS variables asking where the incident happened (at school) and what the victim was doing when it happened (attending school or on the way to or from school) were used to ascertain whether the incident happened at school. Only incidents that occurred inside the United States are included.

In 2001, the SCS survey instrument was modified from previous collections. First, in 1995 and 1999, "at school" was defined for respondents as in the school building, on the school grounds, or on a school bus. In 2001, the definition for "at school" was changed to mean in the school building, on school property, on a school bus, or going to and from school. This change was made to the 2001 questionnaire in order to be consistent with the definition of "at school" as it is constructed in the NCVS and was also used as the definition in subsequent SCS collections. Cognitive interviews conducted by the U.S. Census Bureau on the 1999 SCS suggested that modifications to the definition of "at school" would not have a substantial impact on the estimates.

A total of about 9,700 students participated in the 1995 SCS, 8,400 in 1999, 8,400 in 2001, 7,200 in 2003, 6,300 in 2005, 5,600 in 2007, 5,000 in 2009, 6,500 in 2011, 5,700 in 2013, and 5,500 in 2015. In the 2015 SCS, the household completion rate was 82 percent.

In the 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, and 2015 SCS, the household completion rates were 95 percent, 94 percent, 93 percent, 92 percent, 91 percent, 90 percent, 92 percent, 91 percent, 86 percent, and 82 percent, respectively, and the student completion rates were 78 percent, 78 percent, 77 percent, 70 percent, 62 percent, 58 percent, 56 percent, 63 percent, 60 percent, and 58 percent, respectively. The overall unweighted SCS unit response rate (calculated by multiplying the household completion rate by the student completion rate) was about 74 percent in 1995, 73 percent in 1999, 72 percent in 2001, 64 percent in 2003, 56 percent in 2005, 53 percent in 2007, 51 percent in 2009, 57 percent in 2011, 51 percent in 2013, and 48 percent in 2015.
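For reference, the overall unweighted unit response rates quoted above are simply the product of the household and student completion rates for each year. The short sketch below reproduces the calculation from the published (rounded) rates, so a product may differ from the published overall rate by about a percentage point.

```python
# Overall unweighted SCS unit response rate = household completion rate x student completion rate.
scs_rates = {
    1995: (0.95, 0.78), 1999: (0.94, 0.78), 2001: (0.93, 0.77), 2003: (0.92, 0.70),
    2005: (0.91, 0.62), 2007: (0.90, 0.58), 2009: (0.92, 0.56), 2011: (0.91, 0.63),
    2013: (0.86, 0.60), 2015: (0.82, 0.58),
}
for year, (household_rate, student_rate) in scs_rates.items():
    print(year, round(household_rate * student_rate * 100), "percent")
```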

There are two types of nonresponse: unit and item nonresponse. NCES requires that any stage of data collection within a survey that has a unit base-weighted response rate of less than 85 percent be evaluated for the potential magnitude of unit nonresponse bias before the data or any analysis using the data may be released (U.S. Department of Education 2003). Due to the low unit response rates in 2005, 2007, 2009, 2011, 2013, and 2015, unit nonresponse bias analyses were done for those years. Unit response rates indicate how many sampled units completed interviews. Because interviews with students could be completed only after households had responded to the NCVS, the unit completion rate for the SCS reflects both the household interview completion rate and the student interview completion rate. Nonresponse can greatly affect the strength and application of survey data by increasing variance (because the actual sample size is reduced) and can produce bias if nonrespondents differ from respondents on characteristics of interest. Unit nonresponse bias occurs when groups with different response rates also differ in their responses to particular survey variables, and its magnitude depends on the response rate and on the differences between respondents and nonrespondents on key survey variables. Although the bias analysis cannot measure response bias directly, because the SCS is a sample survey and it is not known how the full population would have responded, the SCS sampling frame includes several key student and school characteristic variables for which data are known for both respondents and nonrespondents: sex, age, race/ethnicity, household income, region, and urbanicity, all of which are associated with student victimization. To the extent that response rates differ across these groups, nonresponse bias is a concern.

In 2005, the analysis of unit nonresponse bias found evidence of bias for the race, household income, and urbanicity variables. White (non-Hispanic) and Other (non-Hispanic) respondents had higher response rates than Black (non-Hispanic) and Hispanic respondents. Respondents from households with incomes of $35,000–$49,999 and $50,000 or more had higher response rates than those from households with incomes of less than $7,500, $7,500–$14,999, $15,000–$24,999, and $25,000–$34,999. Respondents living in urban areas had lower response rates than those living in rural or suburban areas. Although the extent of nonresponse bias cannot be determined, weighting adjustments, which corrected for differential response rates, should have reduced the problem.

In 2007, the analysis of unit nonresponse bias found evidence of bias by the race/ethnicity and household income variables. Hispanic respondents had lower response rates than other races/ethnicities. Respondents from households with an income of $25,000 or more had higher response rates than those from households with incomes of less than $25,000. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

In 2009, the analysis of unit nonresponse bias found evidence of potential bias for the race/ethnicity and urbanicity variables. White students and students of other races/ethnicities had higher response rates than did Black and Hispanic respondents. Respondents from households located in rural areas had higher response rates than those from households located in urban areas. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

In 2011, the analysis of unit nonresponse bias found evidence of potential bias for the age variable. Respondents 12 to 17 years old had higher response rates than did 18-year-old respondents in the NCVS and SCS interviews. Weighting the data adjusts for unequal selection probabilities and for the effects of nonresponse. The weighting adjustments that correct for differential response rates are created by region, age, race, and sex, and should have reduced the effect of nonresponse.

In 2013, the analysis of unit nonresponse bias found evidence of potential bias for the age, region, and Hispanic origin variables in the NCVS interview response. Within the SCS portion of the data, only the age and region variables showed significant unit nonresponse bias. Further analysis indicated that only the age 14 and the west region categories showed positive response biases that were significantly different from some of the other categories within the age and region variables. Based on the analysis, nonresponse bias seems to have little impact on the SCS results.

In 2015, the analysis of unit nonresponse bias found evidence of potential bias for age, race, Hispanic origin, urbanicity, and region in the NCVS interview response. For the SCS interview, the age, race, urbanicity, and region variables showed significant unit nonresponse bias. The age 14 group and rural areas showed positive response biases that were significantly different from other categories within the age and urbanicity variables. The northeast region and Asian race group showed negative response biases that were significantly different from other categories within the region and race variables. These results provide evidence that these subgroups may have a nonresponse bias associated with them. Response rates for most SCS survey items in all survey years were high—typically 95 percent or more, meaning there is little potential for item nonresponse bias for most items in the survey.

The weighted data permit inferences about the eligible student population who were enrolled in schools in all SCS data years. For more information about SCS, contact:

Rachel Hansen
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
(202) 245-7082
rachel.hansen@ed.gov
http://nces.ed.gov/programs/crime

Youth Risk Behavior Surveillance System (YRBSS)

The Youth Risk Behavior Surveillance System (YRBSS) is an epidemiological surveillance system developed by the Centers for Disease Control and Prevention (CDC) to monitor the prevalence of youth behaviors that most influence health. The YRBSS focuses on priority health-risk behaviors established during youth that result in the most significant mortality, morbidity, disability, and social problems during both youth and adulthood. The YRBSS includes a national school-based Youth Risk Behavior Survey (YRBS) as well as surveys conducted in states, territories, tribes, and large urban school districts. This report uses 1993, 1995, 1997, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, and 2015 YRBSS data.

The national YRBS uses a three-stage cluster sampling design to produce a nationally representative sample of students in grades 9–12 in the United States. In each survey, the target population consisted of all public and private school students in grades 9–12 in the 50 states and the District of Columbia. In the first sampling stage, primary sampling units (PSUs) were selected from strata formed on the basis of urbanization and the relative percentage of Black and Hispanic students in the PSU. These PSUs are either counties; subareas of large counties; or groups of smaller, adjacent counties. At the second stage, schools were selected with probability proportional to school enrollment size.

The final stage of sampling consisted of randomly selecting, in each chosen school and in each of grades 9–12, one or two classrooms from either a required subject, such as English or social studies, or a required period, such as homeroom or second period. All students in selected classes were eligible to participate. In surveys conducted before 2013, three strategies were used to oversample Black and Hispanic students: (1) larger sampling rates were used to select PSUs that are in high-Black and high-Hispanic strata; (2) a modified measure of size was used that increased the probability of selecting schools with a disproportionately high minority enrollment; and (3) two classes per grade, rather than one, were selected in schools with a high percentage of Black or Hispanic enrollment. In 2013 and 2015, only the selection of two classes per grade was needed to achieve adequate precision with minimum variance. Approximately 16,300 students participated in the 1993 survey, 10,900 participated in the 1995 survey, 16,300 participated in the 1997 survey, 15,300 participated in the 1999 survey, 13,600 participated in the 2001 survey, 15,200 participated in the 2003 survey, 13,900 participated in the 2005 survey, 14,000 participated in the 2007 survey, 16,400 participated in the 2009 survey, 15,400 participated in the 2011 survey, 13,600 participated in the 2013 survey, and 15,600 participated in the 2015 survey.

The overall response rate was 70 percent for the 1993 survey, 60 percent for the 1995 survey, 69 percent for the 1997 survey, 66 percent for the 1999 survey, 63 percent for the 2001 survey, 67 percent for the 2003 survey, 67 percent for the 2005 survey, 68 percent for the 2007 survey, 71 percent for the 2009 survey, 71 percent for the 2011 survey, 68 percent for the 2013 survey, and 60 percent for the 2015 survey. NCES standards call for response rates of 85 percent or better for cross-sectional surveys, and bias analyses are generally required by NCES when that percentage is not achieved. For YRBS data, a full nonresponse bias analysis has not been done because the data necessary to do the analysis are not available. A school nonresponse bias analysis, however, was done for the 2015 survey. This analysis found some evidence of potential bias by school type and urban status, but concluded that the bias had little impact on the overall estimates and would be further reduced by weight adjustment. The weights were developed to adjust for nonresponse and the oversampling of Black and Hispanic students in the sample. The final weights were constructed so that only weighted proportions of students (not weighted counts of students) in each grade matched national population projections.

State-level data were downloaded from the Youth Online: Comprehensive Results web page (http://nccd.cdc.gov/YouthOnline/). Each state and district school-based YRBS employs a two-stage, cluster sample design to produce representative samples of students in grades 9–12 in their jurisdiction. All except one state sample (South Dakota), and all district samples, include only public schools, and each district sample includes only schools in the funded school district (e.g., San Diego Unified School District) rather than in the entire city (e.g., greater San Diego area).

In the first sampling stage in all except a few states and districts, schools are selected with probability proportional to school enrollment size. In the second sampling stage, intact classes of a required subject or intact classes during a required period (e.g., second period) are selected randomly. All students in sampled classes are eligible to participate. Certain states and districts modify these procedures to meet their individual needs. For example, in a given state or district, all schools, rather than a sample of schools, might be selected to participate. State and local surveys that have a scientifically selected sample, appropriate documentation, and an overall response rate greater than or equal to 60 percent are weighted. The overall response rate reflects the school response rate multiplied by the student response rate. These three criteria are used to ensure that the data from those surveys can be considered representative of students in grades 9–12 in that jurisdiction. A weight is applied to each record to adjust for student nonresponse and the distribution of students by grade, sex, and race/ethnicity in each jurisdiction. Therefore, weighted estimates are representative of all students in grades 9–12 attending schools in each jurisdiction. Surveys that do not have an overall response rate of greater than or equal to 60 percent and that do not have appropriate documentation are not weighted and are not included in this report.
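A minimal sketch of the three weighting criteria described above follows; the function and argument names are invented for illustration and are not part of the YRBSS documentation.

```python
def yrbs_survey_is_weighted(scientific_sample, documented, school_rate, student_rate):
    """Return True if a state or district YRBS meets the three criteria for weighting:
    a scientifically selected sample, appropriate documentation, and an overall
    response rate (school rate x student rate) of at least 60 percent."""
    overall_rate = school_rate * student_rate
    return scientific_sample and documented and overall_rate >= 0.60

# Example: 80 percent school response and 85 percent student response give a 68 percent
# overall response rate, so the survey would be weighted (assuming the other criteria hold).
print(yrbs_survey_is_weighted(True, True, 0.80, 0.85))  # prints True
```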

In 2015, a total of 37 states and 19 districts had weighted data. Not all of the districts were contained in the 37 states. For example, Texas was not one of the 37 states that obtained weighted data, but it contained two districts that did. For more information on the location of the districts, please see http://www.cdc.gov/healthyyouth/yrbs/participation.htm. In sites with weighted data, the student sample sizes for the state and district YRBS ranged from 1,052 to 55,596. School response rates ranged from 70 to 100 percent, student response rates ranged from 64 to 90 percent, and overall response rates ranged from 60 to 88 percent.

Readers should note that reports of these data published by the CDC and in this report do not include percentages where the denominator includes less than 100 unweighted cases.

In 1999, in accordance with changes to the Office of Management and Budget's standards for the classification of federal data on race and ethnicity, the YRBS item on race/ethnicity was modified. The version of the race and ethnicity question used in 1993, 1995, and 1997 was:

How do you describe yourself?

  1. White—not Hispanic
  2. Black—not Hispanic
  3. Hispanic or Latino
  4. Asian or Pacific Islander
  5. American Indian or Alaskan Native
  6. Other

The version used in 1999, 2001, 2003, and in the 2005 state and local district surveys was:

How do you describe yourself? (Select one or more responses.)

  1. American Indian or Alaska Native
  2. Asian
  3. Black or African American
  4. Hispanic or Latino
  5. Native Hawaiian or Other Pacific Islander
  6. White

In the 2005 national survey and in all 2007, 2009, 2011, 2013, and 2015 surveys, race/ethnicity was computed from two questions: (1) "Are you Hispanic or Latino?" (response options were "yes" and "no"), and (2) "What is your race?" (response options were "American Indian or Alaska Native," "Asian," "Black or African American," "Native Hawaiian or Other Pacific Islander," or "White"). For the second question, students could select more than one response option. For this report, students were classified as "Hispanic" if they answered "yes" to the first question, regardless of how they answered the second question. Students who answered "no" to the first question and selected more than one race/ethnicity in the second category were classified as "More than one race." Students who answered "no" to the first question and selected only one race/ethnicity were classified as that race/ethnicity. Race/ethnicity was classified as missing for students who did not answer the first question and for students who answered "no" to the first question but did not answer the second question.
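The classification rules in the preceding paragraph can be written as a short decision function. The sketch below is a hypothetical illustration; the function name and return labels are not taken from the YRBS codebook.

```python
def classify_race_ethnicity(hispanic, races):
    """Classify a student from the two YRBS questions as described above.
    `hispanic` is "yes", "no", or None (question not answered); `races` is the
    list of race categories the student selected (possibly empty)."""
    if hispanic == "yes":
        return "Hispanic"            # classified Hispanic regardless of the race question
    if hispanic == "no":
        if len(races) > 1:
            return "More than one race"
        if len(races) == 1:
            return races[0]          # the single race/ethnicity selected
        return "Missing"             # answered "no" but did not answer the race question
    return "Missing"                 # did not answer the Hispanic origin question

# Examples:
print(classify_race_ethnicity("yes", ["White", "Asian"]))            # Hispanic
print(classify_race_ethnicity("no", ["Black or African American"]))  # Black or African American
print(classify_race_ethnicity("no", []))                             # Missing
```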

CDC has conducted two studies to understand the effect of changing the race/ethnicity item on the YRBS. Brener, Kann, and McManus (2003) found that allowing students to select more than one response to a single race/ethnicity question on the YRBS had only a minimal effect on reported race/ethnicity among high school students. Eaton et al. (2007) found that self-reported race/ethnicity was similar regardless of whether the single-question or a two-question format was used.

For additional information about the YRBSS, contact:

Laura Kann
Division of Adolescent and School Health
National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention
Centers for Disease Control and Prevention
Mailstop E-75
1600 Clifton Road NE
Atlanta, GA 30329
(404) 718-8132
lkk1@cdc.gov
http://www.cdc.gov/yrbs

Schools and Staffing Survey (SASS)

The Schools and Staffing Survey (SASS) is a set of related questionnaires that collect descriptive data on the context of public and private elementary and secondary education. Data reported by districts, schools, principals, teachers, and library media centers provide a variety of statistics on the condition of education in the United States that may be used by policymakers and the general public. The SASS system covers a wide range of topics, including teacher demand, teacher and principal characteristics, teachers' and principals' perceptions of school climate and problems in their schools, teacher and principal compensation, district hiring and retention practices, general conditions in schools, and basic characteristics of the student population.

SASS data are collected through a mail questionnaire with telephone and in-person field follow-up. SASS has been conducted by the U.S. Census Bureau for NCES since the first administration of the survey, which was conducted during the 1987–88 school year. Subsequent SASS administrations were conducted in 1990–91, 1993–94, 1999–2000, 2003–04, 2007–08, and 2011–12.

SASS is designed to produce national, regional, and state estimates for public elementary and secondary schools, school districts, principals, teachers, and school library media centers; and national and regional estimates for public charter schools, as well as principals, teachers, and school library media centers within these schools. For private schools, the sample supports national, regional, and affiliation estimates for schools, principals, and teachers.

From its inception, SASS has had five core components: school questionnaires, teacher listing forms, teacher questionnaires, principal questionnaires, and school district (prior to 1999–2000, "teacher demand and shortage") questionnaires. A sixth component, school library media center questionnaires, was introduced in the 1993–94 administration and has been included in every subsequent administration of SASS. School library data were also collected in the 1990–91 administration of the survey through the school and principal questionnaires.

School questionnaires used in SASS include the Public and Private School Questionnaires, teacher questionnaires include the Public and Private School Teacher Questionnaires, principal questionnaires include the Public and Private School Principal (or School Administrator) Questionnaires, school district questionnaires include the School District (or Teacher Demand and Shortage) Questionnaire, and library media center questionnaires include the School Library Media Center Questionnaire.

Although the five core questionnaires and the school library media questionnaires have remained relatively stable over the various administrations of SASS, the survey has changed to accommodate emerging issues in elementary and secondary education. Some items have been added, some have been deleted, and some questionnaire items have been reworded.

During the 1990–91 SASS cycle, NCES worked with the Office of Indian Education to add an Indian School Questionnaire to SASS, and it remained a part of SASS through 2007–08. The Indian School Questionnaire explores the same school-level issues that the Public and Private School Questionnaires explore, allowing comparisons among the three types of schools. The 1990–91, 1993–94, 1999–2000, 2003–04, and 2007–08 administrations of SASS obtained data on Bureau of Indian Education (BIE) schools (schools funded or operated by the BIE), but the 2011–12 administration did not collect data from BIE schools. SASS estimates for all survey years presented in this report exclude BIE schools, and as a result, estimates in this report may differ from those in previously published reports.

School library media center questionnaires were administered in public, private, and BIE schools as part of the 1993–94 and 1999–2000 SASS. During the 2003–04 administration of SASS, only library media centers in public schools were surveyed, and in 2007–08 library media centers in public schools and BIE and BIE-funded schools were surveyed. The 2011–12 survey collected data only on school library media centers in traditional public schools and in public charter schools. School library questions focused on facilities, services and policies, staffing, technology, information literacy, collections and expenditures, and media equipment. New or revised topics included access to online licensed databases, resource availability, and additional elements on information literacy. The Student Records and Library Media Specialist/Librarian Questionnaires were administered only in 1993–94.

As part of the 1999–2000 SASS, the Charter School Questionnaire was sent to the universe of charter schools in operation in 1998–99. In 2003–04 and in subsequent administrations of SASS, charter schools were included in the public school sample as opposed to being sent a separate questionnaire. Another change in the 2003–04 administration of SASS was a revised data collection procedure using a primary in-person contact within the school intended to reduce the field follow-up phase.

The SASS teacher surveys collect information on the characteristics of teachers, such as their age, race/ethnicity, years of teaching experience, average number of hours per week spent on teaching activities, base salary, average class size, and highest degree earned. These teacher-reported data may be combined with related information on their school's characteristics, such as school type (e.g., public traditional, public charter, Catholic, private other religious, and private nonsectarian), community type, and school enrollment size. The teacher questionnaires also ask for information on teacher opinions regarding the school and teaching environment. In 1993–94, about 53,000 public school teachers and 10,400 private school teachers were sampled. In 1999–2000, about 56,300 public school teachers, 4,400 public charter school teachers, and 10,800 private school teachers were sampled. In 2003–04, about 52,500 public school teachers and 10,000 private school teachers were sampled. In 2007–08, about 48,400 public school teachers and 8,200 private school teachers were sampled. In 2011–12, about 51,100 public school teachers and 7,100 private school teachers were sampled. Weighted overall response rates in 2011–12 were 61.8 percent for public school teachers and 50.1 percent for private school teachers.

The SASS principal surveys focus on such topics as age, race/ethnicity, sex, average annual salary, years of experience, highest degree attained, perceived influence on decisions made at the school, and hours spent per week on all school activities. These data on principals can be placed in the context of other SASS data, such as the type of the principal's school (e.g., public traditional, public charter, Catholic, other religious, or nonsectarian), enrollment, and percentage of students eligible for free or reduced-price lunch. In 2003–04, about 10,200 public school principals were sampled, and in 2007–08, about 9,800 public school principals were sampled. In 2011–12, about 11,000 public school principals and 3,000 private school principals were sampled. Weighted response rates in 2011–12 for public school principals and private school principals were 72.7 percent and 64.7 percent, respectively.

The SASS 2011–12 sample of schools was confined to the 50 states and the District of Columbia and excludes the other jurisdictions, the Department of Defense overseas schools, the BIE schools, and schools that do not offer teacher-provided classroom instruction in grades 1–12 or the ungraded equivalent. The SASS 2011–12 sample included 10,250 traditional public schools, 750 public charter schools, and 3,000 private schools.

The public school sample for the 2011–12 SASS was based on an adjusted public school universe file from the 2009–10 Common Core of Data (CCD), a database of all the nation's public school districts and public schools. The private school sample for the 2011–12 SASS was selected from the 2009–10 Private School Universe Survey (PSS), as updated for the 2011–12 PSS. This update collected membership lists from private school associations and religious denominations, as well as private school lists from state education departments. The 2011–12 SASS private school frame was further augmented by the inclusion of additional schools that were identified through the 2009–10 PSS area frame data collection.

Additional resources available regarding SASS include the methodology report Quality Profile for SASS, Rounds 1–3: 1987–1995, Aspects of the Quality of Data in the Schools and Staffing Surveys (SASS) (Kalton et al. 2000) (NCES 2000-308), as well as these reports: Documentation for the 2011–12 Schools and Staffing Survey (Cox et al. 2017) and User's Manual for the 2011–12 Schools and Staffing Survey, Volumes 1–6 (Goldring et al. 2013) (NCES 2013-330 through 2013-335). For additional information about the SASS program, contact:

Isaiah O'Rear
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
isaiah.orear@ed.gov
http://nces.ed.gov/surveys/sass

National Teacher and Principal Survey (NTPS)

The National Teacher and Principal Survey (NTPS) is a set of related questionnaires that collect descriptive data on the context of elementary and secondary education. Data reported by schools, principals, and teachers provide a variety of statistics on the condition of education in the United States that may be used by policymakers and the general public. The NTPS system covers a wide range of topics, including teacher demand, teacher and principal characteristics, teachers' and principals' perceptions of school climate and problems in their schools, teacher and principal compensation, district hiring and retention practices, general conditions in schools, and basic characteristics of the student population.

The NTPS was first conducted during the 2015–16 school year. The survey is a redesign of the Schools and Staffing Survey (SASS), which was conducted from the 1987–88 school year to the 2011–12 school year. Although the NTPS maintains the SASS survey's focus on schools, teachers, and administrators, the NTPS has a different structure and sample than SASS. In addition, whereas SASS operated on a 4-year survey cycle, the NTPS operates on a 2-year survey cycle.

The school sample for the 2015–16 NTPS was based on an adjusted public school universe file from the 2013–14 Common Core of Data (CCD), a database of all the nation's public school districts and public schools. The NTPS definition of a school is the same as the SASS definition of a school—an institution or part of an institution that provides classroom instruction to students, has one or more teachers to provide instruction, serves students in one or more of grades 1–12 or the ungraded equivalent, and is located in one or more buildings apart from a private home.

The 2015–16 NTPS universe of schools is confined to the 50 states plus the District of Columbia. It excludes the Department of Defense dependents schools overseas, schools in U.S. territories overseas, and CCD schools that do not offer teacher-provided classroom instruction in grades 1–12 or the ungraded equivalent. Bureau of Indian Education schools are included in the NTPS universe, but these schools were not oversampled and the data do not support separate BIE estimates.

The NTPS includes three key components: school questionnaires, principal questionnaires, and teacher questionnaires. NTPS data are collected by the U.S. Census Bureau through a mail questionnaire with telephone and in-person field follow-up. The school and principal questionnaires were sent to sampled schools, and the teacher questionnaire was sent to a sample of teachers working at sampled schools. The NTPS school sample consisted of about 8,300 public schools; the principal sample consisted of about 8,300 public school principals; and the teacher sample consisted of about 40,000 public school teachers.

The school questionnaire asks knowledgeable school staff members about grades offered, student attendance and enrollment, staffing patterns, teaching vacancies, programs and services offered, curriculum, and community service requirements. In addition, basic information is collected about the school year, including the beginning time of students' school days and the length of the school year. The weighted unit response rate for the 2015–16 school survey was 72.5 percent.

The principal questionnaire collects information about principal/school head demographic characteristics, training, experience, salary, goals for the school, and judgments about school working conditions and climate. Information is also obtained on professional development opportunities for teachers and principals, teacher performance, barriers to dismissal of underperforming teachers, school climate and safety, parent/guardian participation in school events, and attitudes about educational goals and school governance. The weighted unit response rate for the 2015–16 principal survey was 71.8 percent.

The teacher questionnaire collects data from teachers about their current teaching assignment, workload, education history, and perceptions and attitudes about teaching. Questions are also asked about teacher preparation, induction, organization of classes, computers, and professional development. The weighted response rate for the 2015–16 teacher survey was 67.8 percent.

Further information about the NTPS is available in User's Manual for the 2015–16 National Teacher and Principal Survey, Volumes 1–4 (NCES 2017-131 through NCES 2017-134).

For additional information about the NTPS program, please contact:

Maura Spiegelman
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
maura.spiegelman@ed.gov
http://nces.ed.gov/surveys/ntps

School Survey on Crime and Safety (SSOCS)

The School Survey on Crime and Safety (SSOCS) is the only recurring federal survey that collects detailed information on the incidence, frequency, seriousness, and nature of violence affecting students and school personnel, as well as other indicators of school safety, from the schools' perspective. SSOCS is conducted by the National Center for Education Statistics (NCES) within the U.S. Department of Education and collected by the U.S. Census Bureau. Data from this collection can be used to examine the relationship between school characteristics and violent and serious violent crimes in primary, middle, high, and combined schools. In addition, data from SSOCS can be used to assess what crime prevention programs, practices, and policies are used by schools. SSOCS has been conducted in school years 1999–2000, 2003–04, 2005–06, 2007–08, 2009–10, and 2015–16.

The sampling frame for SSOCS:2016 was constructed from the 2013–14 Public Elementary/Secondary School Universe data file of the Common Core of Data (CCD), an annual collection of data on all public K–12 schools and school districts. The SSOCS sampling frame was restricted to regular public schools (including charter schools) in the United States and the District of Columbia. Other types of schools from the CCD Public Elementary/Secondary School Universe file were excluded from the SSOCS sampling frame. For instance, schools in Puerto Rico, American Samoa, the Commonwealth of the Northern Mariana Islands, Guam, and the U.S. Virgin Islands, as well as Department of Defense dependents schools and Bureau of Indian Education schools, were excluded. Also excluded were special education, alternative, vocational, virtual, newly closed, ungraded, and home schools, and schools with the highest grade of kindergarten or lower.

The SSOCS:2016 universe totaled 83,600 schools. From this total, 3,553 schools were selected for participation in the survey. The sample was stratified by instructional level, type of locale (urbanicity), and enrollment size. The sample of schools in each instructional level was allocated to each of the 16 cells formed by the cross-classification of the four categories of enrollment size and four types of locale. The target number of responding schools allocated to each of the 16 cells was proportional to the sum of the square roots of the total student enrollment over all schools in the cell. The target respondent count within each stratum was then inflated to account for anticipated nonresponse; this inflated count was the sample size for the stratum.
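The allocation rule described above can be sketched as follows: the target number of responding schools in each cell is proportional to the sum of the square roots of school enrollment in that cell, and the target is then inflated by an anticipated response rate to obtain the stratum sample size. The cell definitions, enrollments, totals, and response rate below are hypothetical and serve only to illustrate the arithmetic.

```python
import math

# Hypothetical cells (cross-classifications of enrollment size and locale) with the
# enrollments of the schools they contain.
cells = {
    "cell_1": [450, 520, 610],
    "cell_2": [1200, 950],
    "cell_3": [300, 280, 320, 290],
}
total_target_respondents = 100    # hypothetical target number of responding schools overall
anticipated_response_rate = 0.63  # hypothetical anticipated response rate

# Target responding schools per cell, proportional to the sum of square roots of enrollment.
sqrt_sums = {cell: sum(math.sqrt(e) for e in enrollments) for cell, enrollments in cells.items()}
total_sqrt = sum(sqrt_sums.values())
targets = {cell: total_target_respondents * s / total_sqrt for cell, s in sqrt_sums.items()}

# Inflate each cell's target for anticipated nonresponse to get the sample size for the stratum.
sample_sizes = {cell: math.ceil(t / anticipated_response_rate) for cell, t in targets.items()}
print(sample_sizes)
```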

Data collection began in February 2016 and ended in early July 2016. Questionnaire packets were mailed to the principals of the sampled schools, who were asked to complete the survey or have it completed by the person at the school who is most knowledgeable about school crime and policies for providing a safe school environment. A total of 2,092 public schools submitted usable questionnaires, resulting in an overall weighted unit response rate of 62.9 percent.

For more information about the SSOCS, contact:

Rachel Hansen
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
(202) 245-7082
rachel.hansen@ed.gov
http://nces.ed.gov/surveys/ssocs/

Fast Response Survey System (FRSS)

The Fast Response Survey System (FRSS), established in 1975, collects issue-oriented data quickly, with a minimal burden on respondents. The FRSS, whose surveys collect and report data on key education issues at the elementary and secondary levels, was designed to meet the data needs of Department of Education analysts, planners, and decisionmakers when information could not be collected quickly through NCES's large recurring surveys. Findings from FRSS surveys have been included in congressional reports, testimony to congressional subcommittees, NCES reports, and other Department of Education reports. The findings are also often used by state and local education officials.

Data collected through FRSS surveys are representative at the national level, drawing from a sample that is appropriate for each study. The FRSS collects data from state education agencies and national samples of other educational organizations and participants, including local education agencies, public and private elementary and secondary schools, elementary and secondary school teachers and principals, and public libraries and school libraries. To ensure a minimal burden on respondents, the surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Sample sizes are relatively small (usually about 1,000 to 1,500 respondents per survey) so that data collection can be completed quickly.

The FRSS survey "School Safety and Discipline: 2013–14" (FRSS 106) collected information on specific safety and discipline plans and practices, training for classroom teachers and aides related to school safety and discipline issues, security personnel, frequency of specific discipline problems, and number of incidents of various offenses. The sample for the "School Safety and Discipline: 2013–14" survey was selected from the 2011–12 Common Core of Data (CCD) Public School Universe file. Approximately 1,600 regular public elementary, middle, and high/combined schools in the 50 states and the District of Columbia were selected for the study. (For the purposes of the study, "regular" schools included charter schools.) In February 2014, questionnaires and cover letters were mailed to the principal of each sampled school. The letter requested that the questionnaire be completed by the person most knowledgeable about discipline issues at the school, and respondents were offered the option of completing the survey either on paper or online. Telephone follow-up for survey nonresponse and data clarification was initiated in March 2014 and completed in July 2014. About 1,350 schools completed the survey. The weighted response rate was 85 percent.

One of the goals of the FRSS "School Safety and Discipline: 2013–14" survey is to allow comparisons to the School Survey on Crime and Safety (SSOCS) data. Consistent with the approach used on SSOCS, respondents were asked to report for the current 2013–14 school year to date. Information about violent incidents that occurred in the school between the time that the survey was completed and the end of the school year is not included in the survey data.

For more information about the FRSS, contact:

John Ralph
Annual Reports and Information
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
John.Ralph@ed.gov
http://nces.ed.gov/surveys/frss/

Campus Safety and Security Survey

The Campus Safety and Security Survey is administered by the Office of Postsecondary Education. Since 1990, all postsecondary institutions participating in Title IV student financial aid programs have been required to comply with the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act, known as the Clery Act. The law was originally enacted as the Crime Awareness and Campus Security Act and was amended in 1992, 1998, and again in 2000; the 1998 amendments renamed it the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act. The Clery Act requires schools to give timely warnings of crimes to the student body and staff; to publicize campus crime and safety policies; and to collect, report, and disseminate campus crime data.

Crime statistics are collected and disseminated by campus security authorities. These authorities include campus police; nonpolice security staff responsible for monitoring campus property; municipal, county, or state law enforcement agencies with institutional agreements for security services; individuals and offices designated by the campus security policies as those to whom crimes should be reported; and officials of the institution with significant responsibility for student and campus activities. The act requires disclosure for offenses committed at geographic locations associated with each institution. For on-campus crimes, this includes property and buildings owned or controlled by the institution. In addition to on-campus crimes, the act requires disclosure of crimes committed in or on a noncampus building or property owned or controlled by the institution for educational purposes or for recognized student organizations, and on public property within or immediately adjacent to and accessible from the campus.

There are three types of statistics described in this report: criminal offenses; arrests for illegal weapons possession and violation of drug and liquor laws; and disciplinary referrals for illegal weapons possession and violation of drug and liquor laws. Criminal offenses include homicide, sex offenses, robbery, aggravated assault, burglary, motor vehicle theft, and arson. Only the most serious offense is counted when more than one offense was committed during an incident. The two other categories, arrests and referrals, include counts for illegal weapons possession and violation of drug and liquor laws. Arrests and referrals are counted only when the conduct violated the law, not merely institutional policy; if no federal, state, or local law was violated, these events are not reported. Further, if an individual is both arrested and referred for disciplinary action for an offense, only the arrest is counted. Arrest is defined to include persons processed by arrest, citation, or summons, including those arrested and released without formal charges being placed. Referral for disciplinary action is defined to include persons referred to any official who initiates a disciplinary action of which a record is kept and which may result in the imposition of a sanction. Referrals may or may not involve the police or other law enforcement agencies.
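The counting conventions in this paragraph, in particular the most-serious-offense rule and the precedence of arrests over referrals, can be summarized in code. The sketch below uses a hypothetical offense ordering and field names for illustration only; the authoritative rules are in the Handbook cited at the end of this section.

```python
# Hypothetical ordering from most to least serious, for illustration only.
OFFENSE_ORDER = ['homicide', 'sex offense', 'robbery', 'aggravated assault',
                 'burglary', 'motor vehicle theft', 'arson']

def classify_incident(offenses, arrested=False, referred=False, law_violated=True):
    """Apply the counting rules described above to one incident."""
    counts = {'criminal_offense': None, 'arrest': 0, 'referral': 0}

    # Only the most serious offense in an incident is counted.
    for offense in OFFENSE_ORDER:
        if offense in offenses:
            counts['criminal_offense'] = offense
            break

    # Arrests and referrals are counted only when a law was violated,
    # and an arrest takes precedence over a referral for the same offense.
    if law_violated:
        if arrested:
            counts['arrest'] = 1
        elif referred:
            counts['referral'] = 1
    return counts

# Hypothetical incident with two offenses: only burglary is counted.
print(classify_incident({'burglary', 'motor vehicle theft'}))
```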

Counts of criminal offenses and arrests may include students, faculty, staff, and members of the general public; the offenses may or may not involve students enrolled in the institution. Referrals primarily involve persons formally associated with the institution (i.e., students, faculty, and staff).

Campus security and police statistics do not necessarily reflect the total amount or even the nature of crime on campus. Rather, they reflect incidents that have been reported to and recorded by campus security and/or local police. The process of reporting and recording alleged criminal incidents involves some well-known social filters and steps, beginning with the victim. First, the victim or some other party must recognize that a possible crime has occurred and report the event. The event must then be recorded, and if it is recorded, the nature and type of offense must be classified. This classification may differ from the initial report because of the collection of additional evidence, interviews with witnesses, or officer discretion. Also, the date an incident is reported may be much later than the date of the actual incident. For example, a victim may not realize something was stolen until much later, or a victim of violence may wait a number of days to report a crime. Other factors affect the probability that an incident is reported, including the severity of the event, the victim's confidence in and prior experience with the police or security agency, and influence from third parties (e.g., friends and family knowledgeable about the incident). Finally, the reader should be mindful that these figures represent alleged criminal offenses reported to campus security and/or local police within a given year; they do not necessarily reflect prosecutions or convictions. More information on the reporting of campus crime and safety data may be obtained from The Handbook for Campus Safety and Security Reporting (U.S. Department of Education 2016), http://www2.ed.gov/admins/lead/safety/campus.html#handbook.

Policy Coordination, Development, and Accreditation Service
Office of Postsecondary Education
U.S. Department of Education
http://ope.ed.gov/security/index.aspx

Campus Safety and Security Help Desk
(800) 435-5985
CampusSafetyHelp@westat.com

EDFacts

EDFacts is a centralized data collection through which state education agencies submit K–12 education data to the U.S. Department of Education (ED). All data in EDFacts are organized into "data groups" and reported to ED using defined file specifications. Depending on the data group, state education agencies may submit aggregate counts for the state as a whole or detailed counts for individual schools or school districts. EDFacts does not collect student-level records. The entities that are required to report EDFacts data vary by data group but may include the 50 states, the District of Columbia, the Department of Defense (DoD) dependents schools, the Bureau of Indian Education, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. More information about EDFacts file specifications and data groups can be found at http://www.ed.gov/edfacts.

EDFacts is a universe collection and is not subject to sampling error, but nonsampling errors such as nonresponse and inaccurate reporting may occur. The U.S. Department of Education attempts to minimize nonsampling errors by training data submission coordinators and reviewing the quality of state data submissions. However, anomalies may still be present in the data.

Differences in state data collection systems may limit the comparability of EDFacts data across states and across time. To build EDFacts files, state education agencies rely on data that were reported by their schools and school districts. The systems used to collect these data are evolving rapidly and differ from state to state. For example, there is a large shift in California's firearm incident data between 2010–11 and 2011–12. California cited a new student data system that more accurately collects firearm incident data as the reason for the magnitude of the difference.

In some cases, EDFacts data may not align with data reported on state education agency websites. States may update their websites on different schedules than those they use to report to ED. Further, ED may use methods to protect the privacy of individuals represented within the data that could be different from the methods used by an individual state.

EDFacts firearm incidents data are collected in data group 601 within file 094. EDFacts collects this data group on behalf of the Office of Safe and Healthy Students in the Office of Elementary and Secondary Education. The definition for this data group is "The number of incidents involving students who brought or possessed firearms at school." The reporting period is the entire school year. Data group 601 collects separate counts for incidents involving handguns, rifles/shotguns, other firearms, and multiple weapon types. The counts reported here exclude the "other firearms" category. For more information about this data group, please see file specification 094 for the relevant school year, available at http://www2.ed.gov/about/inits/ed/edfacts/file-specifications.html.
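To make the exclusion concrete, the sketch below shows one way the counts reported here could be derived from a DG 601-style submission. The category labels and the structure of the input are assumptions for illustration; the permitted values and layout are defined in file specification 094.

```python
def firearm_incidents_reported(counts_by_category, excluded=('Other Firearms',)):
    """Sum DG 601-style weapon-type counts, excluding the categories the
    report omits ('Other Firearms' here is a hypothetical label)."""
    return sum(n for category, n in counts_by_category.items()
               if category not in excluded)

# A hypothetical school-year submission for one state:
example = {'Handguns': 3, 'Rifles/Shotguns': 1, 'Other Firearms': 2,
           'Multiple Weapon Types': 0}
print(firearm_incidents_reported(example))  # 4
```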

EDFacts discipline incidents data are collected in data group 523 within file 030. EDFacts collects this data group on behalf of the Office of Safe and Healthy Students and the School Improvement Grant program in the Office of Elementary and Secondary Education. The definition for this data group is "The cumulative number of times that students were removed from their regular education program for at least an entire school day for discipline." The reporting period is the entire school year. For more information about this data group, please see file specification 030 for the relevant school year, available at http://www2.ed.gov/about/inits/ed/edfacts/file-specifications.html.

For more information about EDFacts, contact:

EDFacts
Administrative Data Division
Elementary/Secondary Branch
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
EDFacts@ed.gov
http://www2.ed.gov/about/inits/ed/edfacts/index.html

Program for International Student Assessment (PISA)

The Program for International Student Assessment (PISA), organized by the Organization for Economic Cooperation and Development (OECD), an intergovernmental organization of industrialized countries, is a system of international assessments that focuses on 15-year-olds' capabilities in reading literacy, mathematics literacy, and science literacy. PISA also includes measures of general, or cross-curricular, competencies such as learning strategies. PISA emphasizes functional skills that students have acquired as they near the end of compulsory schooling.

PISA is a 2-hour exam. Assessment items include a combination of multiple-choice questions and open-ended questions that require students to develop their own responses. PISA scores are reported on a scale that ranges from 0 to 1,000, with the OECD mean set at 500 and the standard deviation set at 100. In 2015, literacy was assessed in science, reading, and mathematics through a computer-based assessment in the majority of countries, including the United States. Education systems could also participate in optional pencil-and-paper financial literacy assessments and computer-based mathematics and reading assessments. In each education system, the assessment is translated into the primary language of instruction; in the United States, all materials are written in English.
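Given the scale parameters described above (OECD mean 500, standard deviation 100), a scale score can be read as a distance from the OECD mean in standard-deviation units, as in this minimal illustration:

```python
def sds_from_oecd_mean(score, mean=500.0, sd=100.0):
    """Express a PISA scale score as standard deviations from the OECD mean,
    using the scale parameters described above."""
    return (score - mean) / sd

print(sds_from_oecd_mean(600))  # 1.0: one standard deviation above the OECD mean
```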

Forty-three education systems participated in the 2000 PISA; 41 education systems participated in 2003; 57 (30 OECD member countries and 27 nonmember countries or education systems) participated in 2006; and 65 (34 OECD member countries and 31 nonmember countries or education systems) participated in 2009. (An additional nine education systems administered the 2009 PISA in 2010.) In PISA 2012, 65 education systems (34 OECD member countries and 31 nonmember countries or education systems), as well as the U.S. states of Connecticut, Florida, and Massachusetts, participated. In the 2015 PISA, 73 education systems (35 OECD member countries and 31 nonmember countries or education systems), as well as the states of Massachusetts and North Carolina and the territory of Puerto Rico, participated.

To implement PISA, each of the participating education systems scientifically draws a nationally representative sample of 15-year-olds, regardless of grade level. In the PISA 2015 national sample for the United States, about 5,700 students from 177 public and private schools were represented. Massachusetts, North Carolina, and Puerto Rico also participated in PISA 2015 as separate education systems. In Massachusetts, about 1,400 students from 48 public schools participated; in North Carolina, about 1,900 students from 54 public schools participated; and in Puerto Rico, about 1,400 students in 47 public and private schools participated.

The intent of PISA reporting is to provide an overall description of performance in reading literacy, mathematics literacy, and science literacy every 3 years, and to provide a more detailed look at each domain in the years when it is the major focus. These cycles allow education systems to track trends in each of the three subject areas over time. In the first cycle, PISA 2000, reading literacy was the major focus, occupying roughly two-thirds of assessment time. For 2003, PISA focused on mathematics literacy, as well as the ability of students to solve problems in real-life settings. In 2006, PISA focused on science literacy; in 2009, it focused on reading literacy again; and in 2012, it focused on mathematics literacy. PISA 2015 focused on science, as it did in 2006.

PISA also includes questionnaires that elicit contextual information for interpreting student achievement. For example, principals in participating schools are asked about school climate, specifically, the extent to which student learning at the school is hindered by student truancy, students skipping classes, student use of alcohol or illegal drugs, students intimidating or bullying other students, and students lacking respect for teachers, among other circumstances.

For more information about PISA, contact:

Patrick Gonzales
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
patrick.gonzales@ed.gov
http://nces.ed.gov/surveys/pisa
