- Violent Deaths
- Nonfatal Student and Teacher Victimization
- School Environment
- Violent and Other Criminal Incidents at Public Schools, and Those Reported to the Police
- Discipline Problems Reported by Public Schools
- Students’ Reports of Gangs at School
- Students’ Reports of Being Called Hate-Related Words and Seeing Hate-Related Graffiti
- Bullying at School and Electronic Bullying
- Teachers’ Reports on School Conditions
- Fights, Weapons, and Illegal Substances
- Fear and Avoidance
- Discipline, Safety, and Security Measures
- Postsecondary Campus Safety and Security
- Appendix A: Technical Notes
- Appendix B: Glossary of Terms
Our nation’s schools should be safe havens for teaching and learning, free of crime and violence. Any instance of crime or violence at school not only affects the individuals involved but also may disrupt the educational process and affect bystanders, the school itself, and the surrounding community (Brookmeyer, Fanti, and Henrich 2006; Goldstein, Young, and Boyd 2008). For both students and teachers, victimization at school can have lasting effects. In addition to experiencing loneliness, depression, and adjustment difficulties (Crick and Bigbee 1998; Crick and Grotpeter 1996; Nansel et al. 2001; Prinstein, Boergers, and Vernberg 2001; Storch et al. 2003), victimized children are more prone to truancy (Ringwalt, Ennett, and Johnson 2003), poor academic performance (MacMillan and Hagan 2004; Wei and Williams 2004), dropping out of school (Beauvais et al. 1996; MacMillan and Hagan 2004), and violent behaviors (Nansel et al. 2003). For teachers, incidents of victimization may lead to professional disenchantment and even departure from the profession altogether (Karcher 2002; Smith and Smith 2006).
For parents, school staff, and policymakers to effectively address school crime, they need an accurate understanding of the extent, nature, and context of the problem. However, it is difficult to gauge the scope of crime and violence in schools given the large amount of attention devoted to isolated incidents of extreme school violence. Measuring progress toward safer schools requires establishing good indicators of the current state of school crime and safety across the nation and regularly updating and monitoring these indicators; this is the aim of Indicators of School Crime and Safety.
Purpose and Organization of This Report
Indicators of School Crime and Safety: 2018 is the 21st in a series of reports produced since 1998 by the National Center for Education Statistics (NCES) and the Bureau of Justice Statistics (BJS) that present the most recent data available on school crime and student safety. Although the data presented in this report are the most recent available at the time of publication, the most recent two or more school years are not yet covered because of data processing timelines. The report is not intended to be an exhaustive compilation of school crime and safety information, nor does it attempt to explore the reasons for crime and violence in schools. Rather, it is designed to provide a brief summary of information from an array of data sources and to make data on national school crime and safety accessible to policymakers, educators, parents, and the general public.
Indicators of School Crime and Safety: 2018 is organized into sections that address specific concerns of readers. The sections cover violent deaths; nonfatal student and teacher victimization; school environment; fights, weapons, and illegal substances; fear and avoidance; discipline, safety, and security measures; and campus safety and security. This year’s report also includes a spotlight section on topics related to youth opioid use, perceptions of bullying, and active shooter incidents in educational settings. Each section contains a set of indicators that, taken together, describe a distinct aspect of school crime and safety. Where available, data on crimes that occur outside of school grounds are offered as a point of comparison.1 Supplemental tables for each indicator provide more detailed breakouts and standard errors for estimates. A reference section and a glossary of terms appear at the end of the report.
This edition of the report contains updated data for 16 indicators: violent deaths at school and away from school (Indicator 1); incidence of victimization at school and away from school (Indicator 2); prevalence of victimization at school (Indicator 3); threats and injuries with weapons on school property (Indicator 4); students’ reports of gangs at school (Indicator 8); students’ reports of being called hate-related words and seeing hate-related graffiti (Indicator 9); bullying at school and electronic bullying (Indicator 10); physical fights on school property and anywhere (Indicator 12); students carrying weapons on school property and anywhere and students’ access to firearms (Indicator 13); students’ use of alcohol (Indicator 14); marijuana use and illegal drug availability (Indicator 15); students’ perceptions of personal safety at school and away from school (Indicator 16); students’ reports of avoiding school activities or classes or specific places in school (Indicator 17); students’ reports of safety and security measures observed at school (Indicator 20); criminal incidents at postsecondary institutions (Indicator 21); and hate crime incidents at postsecondary institutions (Indicator 22). In addition, this report includes three spotlight indicators: use, availability, and perceived harmfulness of opioids among youth (Spotlight 1); perceptions of bullying among students who reported being bullied: repetition and power imbalance (Spotlight 2); and active shooter incidents in educational settings (Spotlight 3).
Also included in this year’s report are references to publications relevant to each indicator that the reader may consult for additional information or analyses. These references can be found in the “For more information” sidebars at the bottom of each indicator.
The indicators in this report are based on information drawn from a variety of independent data sources, including national surveys of students, teachers, principals, and postsecondary institutions and universe data collections from federal departments and agencies. The sources include BJS, NCES, the Federal Bureau of Investigation, the Centers for Disease Control and Prevention, the Office of Postsecondary Education, and the National Institute on Drug Abuse of the U.S. Department of Health and Human Services. Each data source has an independent sample design, data collection method, and questionnaire design, or is the result of a universe data collection.
The combination of multiple, independent sources of data provides a broad perspective on school crime and safety that could not be achieved through any single source of information. However, readers should be cautious when comparing data from different sources. While every effort has been made to keep key definitions consistent across indicators, differences in sampling procedures, populations, time periods, and question phrasing can all affect the comparability of results. For example, both Indicators 19 and 20 report data on selected security and safety measures used in schools. Indicator 19 uses data collected from a survey of public school principals about safety and security practices used in their schools during the 2015–16 school year. The schools range from primary through high schools. Indicator 20, however, uses data collected from 12- through 18-year-old students residing in a sample of households. These students were asked whether they observed selected safety and security measures in their school in 2017; however, they may not have known whether, in fact, the security measure was present. In addition, different indicators take different approaches to analyzing school crime data and therefore show different perspectives on school crime. For example, both Indicators 2 and 3 report data on theft and violent victimization at school based on the National Crime Victimization Survey and the School Crime Supplement to that survey, respectively. While Indicator 2 examines the number of incidents of victimization, Indicator 3 examines the percentage or prevalence of students who reported victimization. Finally, some indicators in this report are based on data from different sources than have been used in previous Indicators reports, due to data availability or efforts to improve analytic methodology or comparability. Table A provides a summary of some of the variations in the design and coverage of sample surveys used in this report.
Table A. Design and coverage of sample surveys used in this report
| Survey | Sample | Year of survey | Reference time period | Indicators |
|---|---|---|---|---|
| Campus Safety and Security Survey | All postsecondary institutions that receive Title IV funding | 2001 through 2016 annually | Calendar year | 21, 22 |
| EDFacts | All students in K–12 schools | 2009–10 through 2016–17 annually | Incidents during the school year | 13 |
| Fast Response Survey System (FRSS) | Public primary, middle, and high schools1 | 2013–14 | 2013–14 school year | 6, 7, 19 |
| Monitoring the Future Survey | 8th-, 10th-, and 12th-graders in public and private schools | 1995 through 2017 annually | Drug use in lifetime, during the previous 12 months, and during the previous 30 days | Spotlight 1 |
| National Crime Victimization Survey (NCVS) | Individuals ages 12 or older living in households and group quarters | 1992 through 2017 annually | Interviews conducted during the calendar year2 | 2 |
| National Teacher and Principal Survey (NTPS) | Public school K–12 teachers | 2015–16 | Incidents during the previous 12 months | 5, 11 |
| National Vital Statistics System (NVSS) | Universe | 1992 through 2016 continuous | July 1 through June 30 | 1 |
| School-Associated Violent Death Surveillance System (SAVD-SS) | Universe | 1992 through 2016 continuous | July 1 through June 30 | 1 |
| School Crime Supplement (SCS) to the National Crime Victimization Survey | Students ages 12–18 enrolled in public and private schools during the school year | 1995, 1999, and 2001 through 2017 biennially | Incidents during the previous 6 months | 3 |
| | | | Incidents during the school year3 | 8, 9, 10, 13, 16, 17, 20, Spotlight 2 |
| School Survey on Crime and Safety (SSOCS) | Public primary, middle, and high schools1 | 1999–2000, 2003–04, 2005–06, 2007–08, 2009–10, and 2015–16 | 1999–2000, 2003–04, 2005–06, 2007–08, 2009–10, and 2015–16 school years | 6, 7, 18, 19 |
| Schools and Staffing Survey (SASS) | Public and private school K–12 teachers | 1993–94, 1999–2000, 2003–04, 2007–08, and 2011–12 | Incidents during the previous 12 months | 5, 11 |
| Studies of Active Shooter Incidents | Universe | 2000 through 2017 annually | Calendar year | Spotlight 3 |
| Youth Risk Behavior Surveillance System (YRBSS) | Students enrolled in grades 9–12 in public and private schools at the time of the survey | 1993 through 2017 biennially | Incidents during the previous 12 months | 4, 10, 12 |
| | | | Incidents during the previous 30 days | 13, 14, 15 |
1 Either school principals or the person most knowledgeable about discipline issues at school completed the questionnaire.
2 The NCVS is a self-reported survey that is administered from January to December. Respondents are asked about the number and characteristics of crimes they have experienced during the prior 6 months. Crimes are classified by the year of the survey and not by the year of the crime.
3 For data collections prior to 2007, the reference period was the previous 6 months. The reference period for 2007 and beyond was the school year. Cognitive testing showed that estimates from 2007 and beyond are comparable to previous years. For more information, see appendix A.
Several indicators in this report are based on self-reported survey data. Readers should note that limitations inherent in self-reported data may affect estimates (Addington 2005; Cantor and Lynch 2000). First, unless an interview is “bounded” or a reference period is clearly established, estimates may include events that fall outside the specified reference period; respondents may recall events outside of the given period, artificially inflating reported incidents. Second, many of the surveys rely on the respondent to “self-determine” a condition, allowing the respondent to define a situation based on his or her own interpretation of whether the incident was a crime; a bystander or the perceived offender might not interpret the same situation in the same way. Third, victim surveys tend to treat crime events as incidents that take place at a single point in time, whereas victims may experience an ongoing state of victimization in which they are threatened or victimized regularly or repeatedly. Finally, respondents may recall an event inaccurately, whether by forgetting the event entirely or by misremembering its specifics. These and other factors can affect the precision of the estimates based on these surveys.
Data trends are discussed in this report when possible. Where trends are not discussed, either the data are not available in earlier surveys or the wording of the survey question changed from year to year, making it impossible to discuss any trend. A number of considerations influence the selection of the data years to present in Indicators of School Crime and Safety. Base years for the presentations typically are selected to provide 10 to 20 years of trend data when available. In the case of surveys with long time frames, such as the School Crime Supplement to the National Crime Victimization Survey and the Youth Risk Behavior Survey, a decade’s beginning year (i.e., 2001) often starts the trend line. The narrative for the indicators compares the most recent year’s data with those from the established base year, often including analyses for intervening data points and the immediately preceding survey administration. In the tables for the indicators, data from selected earlier and intervening years are presented with the base year and most recent data to show a more complete trend.
Where data from samples are reported, as is the case with most indicators in this report, a standard error is calculated for each estimate provided in order to determine the “margin of error” for that estimate. The standard errors of the estimates for different subpopulations in an indicator can vary considerably and should be taken into account when making comparisons. In this report, with the exception of Indicator 2, estimates whose standard error was between 30 and 50 percent of the associated estimate are noted with an “!” symbol (Interpret data with caution. The coefficient of variation [CV] for this estimate is between 30 and 50 percent). In Indicator 2, the “!” symbol cautions the reader that the marked estimate was based on 10 or fewer cases or that its coefficient of variation was greater than 50 percent. Again with the exception of Indicator 2, estimates whose standard error was 50 percent or more of the associated estimate were suppressed, with a note stating, “Reporting standards not met. Either there are too few cases for a reliable estimate or the coefficient of variation (CV) is 50 percent or greater.” See appendix A for more information.
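The general flagging rule described above (applied to all indicators except Indicator 2) can be sketched as a small helper function. This is an illustrative sketch of the published thresholds, not NCES production code; the function name is hypothetical.

```python
def flag_estimate(estimate, standard_error):
    """Apply the report's general reliability rule (indicators other than
    Indicator 2): flag with "!" when the coefficient of variation
    (CV = SE / estimate) is between 30 and 50 percent; suppress when the
    CV is 50 percent or greater. Hypothetical helper, not NCES code."""
    if estimate == 0:
        return "suppressed"  # CV undefined; reporting standards not met
    cv = standard_error / estimate
    if cv >= 0.5:
        return "suppressed"  # Reporting standards not met
    elif cv >= 0.3:
        return "!"           # Interpret data with caution
    return ""                # Estimate meets reporting standards

print(flag_estimate(10.0, 3.5))  # CV = 35 percent -> "!"
```

For example, an estimate of 10.0 percent with a standard error of 3.5 has a CV of 35 percent and would carry the “!” flag in the tables.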
The appearance of an “!” symbol (Interpret data with caution) in a table or figure indicates a data cell with a high ratio of standard error to estimate, alerting the reader to use caution when interpreting such data. These estimates are still discussed, however, when statistically significant differences are found despite large standard errors.
Comparisons in the text based on sample survey data have been tested for statistical significance to ensure that the differences are larger than might be expected due to sampling variation. Findings described in this report with comparative language (e.g., higher, lower, increase, and decrease) are statistically significant at the .05 level. Comparisons based on universe data do not require statistical testing, with the exception of linear trends. Several test procedures were used, depending upon the type of data being analyzed and the nature of the comparison being tested. The primary test procedure used in this report was Student’s t statistic, which tests the difference between two sample estimates. The t test formula was not adjusted for multiple comparisons. Linear trend tests were used to examine changes in percentages over a range of values such as time or age. Linear trend tests allow one to examine whether, for example, the percentage of students who reported using drugs increased (or decreased) over time or whether the percentage of students who reported being physically attacked in school increased (or decreased) with age. When differences among percentages were examined relative to a variable with ordinal categories (such as grade), analysis of variance (ANOVA) was used to test for a linear relationship between the two variables. Results of significance testing might differ slightly from those published elsewhere based on differences in how the testing was performed.
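The primary comparison described above, a Student’s t statistic on the difference between two independent sample estimates (unadjusted for multiple comparisons), can be sketched as follows. The estimates and standard errors in the example are hypothetical, and the 1.96 critical value is the usual large-sample cutoff for significance at the .05 level; this is an illustrative sketch, not the exact NCES procedure.

```python
import math

def t_statistic(est1, se1, est2, se2):
    """t statistic for the difference between two independent sample
    estimates: t = (E1 - E2) / sqrt(SE1^2 + SE2^2). Differences with
    |t| > 1.96 are significant at the .05 level for large samples.
    Illustrative sketch; the example values below are hypothetical."""
    return (est1 - est2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical example: 20.8 percent (SE 0.9) vs. 17.9 percent (SE 0.8)
t = t_statistic(20.8, 0.9, 17.9, 0.8)
print(abs(t) > 1.96)  # True: the difference is significant at .05
```

This is why a seemingly large gap between two estimates may not be described with comparative language in the text: when the standard errors are large, the t statistic can fall below the critical value.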
Percentages reported in the tables and figures are generally rounded to one decimal place (e.g., 76.5 percent), while percentages reported in the text are generally rounded from the original number to whole numbers (with any value of 0.50 or above rounded to the next highest whole number). While the data labels on the figures have been rounded to one decimal place, the graphical presentation of these data is based on the unrounded estimates.
Appendix A of this report contains descriptions of all the datasets used in this report and a discussion of how standard errors were calculated for each estimate.
1 Data in this report are not adjusted to reflect the number of hours that youth spend on school property versus the number of hours they spend elsewhere.