Our nation's schools should be a safe haven for teaching and learning free of crime and violence. Even though students are less likely to be victims of a violent crime at school1 than away from school (Indicators 1 and 2), any instance of crime or violence at school not only affects the individuals involved but also may disrupt the educational process and affect bystanders, the school itself, and the surrounding community (Henry 2000). For both students and teachers, victimization at school can have lasting effects. In addition to experiencing loneliness, depression, and adjustment difficulties (Crick and Bigbee 1998; Crick and Grotpeter 1996; Nansel et al. 2001; Prinstein, Boergers, and Vernberg 2001; Storch et al. 2003), victimized children are more prone to truancy (Ringwalt, Ennett, and Johnson 2003), poor academic performance (Wei and Williams 2004), dropping out of school (Beauvais et al. 1996), and violent behaviors (Nansel et al. 2003). For teachers, incidents of victimization may lead to professional disenchantment and even departure from the profession altogether (Karcher 2002).
For parents, school staff, and policymakers to effectively address school crime, they need an accurate understanding of the extent, nature, and context of the problem. However, it is difficult to gauge the scope of crime and violence in schools given the large amount of attention devoted to isolated incidents of extreme school violence. Measuring progress toward safer schools requires establishing good indicators of the current state of school crime and safety across the nation and regularly updating and monitoring these indicators; this is the aim of Indicators of School Crime and Safety.
Indicators of School Crime and Safety: 2007 is the tenth in a series of reports produced by the National Center for Education Statistics (NCES) and the Bureau of Justice Statistics (BJS) since 1998 that present the most recent data available on school crime and student safety. The report is not intended to be an exhaustive compilation of school crime and safety information, nor does it attempt to explore reasons for crime and violence in schools. Rather, it is designed to provide a brief summary of information from an array of data sources and to make data on national school crime and safety accessible to policymakers, educators, parents, and the general public.
Indicators of School Crime and Safety: 2007 is organized into sections that delineate specific concerns to readers, starting with a description of the most serious violent crimes. The sections cover Violent Deaths; Nonfatal Student and Teacher Victimization; School Environment; Fights, Weapons, and Illegal Substances; Fear and Avoidance; and Discipline, Safety, and Security Measures. Each section contains a set of indicators that, taken together, aim to describe a distinct aspect of school crime and safety. Where available, data on crimes that occur outside of school grounds are offered as a point of comparison.2 Supplemental tables for each indicator provide more detailed breakouts and standard errors for estimates. A glossary of terms and a references section appear at the end of the report.
This year's report contains updated data on violent deaths (Indicator 1), nonfatal student victimization (Indicator 2), public school reports of selected crimes (Indicator 6), discipline problems (Indicator 7), serious disciplinary actions (Indicator 19), and safety and security measures (Indicator 20). A new classification scheme for school level has been applied to the most recent data available on teachers who were threatened with injury or physically attacked in Indicator 5. In addition, one new indicator appears in this year's report: Indicator 12 summarizes teachers' reports of the conditions at their schools, including student misbehavior, tardiness, and class cutting and school rule enforcement by other teachers and principals.
Also found in this year's report are references to recent publications relevant to each indicator that the reader may want to consult for additional information or analyses. These references can be found in the "For more information" sidebars at the bottom of each indicator.
The indicators in this report are based on information drawn from a variety of independent data sources, including national surveys of students, teachers, and principals and universe data collections from federal departments and agencies, including BJS, NCES, the Federal Bureau of Investigation, and the Centers for Disease Control and Prevention. Each data source has an independent sample design, data collection method, and questionnaire design or is the result of a universe data collection.
The combination of multiple, independent sources of data provides a broad perspective on school crime and safety that could not be achieved through any single source of information. However, readers should be cautious when comparing data from different sources. While every effort has been made to keep key definitions consistent across indicators, differences in sampling procedures, populations, time periods, and question phrasing can all affect the comparability of results. For example, both Indicators 20 and 21 report data on selected security and safety measures used in schools. Indicator 20 uses data collected from a sample of principals about safety and security practices used in their schools during the 2005–06 school year. Indicator 21, however, uses data collected from 12- through 18-year-olds residing in a sample of households. These students were asked whether they observed selected safety and security measures in their school in 2005, but they may not have known if, in fact, the security measure was present. In addition, different indicators take different analytical approaches to school crime data and therefore offer different perspectives on school crime. For example, both Indicators 2 and 3 report data on theft and violent crime at school, based on the National Crime Victimization Survey and the School Crime Supplement to that survey, respectively. While Indicator 2 examines the number of incidents of crime, Indicator 3 examines the percentage, or prevalence, of students who reported victimization. Figure A provides a summary of some of the variations in the design and coverage of sample surveys used in this report.
Several indicators in this report are based on self-reported survey data. Readers should note that limitations inherent to self-reported data may affect estimates (Cantor and Lynch 2000). First, unless an interview is "bounded" or a reference period is established, estimates may include events that fall outside the specified reference period. This factor may artificially inflate reports because respondents may recall events from outside the given reference period. Second, many of the surveys rely on the respondent to "self-determine" a condition. This factor allows the respondent to define a situation based upon his or her own interpretation of whether the incident was a crime; the same situation may not be interpreted in the same way by a bystander or by the perceived offender. Third, victim surveys tend to treat crime events as incidents that take place at one point in time. However, victims can often experience a state of victimization in which they are threatened or victimized regularly or repeatedly. Finally, respondents may recall an event inaccurately. For instance, people may forget the event entirely or recall the specifics of the episode incorrectly. These and other factors may affect the precision of the estimates based on these surveys.
Data trends are discussed in this report when possible. Where trends are not discussed, either the data are not available in earlier surveys or the wording of the survey question changed from year to year, making trend comparisons impossible. Where data from samples are reported, as is the case with most of the indicators in this report, the standard error is calculated for each estimate provided in order to determine its margin of error. The standard errors of the estimates for different subpopulations in an indicator can vary considerably and should be taken into account when making comparisons. Throughout this report, in cases where the standard error was at least 30 percent of the associated estimate, the estimate is marked with a "!" symbol (interpret data with caution). In cases where the standard error was greater than 50 percent of the associated estimate, the estimate was suppressed. See appendix A for more information.
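The flagging rule described above amounts to a threshold on the relative standard error (the standard error divided by the estimate, sometimes called the coefficient of variation). The following is an illustrative sketch only; the function name and return conventions are assumptions for this example, not part of the report's methodology:

```python
def flag_estimate(estimate, standard_error):
    """Apply the report's reliability rule to a single estimate.

    Returns the estimate as a string, appending "!" when the standard
    error is at least 30 percent of the estimate (interpret with
    caution), and returns None when it exceeds 50 percent (suppressed).
    """
    if estimate == 0:
        return None  # relative standard error is undefined for a zero estimate
    rse = standard_error / estimate  # relative standard error
    if rse > 0.50:
        return None               # suppressed as too unreliable to report
    if rse >= 0.30:
        return f"{estimate}!"     # flagged: interpret data with caution
    return str(estimate)
```

For example, an estimate of 10.0 with a standard error of 3.5 (35 percent) would be flagged, while the same estimate with a standard error of 6.0 (60 percent) would be suppressed.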
The comparisons in the text have been tested for statistical significance to ensure that the differences are larger than might be expected due to sampling variation. Unless otherwise noted, all statements cited in the report are statistically significant at the .05 level. Several test procedures were used, depending upon the type of data being analyzed and the nature of the statement being tested. The primary test procedure used in this report was the Student's t statistic, which tests the difference between two sample estimates. When differences among percentages were examined relative to ordered categories of a variable (such as grade or year), rather than between two discrete categories, analysis of variance (ANOVA) was used to test for a linear relationship between the two variables. Such linear trend tests allow one to examine, for example, whether the percentage of students who reported using drugs increased (or decreased) over time or whether the percentage of students who reported being physically attacked at school increased (or decreased) with age.
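For two independent sample estimates, the Student's t statistic described above is conventionally computed as the difference between the estimates divided by the square root of the sum of their squared standard errors. A minimal sketch of that calculation, assuming independent estimates and a large-sample critical value of 1.96 for the .05 level (the report's own procedures may differ in detail, for example when estimates come from overlapping samples):

```python
import math

def t_statistic(est1, se1, est2, se2):
    """Student's t statistic for the difference between two
    independent sample estimates with standard errors se1 and se2."""
    return (est1 - est2) / math.sqrt(se1 ** 2 + se2 ** 2)

def is_significant(est1, se1, est2, se2, critical_value=1.96):
    """True when the difference is statistically significant at
    roughly the .05 level for large samples (|t| > 1.96)."""
    return abs(t_statistic(est1, se1, est2, se2)) > critical_value
```

For instance, estimates of 10.0 and 7.0 percent, each with a standard error of 1.0, yield t ≈ 2.12 and would be reported as significantly different, whereas 10.0 versus 9.0 percent with the same standard errors (t ≈ 0.71) would not.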
Appendix A contains descriptions of all the datasets used in this report and a discussion of how standard errors were calculated for each estimate.