Our nation's schools should be safe havens for teaching and learning, free of crime and violence. Any instance of crime or violence at school not only affects the individuals involved but also may disrupt the educational process and affect bystanders, the school itself, and the surrounding community (Henry 2000). For both students and teachers, victimization at school can have lasting effects. In addition to experiencing loneliness, depression, and adjustment difficulties (Crick and Bigbee 1998; Crick and Grotpeter 1996; Nansel et al. 2001; Prinstein, Boergers, and Vernberg 2001; Storch et al. 2003), victimized children are more prone to truancy (Ringwalt, Ennett, and Johnson 2003), poor academic performance (MacMillan and Hagan 2004; Wei and Williams 2004), dropping out of school (Beauvais et al. 1996; MacMillan and Hagan 2004), and violent behaviors (Nansel et al. 2003). For teachers, incidents of victimization may lead to professional disenchantment and even departure from the profession altogether (Karcher 2002; Smith and Smith 2006).
For parents, school staff, and policymakers to effectively address school crime, they need an accurate understanding of the extent, nature, and context of the problem. However, it is difficult to gauge the scope of crime and violence in schools given the large amount of attention devoted to isolated incidents of extreme school violence. Measuring progress toward safer schools requires establishing good indicators of the current state of school crime and safety across the nation and regularly updating and monitoring these indicators; this is the aim of Indicators of School Crime and Safety.
Indicators of School Crime and Safety: 2009 is the twelfth in a series of reports produced since 1998 by the National Center for Education Statistics (NCES) and the Bureau of Justice Statistics (BJS) that present the most recent data available on school crime and student safety. The report is not intended to be an exhaustive compilation of school crime and safety information, nor does it attempt to explore reasons for crime and violence in schools. Rather, it is designed to provide a brief summary of information from an array of data sources and to make data on national school crime and safety accessible to policymakers, educators, parents, and the general public.
Indicators of School Crime and Safety: 2009 is organized into sections that address specific concerns of readers, starting with a description of the most serious violent crimes. The sections cover Violent Deaths; Nonfatal Student and Teacher Victimization; School Environment; Fights, Weapons, and Illegal Substances; Fear and Avoidance; and Discipline, Safety, and Security Measures. Each section contains a set of indicators that, taken together, aim to describe a distinct aspect of school crime and safety. Where available, data on crimes that occur outside of school grounds are offered as a point of comparison. Supplemental tables for each indicator provide more detailed breakouts and standard errors for estimates. A glossary of terms and a reference section appear at the end of the report. Standard errors for the estimate tables are available online.
This year's report contains updated data for 8 indicators: violent deaths (Indicator 1), nonfatal student victimization (Indicator 2), teachers threatened with injury or physically attacked by students (Indicator 5), violent and other crime incidents at public schools and those reported to the police (Indicator 6), discipline problems reported by public schools (Indicator 7), teachers' reports on school conditions (Indicator 12), serious disciplinary actions taken by public schools (Indicator 19), and safety and security measures taken by public schools (Indicator 20).
Also found in this year's report are references to recent publications relevant to each indicator that the reader may want to consult for additional information or analyses. These references can be found in the "For more information" sidebars at the bottom of each indicator.
The indicators in this report are based on information drawn from a variety of independent data sources, including national surveys of students, teachers, and principals and universe data collections from federal departments and agencies, including BJS, NCES, the Federal Bureau of Investigation, and the Centers for Disease Control and Prevention. Each data source has an independent sample design, data collection method, and questionnaire design, or is the result of a universe data collection.
The combination of multiple, independent sources of data provides a broad perspective on school crime and safety that could not be achieved through any single source of information. However, readers should be cautious when comparing data from different sources. While every effort has been made to keep key definitions consistent across indicators, differences in sampling procedures, populations, time periods, and question phrasing can all affect the comparability of results. For example, both Indicators 20 and 21 report data on select security and safety measures used in schools. Indicator 20 uses data collected from a survey of public school principals about safety and security practices used in their schools during the 2007-08 school year. The schools range from primary through high schools. Indicator 21, however, uses data collected from 12- through 18-year-old students residing in a sample of households. These students were asked whether they observed selected safety and security measures in their school in 2007, but they may not have known whether, in fact, the security measure was present. In addition, different indicators contain various approaches to the analysis of school crime data and, therefore, will show different perspectives on school crime. For example, both Indicators 2 and 3 report data on theft and violent crime at school based on the National Crime Victimization Survey and the School Crime Supplement to that survey, respectively. While Indicator 2 examines the number of incidents of crime, Indicator 3 examines the percentage or prevalence of students who reported victimization. Figure A provides a summary of some of the variations in the design and coverage of sample surveys used in this report.
Several indicators in this report are based on self-reported survey data. Readers should note that limitations inherent to self-reported data may affect estimates (Addington 2005; Cantor and Lynch 2000). First, unless an interview is "bounded" or a reference period is established, estimates may include events that exceed the scope of the specified reference period. This factor may artificially increase reported incidents because respondents may recall events outside of the given reference period. Second, many of the surveys rely on the respondent to "self-determine" a condition. This factor allows the respondent to define a situation based upon his or her own interpretation of whether the incident was a crime or not. However, a bystander or the perceived offender may not interpret the same situation in the same way. Third, victim surveys tend to emphasize crime events as incidents that take place at one point in time. However, victims can often experience a state of victimization in which they are threatened or victimized regularly or repeatedly. Finally, respondents may recall an event inaccurately. For instance, people may forget the event entirely or recall the specifics of the episode incorrectly. These and other factors may affect the precision of the estimates based on these surveys.
Data trends are discussed in this report when possible. Where trends are not discussed, either the data are not available in earlier surveys or the wording of the survey question changed from year to year, eliminating the ability to discuss any trend. For example, in Indicator 11, which reports on bullying using data from the School Crime Supplement survey, the 2007 questionnaire was revised to include information on cyber-bullying. Because of this change, the indicator no longer presents trend information.
Where data from samples are reported, as is the case with most of the indicators in this report, the standard error is calculated for each estimate provided in order to determine the "margin of error" for these estimates. The standard errors of the estimates for different subpopulations in an indicator can vary considerably and should be taken into account when making comparisons. Throughout this report, in cases where the standard error was at least 30 percent of the associated estimate, the estimates were noted with a "!" symbol (interpret data with caution). In cases where the standard error was greater than 50 percent of the associated estimate, the estimate was suppressed. See appendix A for more information.
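The flagging and suppression rules above can be sketched in a few lines. The function name and return conventions below are illustrative, not taken from the report; the thresholds (a standard error of at least 30 percent of the estimate triggers the "!" caution flag, and one greater than 50 percent triggers suppression) are those stated in the text.

```python
def flag_estimate(estimate, standard_error):
    """Apply the report's reporting rules to a sample estimate.

    Returns the estimate as a string, appends "!" when the standard
    error is at least 30 percent of the estimate (interpret with
    caution), and returns None when it exceeds 50 percent (suppressed).
    Hypothetical helper for illustration only.
    """
    cv = standard_error / estimate  # relative size of the standard error
    if cv > 0.50:
        return None                 # estimate suppressed
    if cv >= 0.30:
        return f"{estimate}!"       # flagged: interpret with caution
    return str(estimate)
```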
The comparisons in the text have been tested for statistical significance to ensure that the differences are larger than might be expected due to sampling variation. Unless otherwise noted, all statements cited in the report are statistically significant at the .05 level. Several test procedures were used, depending upon the type of data being analyzed and the nature of the statement being tested. The primary test procedure used in this report was Student's t statistic, which tests the difference between two sample estimates. The t test formula was not adjusted for multiple comparisons. Linear trend tests were used when differences among percentages were examined relative to interval categories of a variable, rather than the differences between two discrete categories. This test allows one to examine whether, for example, the percentage of students who reported using drugs increased (or decreased) over time or whether the percentage of students who reported being physically attacked in school increased (or decreased) with age. When differences among percentages were examined relative to a variable with ordinal categories (such as grade), analysis of variance (ANOVA) was used to test for a linear relationship between the two variables.
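The primary test described above, Student's t for the difference between two independent sample estimates, can be sketched as follows. The percentages and standard errors in the usage line are hypothetical, and for large samples a difference is significant at the .05 level when |t| exceeds roughly 1.96.

```python
import math

def two_sample_t(est1, se1, est2, se2):
    """Student's t statistic for the difference between two independent
    sample estimates, given their standard errors.  Illustrative
    sketch; the report's procedure is described in its appendix A.
    """
    return (est1 - est2) / math.sqrt(se1**2 + se2**2)

# Hypothetical example: 28.0 percent (SE 1.2) vs. 24.0 percent (SE 1.1)
t = two_sample_t(28.0, 1.2, 24.0, 1.1)
significant = abs(t) > 1.96  # .05 level, large-sample approximation
```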
Although percentages reported in the tables are generally rounded to one decimal place (e.g., 76.5 percent), percentages reported in the text and figures are generally rounded from the original number to whole numbers (with any value of 0.50 or above rounded up to the next whole number). While the data labels on the figures have been rounded to whole numbers, the graphical presentation of these data is based on the unrounded estimates shown in the corresponding table.
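The rounding convention above (values of .50 and above go up to the next whole number) differs from Python's built-in round(), which rounds exact halves to the nearest even number. A minimal sketch of the report's convention, using the standard library's decimal module:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value):
    """Round to a whole number with .50 and above rounding up,
    matching the report's text-and-figure convention.  Illustrative
    helper; not code from the report.
    """
    return int(Decimal(str(value)).quantize(Decimal("1"),
                                            rounding=ROUND_HALF_UP))
```

Note that Python's built-in round(76.5) yields 76 (half-to-even), while the convention here yields 77.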
Appendix A of this report contains descriptions of all the datasets used in this report and a discussion of how standard errors were calculated for each estimate.