This section briefly describes each of the datasets used in this report: the School-Associated Violent Deaths Study, the Supplementary Homicide Reports, the Web-based Injury Statistics Query and Reporting System Fatal, the National Crime Victimization Survey, the School Crime Supplement to the National Crime Victimization Survey, the Youth Risk Behavior Survey, the Schools and Staffing Survey, and the School Survey on Crime and Safety. Directions for obtaining more information are provided at the end of each description.
The School-Associated Violent Deaths Study (SAVD) is an epidemiological study developed by the Centers for Disease Control and Prevention in conjunction with the U.S. Department of Education and the U.S. Department of Justice. SAVD seeks to describe the epidemiology of school-associated violent deaths, identify common features of these deaths, estimate the rate of school-associated violent death in the United States, and identify potential risk factors for these deaths. The study includes descriptive data on all school-associated violent deaths in the United States, including all homicides, suicides, and deaths by legal intervention in which the fatal injury occurred on the campus of a functioning elementary or secondary school; while the victim was on the way to or from regular sessions at such a school; or while the victim was attending or traveling to or from an official school-sponsored event. Victims of such incidents include students and staff members as well as nonstudents. SAVD includes descriptive information about the school, event, victim(s), and offender(s). The SAVD study has collected data from July 1, 1992, through the present.
SAVD uses a four-step process to identify and collect data on school-associated violent deaths. Cases are initially identified through a search of the LexisNexis newspaper and media database. Then law enforcement officials are contacted to confirm the details of the case and to determine if the event meets the case definition. Once a case is confirmed, a law enforcement official and a school official are interviewed regarding details about the school, event, victim(s), and offender(s). A copy of the full law enforcement report is also sought for each case. The information obtained on schools includes school demographics, attendance/absentee rates, suspensions/expulsions and mobility, school history of weapon-carrying incidents, security measures, violence prevention activities, school response to the event, and school policies about weapon carrying. Event information includes the location of injury, the context of injury (while classes were being held, during break, etc.), motives for injury, method of injury, and school and community events happening around the time period. Information obtained on victim(s) and offender(s) includes demographics, circumstances of the event (date/time, alcohol or drug use, number of persons involved), types and origins of weapons, criminal history, psychological risk factors, school-related problems, extracurricular activities, and family history, including structure and stressors.
One hundred and five school-associated violent deaths were identified from July 1, 1992, to June 30, 1994 (Kachur et al. 1996). A more recent report from this data collection identified 253 school-associated violent deaths between July 1, 1994, and June 30, 1999 (Anderson et al. 2001). Other publications from this study have described how the number of events changes during the school year (Centers for Disease Control and Prevention 2001), the source of the firearms used in these events (Reza et al. 2003), and suicides that were associated with schools (Kauffman et al. 2004). The most recent publication describes trends in school-associated homicide from July 1, 1992, to June 30, 2006 (Centers for Disease Control and Prevention 2008). The interviews conducted on cases between July 1, 1994, and June 30, 1999, achieved a response rate of 97 percent for police officials and 78 percent for school officials. For several reasons, all data from 1999 to the present are flagged as preliminary. For some recent cases, the interviews with school and law enforcement officials to verify case details have not been completed, and the details learned during those interviews can occasionally change the classification of a case. New cases may also be identified as the scope of the media files used for case identification expands. Cases from earlier data years are sometimes discovered through independent case-finding efforts (which focus on nonmedia sources of information), and other cases may occasionally be identified while the law enforcement and school interviews are being conducted to verify known cases. For additional information about SAVD, contact:
Division of Violence Prevention
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
4770 Buford Highway NE
Atlanta, GA 30341-3742
Telephone: (770) 488-4648
The Supplementary Homicide Reports (SHR), which are a part of the Uniform Crime Reporting (UCR) program, provide incident-level information on criminal homicides, including situation (number of victims to number of offenders); the age, sex, and race of victims and known offenders; types of weapons used; circumstances of the incident; and the relationship of the victim to the offender. The data are provided monthly to the Federal Bureau of Investigation (FBI) by local law enforcement agencies participating in the FBI's UCR program. The data include murders and nonnegligent manslaughters in the United States from January 1980 to December 2010; that is, negligent manslaughters and justifiable homicides have been eliminated from the data. Based on law enforcement agency reports, the FBI estimates that 596,456 murders (including non-negligent manslaughters) were committed from 1980 to 2010. Agencies provided detailed information on 535,688 of these homicide victims.
About 90 percent of homicides are included in the SHR; adjustments to the weights correct for the missing victim reports. Estimates from the SHR used in this report were generated by the Bureau of Justice Statistics (BJS) using a weight developed by BJS that reconciles the counts of SHR homicide victims with those in the UCR. The weight, which is the same for all cases in a given year, represents the ratio of the number of homicides reported in the UCR to the number reported in the SHR. For additional information about SHR, contact:
Criminal Justice Information Services Division
Federal Bureau of Investigation
1000 Custer Hollow Road
Clarksburg, WV 26306
Telephone: (304) 625-4995
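The per-year BJS adjustment described above amounts to a simple ratio weight. A minimal sketch in Python (the counts below are hypothetical placeholders, not the published UCR/SHR totals):

```python
# Hypothetical counts for illustration only; published totals differ.
ucr_homicides = {2010: 14000}  # homicides estimated in the UCR for the year
shr_victims = {2010: 12600}    # victims with detailed SHR reports that year

def shr_weight(year):
    """One weight per year: the ratio of UCR homicides to SHR victims."""
    return ucr_homicides[year] / shr_victims[year]

# Every SHR record from a given year receives the same weight, so the
# weighted victim count reconciles with the UCR total:
weighted_total = shr_weight(2010) * shr_victims[2010]
```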
The Web-based Injury Statistics Query and Reporting System (WISQARS) Fatal provides mortality data related to injury. The mortality data reported in WISQARS Fatal come from death certificate data reported to the National Center for Health Statistics (NCHS), Centers for Disease Control and Prevention. Data include causes of death reported by attending physicians, medical examiners, and coroners, as well as demographic information about decedents reported by funeral directors, who obtain that information from family members and other informants. NCHS collects, compiles, verifies, and prepares these data for release to the public. The data provide information about what types of injuries are leading causes of death, how common they are, and whom they affect. These data are intended for a broad audience (the public, the media, public health practitioners and researchers, and public health officials) to increase their knowledge of injury.
WISQARS Fatal mortality reports provide tables of the total numbers of injury-related deaths and the death rates per 100,000 U.S. population. The reports list deaths according to cause (mechanism) and intent (manner) of injury by state, race, Hispanic origin, sex, and age groupings. For more information on WISQARS Fatal, contact:
National Center for Injury Prevention and Control
4770 Buford Highway NE
Atlanta, GA 30341-3724
Telephone: (770) 488-1506
The National Crime Victimization Survey (NCVS), administered for the U.S. Bureau of Justice Statistics by the U.S. Census Bureau, is the nation's primary source of information on crime and the victims of crime. Initiated in 1972 and redesigned in 1992, the NCVS collects detailed information on the frequency and nature of the crimes of rape, sexual assault, robbery, aggravated and simple assault, theft, household burglary, and motor vehicle theft experienced by Americans and their households each year. The survey measures both crimes reported to police and crimes not reported to the police.
NCVS estimates in this report may differ from previously published reports because a small number of victimizations, referred to as series victimizations, are counted using a new strategy. High-frequency repeat victimizations, or series victimizations, are six or more similar but separate victimizations that occur with such frequency that the victim is unable to recall each individual event or describe each event in detail. As part of ongoing research efforts associated with the redesign of the NCVS, BJS investigated ways to include series victimizations in estimates of criminal victimization, since including them yields a more accurate estimate of victimization. BJS has decided to include series victimizations using the victim's estimate of the number of times the victimizations occurred over the past 6 months, capping the number of victimizations within each series at a maximum of 10. This strategy balances the desire to estimate national rates and account for the experiences of persons with repeat victimizations while noting that some estimation error exists in the number of times these victimizations occurred. Including series victimizations in national rates results in rather large increases in the level of violent victimization; however, trends in violence are generally similar regardless of whether series victimizations are included. For more information on the new counting strategy and supporting research, see Methods for Counting High-Frequency Repeat Victimizations in the National Crime Victimization Survey at http://bjs.ojp.usdoj.gov/content/pub/pdf/mchfrv.pdf.
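The capping rule for series victimizations can be sketched as follows (an illustrative sketch of the counting logic, not BJS production code):

```python
def series_count(victim_estimate, cap=10):
    """Count a series victimization using the victim's own estimate of
    how many times it occurred in the 6-month reference period,
    capped at a maximum of 10 per series."""
    return min(victim_estimate, cap)

# A victim reporting 15 similar incidents contributes 10 to the estimates;
# a victim reporting 7 contributes 7.
```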
Readers should note that in 2003, in accordance with changes to the Office of Management and Budget's standards for the classification of federal data on race and ethnicity, the NCVS item on race/ethnicity was modified. A question on Hispanic origin is followed by a question on race. The new question about race allows the respondent to choose more than one race and delineates Asian as a separate category from Native Hawaiian or Other Pacific Islander. Analysis conducted by the Demographic Surveys Division at the U.S. Census Bureau showed that the new question had very little impact on the aggregate racial distribution of the NCVS respondents, with one exception. There was a 1.6 percentage point decrease in the percentage of respondents who reported themselves as White. Due to changes in race/ethnicity categories, comparisons of race/ethnicity across years should be made with caution.
There were changes in the sample design and survey methodology in the 2006 NCVS that may have affected survey estimates. Caution should be used when comparing 2006 estimates to other years. Data from 2007 onward are comparable to earlier years. Analyses of the 2007 estimates indicate that the program changes made in 2006 had relatively small effects on NCVS changes. For more information on the 2006 NCVS data, see Criminal Victimization, 2006 at http://bjs.ojp.usdoj.gov/content/pub/pdf/cv06.pdf, the technical notes at http://bjs.ojp.usdoj.gov/content/pub/pdf/cv06tn.pdf, and Criminal Victimization, 2007 at http://bjs.ojp.usdoj.gov/content/pub/pdf/cv07.pdf.
The number of NCVS-eligible households in the sample in 2011 was about 89,000. They were selected using a stratified, multistage cluster design. In the first stage, primary sampling units (PSUs), consisting of counties or groups of counties, were selected. In the second stage, smaller areas, called Enumeration Districts (EDs), were selected from each sampled PSU. Finally, from selected EDs, clusters of four households, called segments, were selected for interview. At each stage, the selection was done proportionate to population size in order to create a self-weighting sample. The final sample was augmented to account for households constructed after the decennial Census. Within each sampled household, U.S. Census Bureau personnel attempt to interview all household members age 12 and older to determine whether they had been victimized by the measured crimes during the 6 months preceding the interview.
The first NCVS interview with a housing unit is conducted in person; subsequent interviews are conducted by telephone, if possible. About 72,000 persons age 12 and older are interviewed every 6 months. Households remain in the sample for 3 years and are interviewed seven times at 6-month intervals. Since the survey's inception, the initial interview at each sample unit had been used only to bound future interviews, establishing a time frame that avoids duplicate counting of crimes in subsequent interviews. Beginning in 2006, data from the initial interview have been adjusted to account for the effects of bounding and included in the survey estimates. After their seventh interview, households are replaced by new sample households. The NCVS has consistently obtained a response rate of over 90 percent at the household level. The completion rate for persons within households in 2011 was about 88 percent. Weights were developed to permit estimates for the total U.S. population 12 years and older. For more information about the NCVS, contact:
Created as a supplement to the NCVS and co-designed by the National Center for Education Statistics and Bureau of Justice Statistics, the School Crime Supplement (SCS) survey has been conducted in 1989, 1995, and biennially since 1999 to collect additional information about school-related victimizations on a national level. This report includes data from the 1995, 1999, 2001, 2003, 2005, 2007, 2009, and 2011 collections. The 1989 data are not included in this report as a result of methodological changes to the NCVS and SCS. The SCS was designed to assist policymakers, as well as academic researchers and practitioners at federal, state, and local levels, to make informed decisions concerning crime in schools. The survey asks students a number of key questions about their experiences with and perceptions of crime and violence that occurred inside their school, on school grounds, on the school bus, or on the way to or from school. Students are asked additional questions about security measures used by their school, students' participation in after-school activities, students' perceptions of school rules, the presence of weapons and gangs in school, the presence of hate-related words and graffiti in school, student reports of bullying and reports of rejection at school, and the availability of drugs and alcohol in school. Students are also asked attitudinal questions relating to fear of victimization and avoidance behavior at school.
The SCS survey was conducted for a 6-month period from January through June in all households selected for the NCVS (see discussion above for information about the NCVS sampling design and changes to the race/ethnicity variable beginning in 2003). Within these households, the eligible respondents for the SCS were those household members who had attended school at any time during the 6 months preceding the interview, were enrolled in grades 6–12, and were not home schooled. In 2007, the questionnaire was changed and household members who attended school sometime during the school year of the interview were included. The age range of students covered in this report is 12–18 years of age. Eligible respondents were asked the supplemental questions in the SCS only after completing their entire NCVS interview. It should be noted that the first or unbounded NCVS interview has always been included in analysis of the SCS data and may result in the reporting of events outside of the requested reference period.
The prevalence of victimization for 1995, 1999, 2001, 2003, 2005, 2007, 2009, and 2011 was calculated by using NCVS incident variables appended to the SCS data files of the same year. The NCVS type of crime variable was used to classify victimizations of students in the SCS as serious violent, violent, or theft. The NCVS variables asking where the incident happened (at school) and what the victim was doing when it happened (attending school or on the way to or from school) were used to ascertain whether the incident happened at school. Only incidents that occurred inside the United States are included.
In 2001, the SCS survey instrument was modified from previous collections. First, in 1995 and 1999, “at school” was defined for respondents as in the school building, on the school grounds, or on a school bus. In 2001, the definition for “at school” was changed to mean in the school building, on school property, on a school bus, or going to and from school. This change was made to the 2001 questionnaire in order to be consistent with the definition of “at school” as it is constructed in the NCVS and was also used as the definition in subsequent SCS collections. Cognitive interviews conducted by the U.S. Census Bureau on the 1999 SCS suggested that modifications to the definition of “at school” would not have a substantial impact on the estimates.
A total of 9,700 students participated in the 1995 SCS, 8,400 in 1999, about 8,400 in 2001, about 7,200 in 2003, about 6,300 in 2005, about 5,600 in 2007, 5,000 in 2009, and 6,500 in 2011. In the 2011 SCS, the household completion rate was 91 percent.
In the 1995, 1999, 2001, 2003, 2005, 2007, and 2009 SCS, the household completion rates were 95 percent, 94 percent, 93 percent, 92 percent, 91 percent, 90 percent, and 92 percent respectively, and the student completion rates were 78 percent, 78 percent, 77 percent, 70 percent, 62 percent, 58 percent, and 56 percent respectively. For the 2011 SCS, the student completion rate was 63 percent. The overall unweighted SCS unit response rate (calculated by multiplying the household completion rate by the student completion rate) was 74 percent in 1995, about 73 percent in 1999, about 72 percent in 2001, about 64 percent in 2003, about 56 percent in 2005, about 53 percent in 2007, 51 percent in 2009, and 57 percent in 2011.
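The overall unweighted unit response rates above are simply the product of the two completion rates. A quick sketch of the arithmetic, using the 2011 figures from the text:

```python
def overall_unit_response_rate(household_rate, student_rate):
    """SCS overall unweighted unit response rate: the household
    completion rate multiplied by the student completion rate."""
    return household_rate * student_rate

# 2011 SCS: 91 percent household x 63 percent student = about 57 percent
rate_2011 = overall_unit_response_rate(0.91, 0.63)
```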
There are two types of nonresponse: unit and item nonresponse. NCES requires that any stage of data collection within a survey that has a unit base-weighted response rate of less than 85 percent be evaluated for the potential magnitude of unit nonresponse bias before the data or any analysis using the data may be released (U.S. Department of Education 2003). Due to the low unit response rate in 2005, 2007, and 2009, a unit nonresponse bias analysis was done. Unit response rates indicate how many sampled units have completed interviews. Because interviews with students could only be completed after households had responded to the NCVS, the unit completion rate for the SCS reflects both the household interview completion rate and the student interview completion rate. Nonresponse can greatly affect the strength and application of survey data by leading to an increase in variance as a result of a reduction in the actual size of the sample and can produce bias if the nonrespondents have characteristics of interest that are different from the respondents.
In order for response bias to occur, respondents must have different response rates and responses to particular survey variables. The magnitude of unit nonresponse bias is determined by the response rate and the differences between respondents and nonrespondents on key survey variables. Although the bias analysis cannot measure response bias directly, since the SCS is a sample survey and it is not known how the population would have responded, the SCS sampling frame has four key student or school characteristic variables for which data are known for both respondents and nonrespondents: sex, race/ethnicity, household income, and urbanicity, all of which are associated with student victimization. To the extent that there are differential responses by respondents in these groups, nonresponse bias is a concern.
In 2005, the analysis of unit nonresponse bias found evidence of bias for the race, household income, and urbanicity variables. White (non-Hispanic) and Other (non-Hispanic) respondents had higher response rates than Black (non-Hispanic) and Hispanic respondents. Respondents from households with an income of $35,000–$49,999 or $50,000 or more had higher response rates than those from households with incomes of less than $7,500, $7,500–$14,999, $15,000–$24,999, and $25,000–$34,999. Respondents living in urban areas had lower response rates than those living in rural or suburban areas. Although the extent of nonresponse bias cannot be determined, weighting adjustments, which corrected for differential response rates, should have reduced the problem.
In 2007, the analysis of unit nonresponse bias found evidence of bias by the race/ethnicity and household income variables. Hispanic respondents had lower response rates than other race/ethnicities.
Respondents from households with an income of $25,000 or more had higher response rates than those from households with incomes of less than $25,000. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting the nonresponse bias has little impact on the overall estimates.
In 2009, the analysis of unit nonresponse bias found evidence of potential bias for the race/ethnicity and urbanicity variables. White students and students of other race/ethnicities had higher response rates than did Black and Hispanic respondents. Respondents from households located in rural areas had higher response rates than those from households located in urban areas. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting the nonresponse bias has little impact on the overall estimates. All analyses for this report are conducted with weighted survey data so that estimates are representative of the population. Weighting the data adjusts for unequal selection probabilities and for the effects of nonresponse.
Response rates for most SCS survey items in all survey years were high (typically over 97 percent of all eligible respondents), meaning there is little potential for item nonresponse bias for most items in the survey. Weights were developed to compensate for differential probabilities of selection and nonresponse. The weighted data permit inferences about the eligible student population who were enrolled in schools in all SCS data years. For more information about SCS, contact:
Kathryn A. Chandler
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
Telephone: (202) 502-7486
E-mail: firstname.lastname@example.org
Internet: http://nces.ed.gov/programs/crime
The Youth Risk Behavior Surveillance System (YRBSS) is an epidemiological surveillance system developed by the Centers for Disease Control and Prevention (CDC) to monitor the prevalence of youth behaviors that most influence health. The YRBSS focuses on priority health-risk behaviors established during youth that result in the most significant mortality, morbidity, disability, and social problems during both youth and adulthood. The YRBSS includes a national school-based Youth Risk Behavior Survey (YRBS) as well as surveys conducted in states and large urban school districts. This report uses 1993, 1995, 1997, 1999, 2001, 2003, 2005, 2007, 2009, and 2011 YRBSS data.
The national YRBS uses a three-stage cluster sampling design to produce a nationally representative sample of students in grades 9–12 in the United States. The target population consisted of all public and private school students in grades 9–12 in the 50 states and the District of Columbia. In the first stage, primary sampling units (PSUs) were selected from strata formed on the basis of urbanization and the relative percentage of Black and Hispanic students in the PSU. These PSUs are either counties; subareas of large counties; or groups of smaller, adjacent counties. At the second stage, schools were selected with probability proportional to school enrollment size.
The final stage of sampling consisted of randomly selecting, in each chosen school and in each of grades 9–12, one or two classrooms from either a required subject, such as English or social studies, or a required period, such as homeroom or second period. All students in selected classes were eligible to participate. Three strategies were used to oversample Black and Hispanic students: (1) larger sampling rates were used to select PSUs that are in high-Black and high-Hispanic strata; (2) a modified measure of size was used that increased the probability of selecting schools with a disproportionately high minority enrollment; and (3) two classes per grade, rather than one, were selected in schools with a high percentage of combined Black, Hispanic, Asian/Pacific Islander, or American Indian/Alaska Native enrollment. Approximately 16,300 students participated in the 1993 survey, 10,900 students participated in the 1995 survey, 16,300 students participated in the 1997 survey, 15,300 students participated in the 1999 survey, 13,600 students participated in the 2001 survey, 15,200 students participated in the 2003 survey, 13,900 students participated in the 2005 survey, 14,000 students participated in the 2007 survey, 16,400 students participated in the 2009 survey, and 15,400 participated in the 2011 survey.
The overall response rate was 70 percent for the 1993 survey, 60 percent for the 1995 survey, 69 percent for the 1997 survey, 66 percent for the 1999 survey, 63 percent for the 2001 survey, 67 percent for the 2003 survey, 67 percent for the 2005 survey, 68 percent for the 2007 survey, 71 percent for the 2009 survey, and 71 percent for the 2011 survey. NCES standards call for response rates of 85 percent or better for cross-sectional surveys, and bias analyses are required by NCES when that percentage is not achieved. For YRBS data, a full nonresponse bias analysis has not been done because the data necessary to do the analysis are not available. The weights were developed to adjust for nonresponse and the oversampling of Black and Hispanic students in the sample. The final weights were constructed so that only weighted proportions of students (not weighted counts of students) in each grade matched national population projections.
State-level data were downloaded from the Youth Online: Comprehensive Results web page (http://apps.nccd.cdc.gov/youthonline/App/Default.aspx). Each state and district school-based YRBS employs a two-stage, cluster sample design to produce representative samples of students in grades 9–12 in their jurisdiction. All except a few state samples, and all district samples, include only public schools, and each district sample includes only schools in the funded school district (e.g., San Diego Unified School District) rather than in the entire city (e.g., greater San Diego area).
In the first sampling stage in all except a few states and districts, schools are selected with probability proportional to school enrollment size. In the second sampling stage, intact classes of a required subject or intact classes during a required period (e.g., second period) are selected randomly. All students in sampled classes are eligible to participate. Certain states and districts modify these procedures to meet their individual needs. For example, in a given state or district, all schools, rather than a sample of schools, might be selected to participate. State and local surveys that have a scientifically selected sample, appropriate documentation, and an overall response rate greater than or equal to 60 percent are weighted. The overall response rate reflects the school response rate multiplied by the student response rate. These three criteria are used to ensure that the data from those surveys can be considered representative of students in grades 912 in that jurisdiction. A weight is applied to each record to adjust for student nonresponse and the distribution of students by grade, sex, and race/ethnicity in each jurisdiction. Therefore, weighted estimates are representative of all students in grades 912 attending schools in each jurisdiction. Surveys that do not have an overall response rate of greater than or equal to 60 percent and that do not have appropriate documentation are not weighted and are not included in this report.
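The three weighting criteria for state and district surveys can be sketched as a simple check (an illustrative sketch; the function name and inputs are ours, not CDC's):

```python
def survey_is_weighted(scientific_sample, has_documentation,
                       school_rate, student_rate):
    """A state or district YRBS is weighted only if it has a
    scientifically selected sample, appropriate documentation, and an
    overall response rate (school rate x student rate) of at least
    60 percent."""
    overall_rate = school_rate * student_rate
    return scientific_sample and has_documentation and overall_rate >= 0.60
```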
In 2011, a total of 43 states and 21 districts had weighted data. Not all of the districts were contained in the 43 states. For example, California was not one of the 43 states that obtained weighted data but it contained several districts that did. For more information on the location of the districts please see http://www.cdc.gov/healthyyouth/yrbs/participation.htm. In sites with weighted data, the student sample sizes for the state and district YRBS ranged from 1,103 to 13,201. School response rates ranged from 73 to 100 percent, student response rates ranged from 60 to 88 percent, and overall response rates ranged from 60 to 86 percent.
Readers should note that reports of these data published by the CDC and in this report do not include percentages where the denominator includes less than 100 unweighted cases.
In 1999, in accordance with changes to the Office of Management and Budget's standards for the classification of federal data on race and ethnicity, the YRBS item on race/ethnicity was modified. The version of the race and ethnicity question used in 1993, 1995, and 1997 was:
How do you describe yourself?
The version used in 1999, 2001, 2003, and in the 2005, 2007, and 2009 state and local district surveys was:
How do you describe yourself? (Select one or more responses.)
In the 2005 national survey and in all 2007, 2009, and 2011 surveys, race/ethnicity was computed from two questions: (1) “Are you Hispanic or Latino?” (response options were “yes” and “no”), and (2) “What is your race?” (response options were “American Indian or Alaska Native,” “Asian,” “Black or African American,” “Native Hawaiian or Other Pacific Islander,” or “White”). For the second question, students could select more than one response option. For this report, students were classified as “Hispanic” if they answered “yes” to the first question, regardless of how they answered the second question. Students who answered “no” to the first question and selected more than one race/ethnicity in the second category were classified as “More than one race.” Students who answered “no” to the first question and selected only one race/ethnicity were classified as that race/ethnicity. Race/ethnicity was classified as missing for students who did not answer the first question and for students who answered “no” to the first question but did not answer the second question.
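The two-question classification rules above translate directly into code. A minimal sketch (the function and variable names are ours, introduced for illustration):

```python
def classify(hispanic_answer, races_selected):
    """Classify race/ethnicity from the two YRBS questions.
    hispanic_answer: "yes", "no", or None (unanswered).
    races_selected: list of race response options selected."""
    if hispanic_answer == "yes":
        return "Hispanic"            # regardless of the race question
    if hispanic_answer == "no":
        if len(races_selected) > 1:
            return "More than one race"
        if len(races_selected) == 1:
            return races_selected[0]
    return None                      # classified as missing
```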
CDC has conducted two studies to understand the effect of changing the race/ethnicity item on the YRBS. Brener, Kann, and McManus (2003) found that allowing students to select more than one response to a single race/ethnicity question on the YRBS had only a minimal effect on reported race/ethnicity among high school students. Eaton, Brener, Kann, and Pittman (2007) found that self-reported race/ethnicity was similar regardless of whether the single-question or a two-question format was used. For additional information about the YRBSS, contact:
Laura Kann
Division of Adolescent and School Health
National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention
Centers for Disease Control and Prevention
Mailstop K-33
4770 Buford Highway NE
Atlanta, GA 30341-3717
Telephone: (770) 488-6181
E-mail: email@example.com
Internet: http://www.cdc.gov/yrbs
This report draws upon data on teacher victimization from the Schools and Staffing Survey (SASS), which provides national- and state-level data on public schools and national- and affiliation-level data on private schools. The 1993–94, 1999–2000, 2003–04, and 2007–08 SASS were collected by the U.S. Census Bureau and sponsored by the National Center for Education Statistics (NCES). The 1993–94, 1999–2000, and 2003–04 administrations of SASS consisted of four sets of linked surveys, including surveys of schools, the principals of each selected school, a subsample of teachers within each school, and public school districts. The 2007–08 administration of SASS consisted of five types of questionnaires: district questionnaires, principal questionnaires, school questionnaires, teacher questionnaires, and school library media center questionnaires. In 1993–94, there were two sets of teacher surveys: public and private school teachers. In 1999–2000, there were four sets of teacher surveys: public, private, public charter, and Bureau of Indian Education (BIE) school teachers. In 2003–04 and 2007–08, there were three sets of teacher surveys: public (including public charter), private, and BIE. For this report, BIE and public charter schools are included with public schools.
The public school sampling frames for the 1993–94, 1999–2000, 2003–04, and 2007–08 SASS were created using the 1991–92, 1997–98, 2001–02, and 2005–06 NCES Common Core of Data (CCD) Public School Universe Files, respectively. In SASS, a school was defined as an institution or part of an institution that provides classroom instruction to students; has one or more teachers to provide instruction; serves students in one or more of grades 1–12 or the ungraded equivalent; and is located in one or more buildings apart from a private home. It was possible for two or more schools to share the same building; in this case, they were treated as different schools if they had different administrations (i.e., different principals or school heads). Since CCD and SASS differ in scope and in their definition of a school, some records were deleted, added, or modified in order to provide better coverage and a more efficient sample design for SASS. Data were collected by multistage sampling, which began with the selection of schools.
This report uses 1993–94, 1999–2000, 2003–04, and 2007–08 SASS data. Approximately 10,000 public schools and 3,300 private schools were selected to participate in the 1993–94 SASS; 11,100 public schools (9,900 public schools, 100 BIE-funded schools, and 1,100 charter schools) and 3,600 private schools were selected to participate in the 1999–2000 SASS; 10,400 public schools (10,200 public schools and 200 BIE-funded schools) and 3,600 private schools were selected to participate in the 2003–04 SASS; and 9,980 public schools (9,800 public schools and 180 BIE-funded schools) and 2,940 private schools were selected to participate in the 2007–08 SASS. Within each school, teachers selected were further stratified into one of five teacher types in the following hierarchy: (1) Asian or Pacific Islander; (2) American Indian, Aleut, or Eskimo; (3) teachers who teach classes designed for students with limited English proficiency; (4) teachers in their first, second, or third year of teaching; and (5) teachers not classified in any of the other groups. Within each teacher stratum, teachers were selected systematically with equal probability. In 1993–94, approximately 57,000 public school teachers and 11,500 private school teachers were sampled. In 1999–2000, about 56,300 public school teachers, 500 BIE teachers, 4,400 public charter school teachers, and 10,800 private school teachers were sampled. In 2003–04, about 52,500 public school teachers, 700 BIE teachers, and 10,000 private school teachers were sampled. In 2007–08, about 47,440 public school teachers, 750 BIE teachers, and 8,180 private school teachers were sampled.
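Systematic selection with equal probability, as used within each teacher stratum, can be illustrated with a short sketch: take every k-th unit from the ordered frame after a random start, where k is the frame size divided by the sample size. The frame contents and sample size below are hypothetical; this is a generic illustration of the technique, not the SASS selection program.

```python
import random

def systematic_sample(frame, n):
    """Systematic sampling from an ordered frame:
    select every k-th unit after a random start, k = N/n.
    Each unit's selection probability is n/N (equal probability)."""
    N = len(frame)
    k = N / n                      # sampling interval (may be fractional)
    start = random.uniform(0, k)   # random start within the first interval
    return [frame[int(start + i * k)] for i in range(n)]

# Hypothetical stratum of 20 teachers; draw a systematic sample of 5:
stratum = [f"teacher_{i}" for i in range(20)]
sample = systematic_sample(stratum, 5)
```

Because the frame is traversed at a fixed interval, every unit has the same chance of selection, while the sample is spread evenly across the (sorted) stratum.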
This report focuses on responses from teachers. The overall weighted response rate for public school teachers in 1993–94 was 88 percent. In 1999–2000, the overall weighted response rates were 77 percent for public school teachers, and 86 and 72 percent for BIE and public charter school teachers, respectively (both of which are included with public school teachers for this report). In 2003–04, the overall weighted response rates were 76 percent for public school teachers and 86 percent for BIE-funded school teachers (who are included with public school teachers). In 2007–08, the overall weighted response rates were 72 percent for public school teachers and 71 percent for BIE-funded school teachers (who are included with public school teachers). For private school teachers, the overall weighted response rates were 80 percent in 1993–94, about 67 percent in 1999–2000, about 70 percent in 2003–04, and 66 percent in 2007–08. Values were imputed for questionnaire items that should have been answered but were not. For additional information about SASS, contact:
The School Survey on Crime and Safety (SSOCS) is managed by the National Center for Education Statistics (NCES) on behalf of the U.S. Department of Education. SSOCS collects extensive crime and safety data from principals and school administrators of U.S. public schools. Data from this collection can be used to examine the relationship between school characteristics and violent and serious violent crimes in primary schools, middle schools, high schools, and combined schools. In addition, data from SSOCS can be used to assess what crime prevention programs, practices, and policies are used by schools. SSOCS has been conducted in school years 1999–2000, 2003–04, 2005–06, 2007–08, and 2009–10.
SSOCS was developed by NCES and is funded by the Office of Safe and Drug-Free Schools of the U.S. Department of Education. The 2009–10 SSOCS (SSOCS:2010) was conducted by the U.S. Census Bureau. Data collection began on February 24, 2010, when questionnaire packets were mailed to sampled schools, and continued through June 11, 2010. A total of 2,648 public schools submitted usable questionnaires: 684 primary schools, 909 middle schools, 948 high schools, and 107 combined schools.
The sampling frame for SSOCS:2010 was constructed from the 2007–08 Public Elementary/Secondary School Universe data file of the Common Core of Data (CCD), an annual collection of data on all public K–12 schools and school districts. The SSOCS sampling frame was restricted to regular public schools in the United States and the District of Columbia (including charter schools).
A total of 3,476 schools were selected for the 2010 study. In February 2010, questionnaires were mailed to school principals, who were asked to complete the survey or to have it completed by the person most knowledgeable about discipline issues at the school. A total of 2,648 schools completed the survey. The weighted overall response rate was 80.8 percent.62 A nonresponse bias analysis was conducted on the 3 items with weighted item response rates below 85 percent. The detected bias was not deemed problematic enough to suppress any items from the data file. Weights were developed to adjust for the variable probabilities of selection and differential nonresponse and can be used to produce national estimates for regular public schools in the 2009–10 school year. For information on the 1999–2000, 2003–04, 2005–06, 2007–08, and 2009–10 iterations, see Neiman (2011). For more information about the School Survey on Crime and Safety, contact:
62 The weighted response rate is calculated by applying the base sampling rates to the following ratio: completed cases/(total sample − known ineligibles).
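The footnote's formula can be made concrete with a small sketch: each sampled case carries a base weight (the inverse of its sampling rate), and the rate is the weighted sum of completed cases over the weighted total sample less weighted known ineligibles. The case counts, weights, and status labels below are made up for illustration; they are not SSOCS figures.

```python
def weighted_response_rate(cases):
    """cases: list of (base_weight, status) pairs, where status is
    'complete', 'ineligible', or 'nonresponse'.
    Returns: weighted completes / (weighted total - weighted ineligibles)."""
    completed  = sum(w for w, s in cases if s == "complete")
    ineligible = sum(w for w, s in cases if s == "ineligible")
    total      = sum(w for w, s in cases)
    return completed / (total - ineligible)

# Hypothetical sample: three completes, one known ineligible, one nonrespondent
sample = [(2.0, "complete"), (2.0, "complete"), (1.0, "complete"),
          (1.0, "ineligible"), (2.0, "nonresponse")]
rate = weighted_response_rate(sample)   # 5.0 / (8.0 - 1.0) ≈ 0.714
```

Removing known ineligibles from the denominator keeps the rate from being deflated by cases that could never have responded, which is why it can differ from the simple ratio of completes to schools selected.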