
Appendix A: Technical Notes

Sources of Data

This section briefly describes each of the datasets used in this report: the School-Associated Violent Death Surveillance System, the National Vital Statistics System, the National Crime Victimization Survey, the School Crime Supplement to the National Crime Victimization Survey, the Youth Risk Behavior Surveillance System, the Schools and Staffing Survey, the National Teacher and Principal Survey, the School Survey on Crime and Safety, the Fast Response Survey System survey of school safety and discipline, the Campus Safety and Security Survey, EDFacts, the Monitoring the Future Survey, and the Studies of Active Shooter Incidents. Directions for obtaining more information are provided at the end of each description.


School-Associated Violent Death Surveillance System (SAVD-SS)

The School-Associated Violent Death Surveillance System (SAVD-SS) was developed by the Centers for Disease Control and Prevention (CDC) in conjunction with the U.S. Department of Education and the U.S. Department of Justice. The system contains descriptive data on all school-associated violent deaths in the United States, including homicides, suicides, and legal intervention deaths where the fatal injury occurred on the campus of a functioning elementary or secondary school; while the victim was on the way to or from regular sessions at such a school; or while attending or on the way to or from an official school-sponsored event. Victims of such incidents include students, as well as nonstudents (e.g., students’ parents, community residents, and school staff). The SAVD-SS includes data on the school, event,
victim(s), and offender(s). These data are used to describe the epidemiology of school-associated violent deaths, identify common features of these deaths, estimate the rate of school-associated violent deaths in the United States, and identify potential risk factors for these deaths. The CDC has collected SAVD-SS data from July 1, 1992, through the present.

The SAVD-SS uses a four-step process to identify and collect data on school-associated violent deaths. First, cases are identified through a systematic search of the LexisNexis newspaper and media database. Second, law enforcement officials from the office that investigated the death(s) are contacted to confirm the details of the case and to determine whether the event meets the case definition. Third, once a case is confirmed, a copy of the full law enforcement report is requested. Finally, in previous data years when possible, interviews were conducted with law enforcement and/or school officials familiar with the cases to obtain contextual information about the incidents; however, interviews are no longer conducted as part of the SAVD-SS protocol. Information regarding the fatal incident is abstracted from law enforcement reports and includes the location of injury, context of injury (while classes were being held, during break, etc.), motives for injury, method of injury, and relationship, school, and community circumstances that may have been related to the incident (e.g., relationship problems with family members, school disciplinary issues, gang-related activity in the community). Information obtained on victim(s) and offender(s) includes demographics, contextual information about the event (date/time, alcohol or drug use, number of persons involved), types and origins of weapons, criminal history, psychological risk factors, school-related problems, extracurricular activities, and family history, including structure and stressors. For specific SAVD studies, school-level data for schools where incidents occur are obtained through the National Center for Education Statistics Common Core of Data and include school demographics, locale (e.g., urban, suburban, rural), grade levels comprising the school, Title I eligibility, and percentage of students eligible for free/reduced-price lunch, among other variables.

All data years are flagged as “preliminary.” For some recent cases, the law enforcement reports have not yet been received, and the details learned during data abstraction from these reports can occasionally change the classification of a case. In addition, new cases may be identified because of the expansion of the scope of media files used for case identification, and cases not identified during earlier data years may be discovered at a later date as a result of newly published media articles describing the incident. Occasionally, new cases are also identified during the law enforcement contacts made to verify known cases.

For additional information about SAVD, contact:

Kristin Holland, Ph.D., M.P.H.
Principal Investigator & Behavioral Scientist
School-Associated Violent Death Surveillance System
Division of Violence Prevention
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
(770) 488-3954
KHolland@cdc.gov


National Vital Statistics System (NVSS)

The National Vital Statistics System (NVSS) is the system through which data on vital events—births, deaths, marriages, divorces, and fetal deaths—are provided to the National Center for Health Statistics (NCHS), part of the Centers for Disease Control and Prevention (CDC). The data are provided to NCHS through the Vital Statistics Cooperative Program (VSCP). Detailed mortality data from NVSS are accessed through CDC’s Wide-ranging Online Data for Epidemiologic Research (WONDER), providing the counts of homicides among youth ages 5–18 and suicides among youth ages 10–18 by school year (i.e., from July 1 through June 30). These counts are used to estimate the proportion of all youth homicides and suicides that are school-associated in a given school year. For more information on the NCHS and the NVSS, see https://www.cdc.gov/nchs/nvss.htm.
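
Because WONDER tabulates these counts by school year rather than calendar year, each death must be assigned to the July 1–June 30 span that contains it. The sketch below illustrates that mapping in Python; the function name is illustrative and not part of NVSS or WONDER.

    from datetime import date

    def school_year(d: date) -> str:
        # School years run July 1 through June 30, so a date in
        # March 2017 falls in the 2016-17 school year.
        start = d.year if d.month >= 7 else d.year - 1
        return f"{start}-{str(start + 1)[-2:]}"

    print(school_year(date(2017, 3, 15)))  # -> 2016-17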


National Crime Victimization Survey (NCVS)

The National Crime Victimization Survey (NCVS), administered for the U.S. Bureau of Justice Statistics (BJS) by the U.S. Census Bureau, is the nation’s primary source of information on crime and the victims of crime. Initiated in 1972 and redesigned in 1992, the NCVS collects detailed information on the frequency and nature of the crimes of rape, sexual assault, robbery, aggravated and simple assault, theft, household burglary, and motor vehicle theft experienced by Americans and American households each year. The survey measures both crimes reported to police and crimes not reported to the police.

NCVS estimates reported in Indicators of School Crime and Safety: 2013 and beyond may differ from those in previously published reports because a small number of victimizations, referred to as series victimizations, are included using a new counting strategy. High-frequency repeat victimizations, or series victimizations, are situations in which six or more similar but separate victimizations occur with such frequency that the victim is unable to recall each individual event or describe each event in detail. As part of ongoing research efforts on the NCVS, BJS investigated ways to include series victimizations in estimates of criminal victimization, resulting in more accurate estimates. BJS now includes series victimizations using the victim’s estimate of the number of times the victimization occurred over the past 6 months, capping the number of victimizations within each series at 10. This strategy balances the desire to estimate national rates and account for the experiences of persons who have been subjected to repeat victimizations against the desire to minimize the estimation errors that can occur when repeat victimizations are reported. Including series victimizations in national rates results in rather large increases in the level of violent victimization; however, trends in violence are generally similar regardless of whether series victimizations are included. For more information on the new counting strategy and supporting research, see Methods for Counting High-Frequency Repeat Victimizations in the National Crime Victimization Survey (Lauritsen et al. 2012) at https://www.bjs.gov/content/pub/pdf/mchfrv.pdf.
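
The capping rule described above is simple enough to state in code. A minimal sketch, assuming a per-victimization series flag (the function and argument names are hypothetical):

    def counted_victimizations(victim_estimate: int, is_series: bool) -> int:
        # Series victimizations use the victim's estimate of how many
        # times the victimization occurred over the past 6 months,
        # capped at 10; other victimizations are counted as reported.
        return min(victim_estimate, 10) if is_series else victim_estimate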

Readers should note that in 2003, in accordance with changes to the U.S. Office of Management and Budget’s standards for classifying federal data on race and ethnicity, the NCVS item on race/ethnicity was modified. A question on Hispanic origin is now followed by a new question about race. The new question about race allows the respondent to choose more than one race and delineates Asian as a separate category from Native Hawaiian or Other Pacific Islander. An analysis conducted by the Demographic Surveys Division at the U.S. Census Bureau showed that the new race question had very little impact on the aggregate racial distribution of NCVS respondents, with one exception: There was a 1.6 percentage point decrease in the percentage of respondents who reported themselves as White. Due to changes in race/ethnicity categories, comparisons of race/ethnicity across years should be made with caution.

Every 10 years, the NCVS sample is redesigned to reflect changes in the population. In the 2006 NCVS, changes in the sample design and survey methodology affected the survey’s estimates. Caution should be used when comparing 2006 estimates to estimates of other years. For more information on the 2006 NCVS data, see Criminal Victimization, 2006 (Rand and Catalano 2007) at https://bjs.gov/content/pub/pdf/cv06.pdf, the technical notes at http://www.bjs.gov/content/pub/pdf/cv06tn.pdf, and Criminal Victimization, 2007 (Rand 2008) at https://www.bjs.gov/content/pub/pdf/cv07.pdf. Due to a sample increase and redesign in 2016, victimization estimates among youth were not comparable to estimates for other years and are not available in this report. For more information on the redesign, see https://www.bjs.gov/content/pub/pdf/cv16re.pdf.

The number of NCVS-eligible households in the 2017 sample was 192,111. Households were selected using a stratified, multistage cluster design. In the first stage, primary sampling units (PSUs), consisting of counties or groups of counties, were selected. In the second stage, smaller areas, called Enumeration Districts (EDs), were selected from each sampled PSU. Finally, from selected EDs, clusters of four households, called segments, were selected for interviews. At each stage, the selection was done proportionate to population size in order to create a self-weighting sample. The final sample was augmented to account for households constructed after the decennial Census. Within each sampled household, the U.S. Census Bureau interviewer attempts to interview all household members age 12 and older to determine whether they were victimized by the measured crimes during the 6 months preceding the interview.
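
Selection proportionate to population size can be illustrated with a systematic probability-proportional-to-size (PPS) draw, sketched below. This is a generic textbook procedure, not the Census Bureau's production selection code.

    import random

    def systematic_pps(units, sizes, k, seed=12345):
        # Select k units with probability proportional to size by
        # stepping through the cumulative size distribution at a
        # fixed interval from a random start.
        rng = random.Random(seed)
        step = sum(sizes) / k
        point = rng.uniform(0, step)
        chosen, cum = [], 0.0
        for unit, size in zip(units, sizes):
            cum += size
            while len(chosen) < k and point <= cum:
                chosen.append(unit)
                point += step
        return chosen

    # Example: the county with the largest population is the most
    # likely to be hit by a selection point.
    print(systematic_pps(["A", "B", "C", "D"], [1000, 200, 500, 300], k=2))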

The first NCVS interview with a housing unit is conducted in person. Subsequent interviews are conducted by telephone, if possible. All persons age 12 and older are interviewed every 6 months. Households remain in the sample for 3 years and are interviewed seven times at 6-month intervals. Since the survey’s inception, the initial interview at each sample unit had been used only to bound future interviews, establishing a time frame to avoid duplication of crimes uncovered in subsequent interviews. Beginning in 2006, however, data from the initial interview have been adjusted to account for the effects of bounding and have been included in the survey estimates. After a household has been interviewed for the seventh time, it is replaced by a new sample household. In 2017, the household response rate was about 76 percent, and the completion rate for persons within households was about 84 percent. Weights were developed to permit estimates for the total U.S. population 12 years and older. For more information about the NCVS, contact:

Barbara A. Oudekerk
Victimization Statistics Branch
Bureau of Justice Statistics
Barbara.A.Oudekerk@usdoj.gov
https://www.bjs.gov/


School Crime Supplement (SCS)

Created as a supplement to the NCVS and co-designed by the National Center for Education Statistics and the Bureau of Justice Statistics, the School Crime Supplement (SCS) survey was conducted in 1989 and 1995 and has been conducted biennially since 1999 to collect additional information about school-related victimizations on a national level. This report includes data from the 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017 collections. The 1989 data are not included in this report as a result of methodological changes to the NCVS and SCS. The SCS was designed to assist policymakers, as well as academic researchers and practitioners at the federal, state, and local levels, in making informed decisions concerning crime in schools. The survey asks students a number of key questions about their experiences with and perceptions of crime and violence that occurred inside their school, on school grounds, on the school bus, or on the way to or from school. Students are asked additional questions about security measures used by their school, students’ participation in afterschool activities, students’ perceptions of school rules, the presence of weapons and gangs in school, the presence of hate-related words and graffiti in school, student reports of bullying and reports of rejection at school, and the availability of drugs and alcohol in school. Students are also asked attitudinal questions relating to fear of victimization and avoidance behavior at school.

The SCS survey was conducted for a 6-month period from January through June in all households selected for the NCVS (see discussion above for information about the NCVS sampling design and changes to the race/ethnicity variable beginning in 2003). Within these households, the eligible respondents for the SCS were those household members who had attended school at any time during the 6 months preceding the interview, were enrolled in grades 6–12, and were not homeschooled. In 2007, the questionnaire was changed and household members who attended school sometime during the school year of the interview were included. The age range of students covered in this report is 12–18 years of age. Eligible respondents were asked the supplemental questions in the SCS only after completing their entire NCVS interview. It should be noted that the first or unbounded NCVS interview has always been included in analysis of the SCS data and may result in the reporting of events outside of the requested reference period.
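
The eligibility rules above amount to a simple predicate. A sketch using the pre-2007 six-month attendance rule (the field names are illustrative):

    def scs_eligible(attended_last_6_months: bool, grade: int,
                     homeschooled: bool, age: int) -> bool:
        # Eligible respondents attended school during the 6 months
        # preceding the interview, were enrolled in grades 6-12, and
        # were not homeschooled; this report covers ages 12-18.
        return (attended_last_6_months and 6 <= grade <= 12
                and not homeschooled and 12 <= age <= 18)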

The prevalence of victimization for 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017 was calculated by using NCVS incident variables appended to the SCS data files of the same year. The NCVS type of crime variable was used to classify victimizations of students in the SCS as serious violent, violent, or theft. The NCVS variables asking where the incident happened (at school) and what the victim was doing when it happened (attending school or on the way to or from school) were used to ascertain whether the incident happened at school. Only incidents that occurred inside the United States are included.

In 2001, the SCS survey instrument was modified from previous collections. First, in 1995 and 1999, “at school” was defined for respondents as in the school building, on the school grounds, or on a school bus. In 2001, the definition for “at school” was changed to mean in the school building, on school property, on a school bus, or going to and from school. This change was made to the 2001 questionnaire in order to be consistent with the definition of “at school” as it is constructed in the NCVS and was also used as the definition in subsequent SCS collections. Cognitive interviews conducted by the U.S. Census Bureau on the 1999 SCS suggested that modifications to the definition of “at school” would not have a substantial impact on the estimates.

A total of about 9,700 students participated in the 1995 SCS, 8,400 in 1999, 8,400 in 2001, 7,200 in 2003, 6,300 in 2005, 5,600 in 2007, 5,000 in 2009, 6,500 in 2011, 5,500 in 2015, and 7,100 in 2017.

In the 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017 SCS, the household completion rates were 95 percent, 94 percent, 93 percent, 92 percent, 91 percent, 90 percent, 92 percent, 91 percent, 86 percent, 82 percent, and 76 percent, respectively, and the student completion rates were 78 percent, 78 percent, 77 percent, 70 percent, 62 percent, 58 percent, 56 percent, 63 percent, 60 percent, 58 percent, and 52 percent, respectively. The overall unweighted SCS unit response rate (calculated by multiplying the household completion rate by the student completion rate) was about 74 percent in 1995, 73 percent in 1999, 72 percent in 2001, 64 percent in 2003, 56 percent in 2005, 53 percent in 2007, 51 percent in 2009, 57 percent in 2011, 51 percent in 2013, 48 percent in 2015, and 40 percent in 2017.
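
Each overall rate is simply the product of the two completion rates for that year. A worked check for 2017, using the figures above:

    household_rate = 0.76  # 2017 household completion rate
    student_rate = 0.52    # 2017 student completion rate
    # Overall unweighted unit response rate, as defined above.
    print(f"{household_rate * student_rate:.0%}")  # -> 40%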

There are two types of nonresponse: unit and item nonresponse. NCES requires that any stage of data collection within a survey that has a unit base-weighted response rate of less than 85 percent be evaluated for the potential magnitude of unit nonresponse bias before the data or any analysis using the data may be released (U.S. Department of Education 2003). Due to the low unit response rates in 2005, 2007, 2009, 2011, 2013, 2015, and 2017, unit nonresponse bias analyses were done for those years. Unit response rates indicate how many sampled units completed interviews. Because interviews with students could be completed only after households had responded to the NCVS, the unit completion rate for the SCS reflects both the household interview completion rate and the student interview completion rate. Nonresponse can greatly affect the strength and application of survey data: it increases variance by reducing the effective size of the sample, and it can produce bias if the nonrespondents differ from the respondents on characteristics of interest. The magnitude of unit nonresponse bias is determined by the response rate and the differences between respondents and nonrespondents on key survey variables. Although the bias analysis cannot measure response bias directly, since the SCS is a sample survey and it is not known how the full population would have responded, the SCS sampling frame has several key student and school characteristic variables for which data are known for both respondents and nonrespondents: sex, age, race/ethnicity, household income, region, and urbanicity, all of which are associated with student victimization. To the extent that response rates differ across these groups, nonresponse bias is a concern.
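
The dependence of bias on both the response rate and the respondent-nonrespondent gap can be expressed with a standard textbook approximation; the formula below is a generic illustration, not taken from the SCS documentation.

    def unit_nonresponse_bias(response_rate, mean_respondents,
                              mean_nonrespondents):
        # Deterministic-view bias of a respondent-based estimate: the
        # nonresponse share times the respondent-nonrespondent gap.
        return (1 - response_rate) * (mean_respondents - mean_nonrespondents)

    # At a 51 percent response rate, a 4-point gap in a prevalence
    # estimate implies roughly a 2-point bias: 0.49 * 0.04 = 0.0196.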

In 2005, the analysis of unit nonresponse bias found evidence of bias for the race, household income, and urbanicity variables. White (non-Hispanic) and Other (non-Hispanic) respondents had higher response rates than Black (non-Hispanic) and Hispanic respondents. Respondents from households with an income of $35,000–$49,999 and $50,000 or more had higher response rates than those from households with incomes of less than $7,500, $7,500–$14,999, $15,000–$24,999, and $25,000–$34,999. Respondents who live in urban areas had lower response rates than those who live in rural or suburban areas. Although the extent of nonresponse bias cannot be determined, weighting adjustments, which corrected for differential response rates, should have reduced the problem.

In 2007, the analysis of unit nonresponse bias found evidence of bias by the race/ethnicity and household income variables. Hispanic respondents had lower response rates than other races/ethnicities. Respondents from households with an income of $25,000 or more had higher response rates than those from households with incomes of less than $25,000. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

In 2009, the analysis of unit nonresponse bias found evidence of potential bias for the race/ethnicity and urbanicity variables. White students and students of other races/ethnicities had higher response rates than did Black and Hispanic respondents. Respondents from households located in rural areas had higher response rates than those from households located in urban areas. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

In 2011, the analysis of unit nonresponse bias found evidence of potential bias for the age variable. Respondents 12 to 17 years old had higher response rates than did 18-year-old respondents in the NCVS and SCS interviews. Weighting the data adjusts for unequal selection probabilities and for the effects of nonresponse. The weighting adjustments that correct for differential response rates are created by region, age, race, and sex, and should have reduced the effect of nonresponse.

In 2013, the analysis of unit nonresponse bias found evidence of potential bias for the age, region, and Hispanic origin variables in the NCVS interview response. Within the SCS portion of the data, only the age and region variables showed significant unit nonresponse bias. Further analysis indicated only the age 14 and the west region categories showed positive response biases that were significantly different from some of the other categories within the age and region variables. Based on the analysis, nonresponse bias seems to have little impact on the SCS results.

In 2015, the analysis of unit nonresponse bias found evidence of potential bias for age, race, Hispanic origin, urbanicity, and region in the NCVS interview response. For the SCS interview, the age, race, urbanicity, and region variables showed significant unit nonresponse bias. The age 14 group and rural areas showed positive response biases that were significantly different from other categories within the age and urbanicity variables. The northeast region and Asian race group showed negative response biases that were significantly different from other categories within the region and race variables. These results provide evidence that these subgroups may have a nonresponse bias associated with them. Response rates for most SCS survey items in all survey years were high—typically 95 percent or more, meaning there is little potential for item nonresponse bias for most items in the survey.

In 2017, the analysis of unit nonresponse bias found that the race/ethnicity and census region variables showed significant differences in response rates between different race/ethnicity and census region subgroups. Respondent and nonrespondent distributions were significantly different for the race/ethnicity subgroup only. However, after using weights adjusted for person nonresponse, there was no evidence that these response differences introduced nonresponse bias in the final victimization estimates. Response rates for key SCS items were about 98 percent or higher, meaning there was little potential for item nonresponse bias for most items in the survey.

The weighted data permit inferences about the eligible student population who were enrolled in schools in all SCS data years. For more information about SCS, contact:

Rachel Hansen
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
(202) 245-7082
rachel.hansen@ed.gov
https://nces.ed.gov/programs/crime


Youth Risk Behavior Surveillance System (YRBSS)

The Youth Risk Behavior Surveillance System (YRBSS) is an epidemiological surveillance system developed by the Centers for Disease Control and Prevention (CDC) to monitor the prevalence of youth behaviors that most influence health. The YRBSS focuses on priority health-risk behaviors established during youth that result in the most significant mortality, morbidity, disability, and social problems during both youth and adulthood. The YRBSS includes a national school-based Youth Risk Behavior Survey (YRBS) as well as surveys conducted in states, territories, tribes, and large urban school districts. This report uses 1993, 1995, 1997, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017 YRBSS data.

The national YRBS uses a three-stage cluster sampling design to produce a nationally representative sample of students in grades 9–12 in the United States. In each survey, the target population consisted of all public and private school students in grades 9–12 in the 50 states and the District of Columbia. In the first stage, primary sampling units (PSUs) were selected from strata formed on the basis of urbanization and the relative percentage of Black and Hispanic students in the PSU. These PSUs are either counties; subareas of large counties; or groups of smaller, adjacent counties. At the second stage, schools were selected with probability proportional to school enrollment size.

The final stage of sampling consisted of randomly selecting, in each chosen school and in each of grades 9–12, one or two classrooms from either a required subject, such as English or social studies, or a required period, such as homeroom or second period. All students in selected classes were eligible to participate. In surveys conducted before 2013, three strategies were used to oversample Black and Hispanic students: (1) larger sampling rates were used to select PSUs that are in high-Black and high-Hispanic strata; (2) a modified measure of size was used that increased the probability of selecting schools with a disproportionately high minority enrollment; and (3) two classes per grade, rather than one, were selected in schools with a high percentage of Black or Hispanic enrollment. In 2013, 2015, and 2017, only the selection of two classes per grade was needed to achieve adequate precision with minimum variance. Approximately 16,300 students participated in the 1993 survey, 10,900 participated in the 1995 survey, 16,300 participated in the 1997 survey, 15,300 participated in the 1999 survey, 13,600 participated in the 2001 survey, 15,200 participated in the 2003 survey, 13,900 participated in the 2005 survey, 14,000 participated in the 2007 survey, 16,400 participated in the 2009 survey, 15,400 participated in the 2011 survey, 13,600 participated in the 2013 survey, 15,600 participated in the 2015 survey, and 14,800 participated in the 2017 survey.

The overall response rate was 70 percent for the 1993 survey, 60 percent for the 1995 survey, 69 percent for the 1997 survey, 66 percent for the 1999 survey, 63 percent for the 2001 survey, 67 percent for the 2003 survey, 67 percent for the 2005 survey, 68 percent for the 2007 survey, 71 percent for the 2009 survey, 71 percent for the 2011 survey, 68 percent for the 2013 survey, 60 percent for the 2015 survey, and 60 percent for the 2017 survey. NCES standards call for response rates of 85 percent or better for cross-sectional surveys, and bias analyses are generally required by NCES when that percentage is not achieved. For YRBS data, a full nonresponse bias analysis has not been done because the data necessary to do the analysis are not available. A school nonresponse bias analysis, however, was done for the 2017 survey. This analysis found some evidence of potential bias by school type and school poverty level, but concluded that the bias had little impact on the overall estimates and would be further reduced by weight adjustment. The weights were developed to adjust for nonresponse and the oversampling of Black and Hispanic students in the sample. The final weights were constructed so that only weighted proportions of students (not weighted counts of students) in each grade matched national population projections.
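
Constraining weighted proportions (but not weighted counts) to match national projections is a poststratification-style adjustment. A minimal sketch, assuming one adjustment cell per grade and a hypothetical data layout:

    from collections import defaultdict

    def adjust_to_grade_shares(records, target_shares):
        # records: list of (grade, weight) pairs; target_shares maps
        # each grade to its projected population share (summing to 1).
        totals = defaultdict(float)
        for grade, weight in records:
            totals[grade] += weight
        grand_total = sum(totals.values())
        # Rescale within each grade so weighted shares hit the targets;
        # the grand total is preserved, so counts stay uncontrolled.
        factor = {g: target_shares[g] * grand_total / totals[g] for g in totals}
        return [(grade, weight * factor[grade]) for grade, weight in records]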

State-level data were downloaded from the Youth Online: Comprehensive Results web page (https://nccd.cdc.gov/YouthOnline/). Each state and district school-based YRBS employs a two-stage, cluster sample design to produce representative samples of students in grades 9–12 in their jurisdiction. All except one state sample (South Dakota), and all district samples, include only public schools, and each district sample includes only schools in the funded school district (e.g., San Diego Unified School District) rather than in the entire city (e.g., greater San Diego area).

In the first sampling stage in all except a few states and districts, schools are selected with probability proportional to school enrollment size. In the second sampling stage, intact classes of a required subject or intact classes during a required period (e.g., second period) are selected randomly. All students in sampled classes are eligible to participate. Certain states and districts modify these procedures to meet their individual needs. For example, in a given state or district, all schools, rather than a sample of schools, might be selected to participate. State and local surveys that have a scientifically selected sample, appropriate documentation, and an overall response rate greater than or equal to 60 percent are weighted. The overall response rate reflects the school response rate multiplied by the student response rate. These three criteria are used to ensure that the data from those surveys can be considered representative of students in grades 9–12 in that jurisdiction. A weight is applied to each record to adjust for student nonresponse and the distribution of students by grade, sex, and race/ethnicity in each jurisdiction. Therefore, weighted estimates are representative of all students in grades 9–12 attending schools in each jurisdiction. Surveys that do not have an overall response rate of greater than or equal to 60 percent and that do not have appropriate documentation are not weighted and are not included in this report.
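
The three weighting criteria reduce to a simple eligibility test, sketched below with illustrative names:

    def weighted_data_eligible(scientific_sample: bool, documented: bool,
                               school_rate: float, student_rate: float) -> bool:
        # The overall response rate is the school response rate
        # multiplied by the student response rate; a survey is weighted
        # only if that rate is at least 60 percent and the other two
        # criteria are met.
        return (scientific_sample and documented
                and school_rate * student_rate >= 0.60)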

In 2017, a total of 39 states and 21 districts had weighted data. Not all of the districts were contained in the 39 states. For example, Texas was not one of the 39 states that obtained weighted data, but it contained two districts that did. For more information on the location of the districts, see https://www.cdc.gov/healthyyouth/data/yrbs/participation.htm. In sites with weighted data, the student sample sizes for the state and district YRBS ranged from 805 to 51,807. School response rates ranged from 68 to 100 percent, student response rates ranged from 67 to 90 percent, and overall response rates ranged from 60 to 89 percent.

Readers should note that reports of these data published by the CDC, as well as this report, do not include percentages for which the denominator includes fewer than 100 unweighted cases.

In 1999, in accordance with changes to the Office of Management and Budget’s standards for the classification of federal data on race and ethnicity, the YRBS item on race/ethnicity was modified. The version of the race and ethnicity question used in 1993, 1995, and 1997 was:

How do you describe yourself?

  1. White—not Hispanic
  2. Black—not Hispanic
  3. Hispanic or Latino
  4. Asian or Pacific Islander
  5. American Indian or Alaskan Native
  6. Other

The version used in 1999, 2001, 2003, and in the 2005 state and local district surveys was:

How do you describe yourself? (Select one or more responses.)

  1. American Indian or Alaska Native
  2. Asian
  3. Black or African American
  4. Hispanic or Latino
  5. Native Hawaiian or Other Pacific Islander
  6. White

In the 2005 national survey and in all 2007, 2009, 2011, 2013, 2015, and 2017 surveys, race/ethnicity was computed from two questions: (1) “Are you Hispanic or Latino?” (response options were “yes” and “no”), and (2) “What is your race?” (response options were “American Indian or Alaska Native,” “Asian,” “Black or African American,” “Native Hawaiian or Other Pacific Islander,” or “White”). For the second question, students could select more than one response option. For this report, students were classified as “Hispanic” if they answered “yes” to the first question, regardless of how they answered the second question. Students who answered “no” to the first question and selected more than one race/ethnicity in the second category were classified as “More than one race.” Students who answered “no” to the first question and selected only one race/ethnicity were classified as that race/ethnicity. Race/ethnicity was classified as missing for students who did not answer the first question and for students who answered “no” to the first question but did not answer the second question.
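
The classification rules in this paragraph translate directly into a function. A sketch following those rules (the data representation is illustrative):

    def classify_race_ethnicity(hispanic, races):
        # hispanic: True, False, or None (unanswered) from question 1.
        # races: set of race selections from question 2 (may be empty).
        if hispanic is None:
            return None               # missing: question 1 unanswered
        if hispanic:
            return "Hispanic"         # regardless of the race question
        if not races:
            return None               # missing: "no" but no race given
        if len(races) > 1:
            return "More than one race"
        return next(iter(races))      # the single race selected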

CDC has conducted two studies to understand the effect of changing the race/ethnicity item on the YRBS. Brener, Kann, and McManus (2003) found that allowing students to select more than one response to a single race/ethnicity question on the YRBS had only a minimal effect on reported race/ethnicity among high school students. Eaton et al. (2007) found that self-reported race/ethnicity was similar regardless of whether the single-question or a two-question format was used.

For additional information about the YRBSS, contact:

Nancy Brener
Division of Adolescent and School Health
National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention
Centers for Disease Control and Prevention
Mailstop E-75
1600 Clifton Road NE
Atlanta, GA 30329
(404) 718-8133
nad1@cdc.gov
https://www.cdc.gov/yrbs


Schools and Staffing Survey (SASS)

The Schools and Staffing Survey (SASS) is a set of related questionnaires that collect descriptive data on the context of public and private elementary and secondary education. Data reported by districts, schools, principals, teachers, and library media centers provide a variety of statistics on the condition of education in the United States that may be used by policymakers and the general public. The SASS system covers a wide range of topics, including teacher demand, teacher and principal characteristics, teachers’ and principals’ perceptions of school climate and problems in their schools, teacher and principal compensation, district hiring and retention practices, general conditions in schools, and basic characteristics of the student population.

SASS data are collected through a mail questionnaire with telephone and in-person field follow-up. SASS has been conducted by the U.S. Census Bureau for NCES since the first administration of the survey, which was conducted during the 1987–88 school year. Subsequent SASS administrations were conducted in 1990–91, 1993–94, 1999–2000, 2003–04, 2007–08, and 2011–12.

SASS is designed to produce national, regional, and state estimates for public elementary and secondary schools, school districts, principals, teachers, and school library media centers; and national and regional estimates for public charter schools, as well as principals, teachers, and school library media centers within these schools. For private schools, the sample supports national, regional, and affiliation estimates for schools, principals, and teachers.

From its inception, SASS has had five core components: school questionnaires, teacher listing forms, teacher questionnaires, principal questionnaires, and school district (prior to 1999–2000, “teacher demand and shortage”) questionnaires. A sixth component, school library media center questionnaires, was introduced in the 1993–94 administration and has been included in every subsequent administration of SASS. School library data were also collected in the 1990–91 administration of the survey through the school and principal questionnaires.

School questionnaires used in SASS include the Public and Private School Questionnaires, teacher questionnaires include the Public and Private School Teacher Questionnaires, principal questionnaires include the Public and Private School Principal (or School Administrator) Questionnaires, school district questionnaires include the School District (or Teacher Demand and Shortage) Questionnaire, and library media center questionnaires include the School Library Media Center Questionnaire.

Although the five core questionnaires and the school library media questionnaires have remained relatively stable over the various administrations of SASS, the survey has changed to accommodate emerging issues in elementary and secondary education. Some items have been added, some have been deleted, and some questionnaire items have been reworded.

During the 1990–91 SASS cycle, NCES worked with the Office of Indian Education to add an Indian School Questionnaire to SASS, and it remained a part of SASS through 2007–08. The Indian School Questionnaire explores the same school-level issues that the Public and Private School Questionnaires explore, allowing comparisons among the three types of schools. The 1990–91, 1993–94, 1999–2000, 2003–04, and 2007–08 administrations of SASS obtained data on Bureau of Indian Education (BIE) schools (schools funded or operated by the BIE), but the 2011–12 administration did not collect data from BIE schools. SASS estimates for all survey years presented in this report exclude BIE schools, and as a result, estimates in this report may differ from those in previously published reports.

School library media center questionnaires were administered in public, private, and BIE schools as part of the 1993–94 and 1999–2000 SASS. During the 2003–04 administration of SASS, only library media centers in public schools were surveyed, and in 2007–08 library media centers in public schools and BIE and BIE-funded schools were surveyed. The 2011–12 survey collected data only on school library media centers in traditional public schools and in public charter schools. School library questions focused on facilities, services and policies, staffing, technology, information literacy, collections and expenditures, and media equipment. New or revised topics included access to online licensed databases, resource availability, and additional elements on information literacy. The Student Records and Library Media Specialist/Librarian Questionnaires were administered only in 1993–94.

As part of the 1999–2000 SASS, the Charter School Questionnaire was sent to the universe of charter schools in operation in 1998–99. In 2003–04 and in subsequent administrations of SASS, charter schools were included in the public school sample as opposed to being sent a separate questionnaire. Another change in the 2003–04 administration of SASS was a revised data collection procedure using a primary in-person contact within the school intended to reduce the field follow-up phase.

The SASS teacher surveys collect information on the characteristics of teachers, such as their age, race/ethnicity, years of teaching experience, average number of hours per week spent on teaching activities, base salary, average class size, and highest degree earned. These teacher-reported data may be combined with related information on their school’s characteristics, such as school type (e.g., public traditional, public charter, Catholic, private other religious, and private nonsectarian), community type, and school enrollment size. The teacher questionnaires also ask for information on teacher opinions regarding the school and teaching environment. In 1993–94, about 53,000 public school teachers and 10,400 private school teachers were sampled. In 1999–2000, about 56,300 public school teachers, 4,400 public charter school teachers, and 10,800 private school teachers were sampled. In 2003–04, about 52,500 public school teachers and 10,000 private school teachers were sampled. In 2007–08, about 48,400 public school teachers and 8,200 private school teachers were sampled. In 2011–12, about 51,100 public school teachers and 7,100 private school teachers were sampled. Weighted overall response rates in 2011–12 were 61.8 percent for public school teachers and 50.1 percent for private school teachers.

The SASS principal surveys focus on such topics as age, race/ethnicity, sex, average annual salary, years of experience, highest degree attained, perceived influence on decisions made at the school, and hours spent per week on all school activities. These data on principals can be placed in the context of other SASS data, such as the type of the principal’s school (e.g., public traditional, public charter, Catholic, other religious, or nonsectarian), enrollment, and percentage of students eligible for free or reduced-price lunch. In 2003–04, about 10,200 public school principals were sampled, and in 2007–08, about 9,800 public school principals were sampled. In 2011–12, about 11,000 public school principals and 3,000 private school principals were sampled. Weighted response rates in 2011–12 for public school principals and private school principals were 72.7 percent and 64.7 percent, respectively.

The SASS 2011–12 sample of schools was confined to the 50 states and the District of Columbia and excludes the other jurisdictions, the Department of Defense overseas schools, the BIE schools, and schools that do not offer teacher-provided classroom instruction in grades 1–12 or the ungraded equivalent. The SASS 2011–12 sample included 10,250 traditional public schools, 750 public charter schools, and 3,000 private schools.

The public school sample for the 2011–12 SASS was based on an adjusted public school universe file from the 2009–10 Common Core of Data (CCD), a database of all the nation’s public school districts and public schools. The private school sample for the 2011–12 SASS was selected from the 2009–10 Private School Universe Survey (PSS), as updated for the 2011–12 PSS. This update collected membership lists from private school associations and religious denominations, as well as private school lists from state education departments. The 2011–12 SASS private school frame was further augmented by the inclusion of additional schools that were identified through the 2009–10 PSS area frame data collection.

Additional resources available regarding SASS include the methodology report Quality Profile for SASS, Rounds 1–3: 1987–1995, Aspects of the Quality of Data in the Schools and Staffing Surveys (SASS) (Kalton et al. 2000) (NCES 2000-308), as well as these reports: Documentation for the 2011–12 Schools and Staffing Survey (Cox et al. 2017) and User’s Manual for the 2011–12 Schools and Staffing Survey, Volumes 1–6 (Goldring et al. 2013) (NCES 2013-330 through 2013-335). For additional information about the SASS program, contact:

Isaiah O’Rear
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
isaiah.orear@ed.gov
https://nces.ed.gov/surveys/sass


National Teacher and Principal Survey (NTPS)

The National Teacher and Principal Survey (NTPS) is a set of related questionnaires that collect descriptive data on the context of elementary and secondary education. Data reported by schools, principals, and teachers provide a variety of statistics on the condition of education in the United States that may be used by policymakers and the general public. The NTPS system covers a wide range of topics, including teacher demand, teacher and principal characteristics, teachers’ and principals’ perceptions of school climate and problems in their schools, teacher and principal compensation, district hiring and retention practices, general conditions in schools, and basic characteristics of the student population.

The NTPS was first conducted during the 2015–16 school year. The survey is a redesign of the Schools and Staffing Survey (SASS), which was conducted from the 1987–88 school year to the 2011–12 school year. Although the NTPS maintains the SASS survey’s focus on schools, teachers, and administrators, the NTPS has a different structure and sample than SASS. In addition, whereas SASS operated on a 4-year survey cycle, the NTPS operates on a 2-year survey cycle.

The school sample for the 2015–16 NTPS was based on an adjusted public school universe file from the 2013–14 Common Core of Data (CCD), a database of all the nation’s public school districts and public schools. The NTPS definition of a school is the same as the SASS definition of a school—an institution or part of an institution that provides classroom instruction to students, has one or more teachers to provide instruction, serves students in one or more of grades 1–12 or the ungraded equivalent, and is located in one or more buildings apart from a private home.

The 2015–16 NTPS universe of schools is confined to the 50 states plus the District of Columbia. It excludes the Department of Defense dependents schools overseas, schools in U.S. territories overseas, and CCD schools that do not offer teacher-provided classroom instruction in grades 1–12 or the ungraded equivalent. Bureau of Indian Education schools are included in the NTPS universe, but these schools were not oversampled and the data do not support separate BIE estimates.

The NTPS includes three key components: school questionnaires, principal questionnaires, and teacher questionnaires. NTPS data are collected by the U.S. Census Bureau through a mail questionnaire with telephone and in-person field follow-up. The school and principal questionnaires were sent to sampled schools, and the teacher questionnaire was sent to a sample of teachers working at sampled schools. The NTPS school sample consisted of about 8,300 public schools; the principal sample consisted of about 8,300 public school principals; and the teacher sample consisted of about 40,000 public school teachers.

The school questionnaire asks knowledgeable school staff members about grades offered, student attendance and enrollment, staffing patterns, teaching vacancies, programs and services offered, curriculum, and community service requirements. In addition, basic information is collected about the school year, including the beginning time of students’ school days and the length of the school year. The weighted unit response rate for the 2015–16 school survey was 72.5 percent.

The principal questionnaire collects information about principal/school head demographic characteristics, training, experience, salary, goals for the school, and judgments about school working conditions and climate. Information is also obtained on professional development opportunities for teachers and principals, teacher performance, barriers to dismissal of underperforming teachers, school climate and safety, parent/guardian participation in school events, and attitudes about educational goals and school governance. The weighted unit response rate for the 2015–16 principal survey was 71.8 percent.

The teacher questionnaire collects data from teachers about their current teaching assignment, workload, education history, and perceptions and attitudes about teaching. Questions are also asked about teacher preparation, induction, organization of classes, computers, and professional development. The weighted response rate for the 2015–16 teacher survey was 67.8 percent.

Further information about the NTPS is available in User’s Manual for the 2015–16 National Teacher and Principal Survey, Volumes 1–4 (Goldring et al. 2017) (NCES 2017-131 through NCES 2017-134).

For additional information about the NTPS program, please contact:

Maura Spiegelman
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
maura.spiegelman@ed.gov
https://nces.ed.gov/surveys/ntps


School Survey on Crime and Safety (SSOCS)

The School Survey on Crime and Safety (SSOCS) is the only recurring federal survey that collects detailed information on the incidence, frequency, seriousness, and nature of violence affecting students and school personnel, as well as other indicators of school safety from the schools’ perspective. SSOCS is conducted by the National Center for Education Statistics (NCES) within the U.S. Department of Education and collected by the U.S. Census Bureau. Data from this collection can be used to examine the relationship between school characteristics and violent and serious violent crimes in primary, middle, high, and combined schools. In addition, data from SSOCS can be used to assess what crime prevention programs, practices, and policies are used by schools. SSOCS has been conducted in school years 1999–2000, 2003–04, 2005–06, 2007–08, 2009–10, and 2015–16.

The sampling frame for SSOCS:2016 was constructed from the 2013–14 Public Elementary/Secondary School Universe data file of the Common Core of Data (CCD), an annual collection of data on all public K–12 schools and school districts. The SSOCS sampling frame was restricted to regular public schools (including charter schools) in the United States and the District of Columbia. Other types of schools from the CCD Public Elementary/Secondary School Universe file were excluded from the SSOCS sampling frame. For instance, schools in Puerto Rico, American Samoa, the Commonwealth of the Northern Mariana Islands, Guam, and the U.S. Virgin Islands, as well as Department of Defense dependents schools and Bureau of Indian Education schools, were excluded. Also excluded were special education, alternative, vocational, virtual, newly closed, ungraded, and home schools, and schools with the highest grade of kindergarten or lower.

The SSOCS:2016 universe totaled 83,600 schools. From this total, 3,553 schools were selected for participation in the survey. The sample was stratified by instructional level, type of locale (urbanicity), and enrollment size. The sample of schools in each instructional level was allocated to each of the 16 cells formed by the cross-classification of the four categories of enrollment size and four types of locale. The target number of responding schools allocated to each of the 16 cells was proportional to the sum of the square roots of the total student enrollment over all schools in the cell. The target respondent count within each stratum was then inflated to account for anticipated nonresponse; this inflated count was the sample size for the stratum.
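
The allocation rule (respondent targets proportional to the sum of square roots of enrollment, then inflated for expected nonresponse) can be sketched as follows; the function names are illustrative, not the actual SSOCS sampling code.

    import math

    def allocate_respondent_targets(cell_enrollments, total_target):
        # cell_enrollments: {cell_id: [enrollment of each school]}.
        # Targets are proportional to the sum of the square roots of
        # enrollment over all schools in the cell.
        measure = {cell: sum(math.sqrt(e) for e in sizes)
                   for cell, sizes in cell_enrollments.items()}
        total_measure = sum(measure.values())
        return {cell: total_target * m / total_measure
                for cell, m in measure.items()}

    def stratum_sample_size(target, expected_response_rate):
        # Inflate the respondent target to obtain the sample size.
        return math.ceil(target / expected_response_rate)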

Data collection began in February 2016 and ended in early July 2016. Questionnaire packets were mailed to the principals of the sampled schools, who were asked to complete the survey or have it completed by the person at the school who is most knowledgeable about school crime and policies for providing a safe school environment. A total of 2,092 public schools submitted usable questionnaires, resulting in an overall weighted unit response rate of 62.9 percent.

For more information about the SSOCS, contact:

Rachel Hansen
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
(202) 245-7082
rachel.hansen@ed.gov
https://nces.ed.gov/surveys/ssocs/


Fast Response Survey System (FRSS)

The Fast Response Survey System (FRSS), established in 1975, collects issue-oriented data quickly, with a minimal burden on respondents. The FRSS, whose surveys collect and report data on key education issues at the elementary and secondary levels, was designed to meet the data needs of Department of Education analysts, planners, and decisionmakers when information could not be collected quickly through NCES’s large recurring surveys. Findings from FRSS surveys have been included in congressional reports, testimony to congressional subcommittees, NCES reports, and other Department of Education reports. The findings are also often used by state and local education officials.

Data collected through FRSS surveys are representative at the national level, drawing from a sample that is appropriate for each study. The FRSS collects data from state education agencies and national samples of other educational organizations and participants, including local education agencies, public and private elementary and secondary schools, elementary and secondary school teachers and principals, and public libraries and school libraries. To ensure a minimal burden on respondents, the surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Sample sizes are relatively small (usually about 1,000 to 1,500 respondents per survey) so that data collection can be completed quickly.

The FRSS survey “School Safety and Discipline: 2013–14” (FRSS 106) collected information on specific safety and discipline plans and practices, training for classroom teachers and aides related to school safety and discipline issues, security personnel, frequency of specific discipline problems, and number of incidents of various offenses. The sample for the “School Safety and Discipline: 2013–14” survey was selected from the 2011–12 Common Core of Data (CCD) Public School Universe file. Approximately 1,600 regular public elementary, middle, high, and combined schools in the 50 states and the District of Columbia were selected for the study. (For the purposes of the study, “regular” schools included charter schools.) In February 2014, questionnaires and cover letters were mailed to the principal of each sampled school. The letter requested that the questionnaire be completed by the person most knowledgeable about discipline issues at the school, and respondents were offered the option of completing the survey either on paper or online. Telephone follow-up for survey nonresponse and data clarification was initiated in March 2014 and completed in July 2014. About 1,350 schools completed the survey. The weighted response rate was 85 percent.

One of the goals of the FRSS “School Safety and Discipline: 2013–14” survey is to allow comparisons to the School Survey on Crime and Safety (SSOCS) data. Consistent with the approach used on SSOCS, respondents were asked to report for the current 2013–14 school year to date. Information about violent incidents that occurred in the school between the time that the survey was completed and the end of the school year is not included in the survey data.

For more information about the FRSS, contact:

Chris Chapman
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
Chris.Chapman@ed.gov
https://nces.ed.gov/surveys/frss/


Campus Safety and Security Survey

The Campus Safety and Security Survey is administered by the Office of Postsecondary Education. Since 1990, all postsecondary institutions participating in Title IV student financial aid programs have been required to comply with the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act, known as the Clery Act. Congress originally enacted the law as the Crime Awareness and Campus Security Act, which was amended in 1992, 1998, and again in 2000; the 1998 amendments renamed the law the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act. The Clery Act requires schools to give timely warnings of crimes to the student body and staff; to publicize campus crime and safety policies; and to collect, report, and disseminate campus crime data.

Crime statistics are collected and disseminated by campus security authorities. These authorities include campus police; nonpolice security staff responsible for monitoring campus property; municipal, county, or state law enforcement agencies with institutional agreements for security services; individuals and offices designated by the campus security policies as those to whom crimes should be reported; and officials of the institution with significant responsibility for student and campus activities. The act requires disclosure for offenses committed at geographic locations associated with each institution. For on-campus crimes, this includes property and buildings owned or controlled by the institution. In addition to on-campus crimes, the act requires disclosure of crimes committed in or on a noncampus building or property owned or controlled by the institution for educational purposes or for recognized student organizations, and on public property within or immediately adjacent to and accessible from the campus.

There are three types of statistics described in this report: criminal offenses; arrests for illegal weapons possession and violation of drug and liquor laws; and disciplinary referrals for illegal weapons possession and violation of drug and liquor laws. Criminal offenses include homicide, sex offenses, robbery, aggravated assault, burglary, motor vehicle theft, and arson. Only the most serious offense is counted when more than one offense was committed during an incident. The two other categories, arrests and referrals, include counts for illegal weapons possession and violation of drug and liquor laws. Arrests and referrals include only violations of the law, not mere violations of institutional policies; if no federal, state, or local law was violated, the event is not reported. Further, if an individual is both arrested and referred for disciplinary action for an offense, only the arrest is counted. Arrest is defined to include persons processed by arrest, citation, or summons, including those arrested and released without formal charges being placed. Referral for disciplinary action is defined to include persons referred to any official who initiates a disciplinary action of which a record is kept and which may result in the imposition of a sanction. Referrals may or may not involve the police or other law enforcement agencies.
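
Two of the counting rules above lend themselves to a short sketch: in a multi-offense incident only the most serious offense is counted, and a person who is both arrested and referred for the same offense is counted only as an arrest. The severity ordering below simply reuses the order in which the offenses are listed above; treat it as an assumption rather than the official hierarchy.

    # Offense order as listed above, most serious first (an assumption).
    SEVERITY_ORDER = ["homicide", "sex offense", "robbery",
                      "aggravated assault", "burglary",
                      "motor vehicle theft", "arson"]

    def countable_offense(offenses):
        # Count only the most serious offense in the incident.
        return min(offenses, key=SEVERITY_ORDER.index)

    def countable_action(arrested: bool, referred: bool):
        # An arrest takes precedence over a disciplinary referral.
        if arrested:
            return "arrest"
        return "referral" if referred else None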

All criminal offenses and arrests may include students, faculty, staff, and the general public. These offenses may or may not involve students who are enrolled in the institution. Referrals primarily deal with persons formally associated with the institution (i.e., students, faculty, and staff).

Campus security and police statistics do not necessarily reflect the total amount, or even the nature, of crime on campus. Rather, they reflect incidents that have been reported to and recorded by campus security and/or local police. The process of reporting and recording alleged criminal incidents involves some well-known social filters and steps, beginning with the victim. First, the victim or some other party must recognize that a possible crime has occurred and report the event. The event must then be recorded, and if it is recorded, the nature and type of offense must be classified. This classification may differ from the initial report because of additional evidence, interviews with witnesses, or officer discretion. Also, the date an incident is reported may be much later than the date of the actual incident. For example, a victim may not realize something was stolen until much later, or a victim of violence may wait a number of days to report a crime. Other factors affect the probability that an incident is reported, including the severity of the event, the victim’s confidence in and prior experience with the police or security agency, and influence from third parties (e.g., friends and family knowledgeable about the incident). Finally, the reader should be mindful that these figures represent alleged criminal offenses reported to campus security and/or local police within a given year; they do not necessarily reflect prosecutions or convictions. More information on the reporting of campus crime and safety data may be obtained from The Handbook for Campus Safety and Security Reporting (U.S. Department of Education 2016), https://www2.ed.gov/admins/lead/safety/campus.html#handbook, or from:

Policy Coordination, Development, and Accreditation Service
Office of Postsecondary Education
U.S. Department of Education
https://ope.ed.gov/security/index.aspx

Campus Safety and Security Help Desk
(800) 435-5985
CampusSafetyHelp@westat.com

EDFacts

EDFacts is a centralized data collection through which state education agencies submit K–12 education data to the U.S. Department of Education (ED). All data in EDFacts are organized into “data groups” and reported to ED using defined file specifications. Depending on the data group, state education agencies may submit aggregate counts for the state as a whole or detailed counts for individual schools or school districts. EDFacts does not collect student-level records. The entities that are required to report EDFacts data vary by data group but may include the 50 states, the District of Columbia, the Department of Defense (DoD) dependents schools, the Bureau of Indian Education, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. More information about EDFacts file specifications and data groups can be found at https://www2.ed.gov/about/inits/ed/edfacts/index.html.

EDFacts is a universe collection and is not subject to sampling error, but nonsampling errors such as nonresponse and inaccurate reporting may occur. ED attempts to minimize nonsampling errors by training data submission coordinators and reviewing the quality of state data submissions. However, anomalies may still be present in the data.

Differences in state data collection systems may limit the comparability of EDFacts data across states and across time. To build EDFacts files, state education agencies rely on data reported by their schools and school districts. The systems used to collect these data are evolving rapidly and differ from state to state. For example, there was a large shift in California’s firearm incident data between 2010–11 and 2011–12; California attributed the magnitude of the difference to a new student data system that more accurately collects firearm incident data.

In some cases, EDFacts data may not align with data reported on state education agency websites. States may update their websites on different schedules than those they use to report to ED. Further, ED may use methods to protect the privacy of individuals represented within the data that could be different from the methods used by an individual state.

EDFacts firearm incidents data are collected in data group 596 within file 086. EDFacts collects this data group on behalf of the Office of Safe and Healthy Students in the Office of Elementary and Secondary Education. The definition for this data group is “The unduplicated number of students who were involved in an incident involving a firearm.” The reporting period is the entire school year. For more information about this data group, see file specification 086 for the relevant school year, available at https://www2.ed.gov/about/inits/ed/edfacts/sy-16-17-nonxml.html.
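
As an illustration of what “unduplicated” means here, the short sketch below counts each student once even if that student was involved in multiple firearm incidents during the school year. The record layout and identifiers are hypothetical; EDFacts itself receives only the aggregated counts, never student-level records like these.

    # Hypothetical incident records; EDFacts receives only the final
    # aggregate count, not student-level data like this.
    incidents = [
        {"school": "0001", "student_id": "S1"},
        {"school": "0001", "student_id": "S1"},  # same student, second incident
        {"school": "0002", "student_id": "S2"},
    ]

    # De-duplicating on student ID yields the unduplicated count.
    unduplicated = len({rec["student_id"] for rec in incidents})
    print(unduplicated)  # 2, not 3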

For more information about EDFacts, contact:

EDFacts
Administrative Data Division
Elementary/Secondary Branch
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
EDFacts@ed.gov
https://www2.ed.gov/about/inits/ed/edfacts/index.html

Monitoring the Future Survey

The National Institute on Drug Abuse of the U.S. Department of Health and Human Services is the primary supporter of the long-term study titled “Monitoring the Future: A Continuing Study of American Youth,” conducted by the University of Michigan Institute for Social Research. One component of the study deals with student drug abuse. Results of the national sample survey have been published annually since 1975. With the exception of 1975, when about 9,400 students participated in the survey, the annual 12th-grade samples comprise roughly 16,000 students in 150 public and private schools. Students complete self-administered questionnaires given to them in their classrooms by University of Michigan personnel. Each year, 8th-, 10th-, and 12th-graders are surveyed (12th-graders since 1975, and 8th- and 10th-graders since 1991). The 8th- and 10th-grade surveys are anonymous, while the 12th-grade survey is confidential. The 10th-grade samples involve about 17,000 students in 140 schools each year, while the 8th-grade samples have approximately 18,000 students in about 150 schools. In all, approximately 50,000 students from about 420 public and private secondary schools are surveyed annually. Approximately 90 percent of 8th-grade students, 88 percent of 10th-grade students, and 80 percent of 12th-grade students surveyed participated in the study in 2016. Beginning with the class of 1976, a randomly selected sample from each senior class has been followed in the years after high school on a continuing basis.

Understandably, some respondents are reluctant to admit to illegal activities. Also, students who are out of school on the day of the survey are nonrespondents, and the survey does not include high school dropouts. Including absentees and dropouts would tend to increase the estimated proportion of individuals who had used drugs. A 1983 study found that the inclusion of absentees could increase some of the drug usage estimates by as much as 2.7 percentage points. (Details on that study and its methodology were published in Drug Use Among American High School Students, College Students, and Other Young Adults, by L.D. Johnston, P.M. O’Malley, and J.G. Bachman, available from the National Clearinghouse on Drug Abuse Information, 5600 Fishers Lane, Rockville, MD 20857.)
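
The sketch below is a back-of-the-envelope illustration of why excluding absentees biases prevalence estimates downward. All of the input numbers are made up (chosen so that the resulting increase happens to equal the 2.7-percentage-point upper bound cited above); it does not reproduce the 1983 study’s method.

    # Illustrative only: made-up prevalence and absentee figures.
    absentee_rate = 0.15   # hypothetical share of students absent on survey day
    p_present = 0.30       # hypothetical prevalence among students surveyed
    p_absent = 0.48        # hypothetical (higher) prevalence among absentees

    # Full-population prevalence is a weighted average of the two groups.
    p_all = (1 - absentee_rate) * p_present + absentee_rate * p_absent
    print(f"surveyed only: {p_present:.1%}; with absentees: {p_all:.1%}")
    # surveyed only: 30.0%; with absentees: 32.7% (a 2.7-point increase)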

The 2017 Monitoring the Future survey involved about 43,700 8th-, 10th-, and 12th-grade students in 360 secondary schools nationwide. The first published results were presented in Monitoring the Future, National Results on Drug Use, 1975–2017: Overview, Key Findings on Adolescent Drug Use, at http://www.monitoringthefuture.org.

Further information on the Monitoring the Future drug abuse survey may be obtained from:

National Institute on Drug Abuse
Division of Epidemiology, Services and Prevention Research
6001 Executive Boulevard
Bethesda, MD 20892
mtfinformation@umich.edu
http://www.monitoringthefuture.org

Studies of Active Shooter Incidents

The Investigative Assistance for Violent Crimes Act of 2012, which was signed into law in 2013, authorizes the attorney general, upon the request of an appropriate state or local law enforcement official, to “assist in the investigation of violent acts and shootings occurring in a place of public use and in the investigation of mass killings and attempted mass killings.” The attorney general delegated this responsibility to the Federal Bureau of Investigation (FBI).

In 2014, the FBI initiated studies of active shooter incidents in order to advance the understanding of these incidents and provide law enforcement agencies with data that can inform efforts toward preventing, preparing for, responding to, and recovering from them.

Data on active shooter incidents at educational institutions come from the FBI reports A Study of Active Shooter Incidents in the United States Between 2000 and 2013, Active Shooter Incidents in the United States in 2014 and 2015, and Active Shooter Incidents in the United States in 2016 and 2017, which can be accessed at https://www.fbi.gov/about/partnerships/office-of-partner-engagement/active-shooter-resources.

Further information about FBI resources on active shooter incidents may be obtained from:

Active Shooter Resources
Office of Partner Engagement
Federal Bureau of Investigation
U.S. Department of Justice
935 Pennsylvania Avenue NW
Washington, DC 20535
https://www.fbi.gov/about/partnerships/office-of-partner-engagement/active-shooter-resources

1 For the purposes of this report, self-inflicted deaths among 5- to 9-year-olds are not counted because determining suicidal intent in younger children can be difficult.