Summary of Notifiable Diseases
The Summary of Notifiable Diseases, a publication of the Morbidity and Mortality Weekly Report (MMWR), contains the official statistics, in tabular and graphical form, for the reported occurrence of nationally notifiable infectious diseases in the United States. These statistics are collected and compiled from reports sent by health departments in U.S. states and territories, New York City, and the District of Columbia to the National Notifiable Diseases Surveillance System (NNDSS), which is operated by the Centers for Disease Control and Prevention (CDC) in collaboration with the Council of State and Territorial Epidemiologists.
For more information on the MMWR: Summary of Notifiable Diseases, see https://www.cdc.gov/mmwr/mmwr_nd/.
National Vital Statistics System
The National Vital Statistics System (NVSS) is the method by which data on vital events—births, deaths, marriages, divorces, and fetal deaths—are provided to the National Center for Health Statistics (NCHS), part of the Centers for Disease Control and Prevention (CDC). The data are provided to NCHS through the Vital Statistics Cooperative Program (VSCP). In 1984 and earlier years, the VSCP included varying numbers of states that provided data based on a 100 percent sample of their birth certificates. Data for states not in the VSCP were based on a 50 percent sample of birth certificates filed in those states. Population data used to compile birth rates are based on special estimation procedures and are not actual counts.
Race and Hispanic ethnicity are reported separately in the NVSS. Data are available for non-Hispanic Whites and non-Hispanic Blacks for 1990 and later; however, for 1980 and 1985, data for Whites and Blacks may include persons of Hispanic ethnicity. For all years, Asian/Pacific Islander and American Indian/Alaska Native categories include persons of Hispanic ethnicity.
For more information on the NCHS and the NVSS, see https://www.cdc.gov/nchs/nvss/index.htm.
School-Associated Violent Deaths Study
The School-Associated Violent Death Surveillance System (SAVD-SS) is an epidemiological study developed by the Centers for Disease Control and Prevention in conjunction with the U.S. Department of Education and the U.S. Department of Justice. SAVD-SS seeks to describe the epidemiology of school-associated violent deaths, identify common features of these deaths, estimate the rate of school-associated violent death in the United States, and identify potential risk factors for these deaths. The study includes descriptive data on all school-associated violent deaths in the United States, including all homicides, suicides, and deaths from legal intervention in which the fatal injury occurred on the campus of a functioning elementary or secondary school; while the victim was on the way to or from regular sessions at such a school; or while the victim was attending or traveling to or from an official school-sponsored event. Victims of such incidents include nonstudents as well as students and staff members. SAVD-SS includes descriptive information about the school, event, victim(s), and offender(s). The study has collected data since July 1, 1992.
SAVD-SS uses a four-step process to identify and collect data on school-associated violent deaths. Cases are initially identified through a search of the LexisNexis newspaper and media database. Then law enforcement officials are contacted to confirm the details of the case and to determine if the event meets the case definition. Once a case is confirmed, a law enforcement official and a school official are interviewed regarding details about the school, event, victim(s), and offender(s). A copy of the full law enforcement report is also sought for each case. The information obtained on schools includes school demographics, attendance/absentee rates, suspensions/expulsions and mobility, school history of weapon-carrying incidents, security measures, violence prevention activities, school response to the event, and school policies about weapon carrying. Event information includes the location of injury, the context of injury (while classes were being held, during break, etc.), motives for injury, method of injury, and school and community events happening around the time period. Information obtained on victim(s) and offender(s) includes demographics, circumstances of the event (date/time, alcohol or drug use, number of persons involved), types and origins of weapons, criminal history, psychological risk factors, school-related problems, extracurricular activities, and family history, including structure and stressors.
Some 105 school-associated violent deaths were identified from July 1, 1992, to June 30, 1994 (Kachur et al., 1996, School-Associated Violent Deaths in the United States, 1992 to 1994, Journal of the American Medical Association, 275: 1729–1733). A more recent report from this data collection identified 253 school-associated violent deaths between July 1, 1994, and June 30, 1999 (Anderson et al., 2001, School-Associated Violent Deaths in the United States, 1994–1999, Journal of the American Medical Association, 286: 2695–2702). Other publications from this study have described how the number of events changes during the school year (Centers for Disease Control and Prevention, 2001, Temporal Variations in School-Associated Student Homicide and Suicide Events—United States, 1992–1999, Morbidity and Mortality Weekly Report, 50: 657–660), the source of the firearms used in these events (Reza et al., 2003, Source of Firearms Used by Students in School-Associated Violent Deaths—United States, 1992–1999, Morbidity and Mortality Weekly Report, 52: 169–172), and suicides that were associated with schools (Kauffman et al., 2004, School-Associated Suicides—United States, 1994–1999, Morbidity and Mortality Weekly Report, 53: 476–478). The most recent publication describes trends in school-associated homicide from July 1, 1992, to June 30, 2006 (Centers for Disease Control and Prevention, 2008, School-Associated Student Homicides—United States, 1992–2006, Morbidity and Mortality Weekly Report, 57: 33–36). The interviews conducted on cases between July 1, 1994, and June 30, 1999, achieved a response rate of 97 percent for police officials and 78 percent for school officials.
For several reasons, all data for years from 1999 to the present are flagged as preliminary. For some recent data, the interviews with school and law enforcement officials to verify case details have not been completed, and the details learned during those interviews can occasionally change the classification of a case. New cases may also be identified as the scope of the media files used for case identification expands. Independent case-finding efforts (which focus on nonmedia sources of information) sometimes uncover cases from earlier data years that were not identified at the time, and additional cases may occasionally come to light while the law enforcement and school interviews are being conducted to verify known cases.
Further information on SAVD-SS may be obtained from
Principal Investigator & Behavioral Scientist
School-Associated Violent Death Surveillance System
Division of Violence Prevention
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
1600 Clifton Rd.
Atlanta, GA 30329
Web-Based Injury Statistics Query and Reporting System Fatal
Web-Based Injury Statistics Query and Reporting System (WISQARS) Fatal is an interactive online database that provides mortality data related to injury. The mortality data reported in WISQARS Fatal come from death certificate data reported to the National Center for Health Statistics (NCHS), Centers for Disease Control and Prevention. Data include causes of death reported by attending physicians, medical examiners, and coroners and demographic information about decedents reported by funeral directors, who obtain that information from family members and other informants. NCHS collects, compiles, verifies, and prepares these data for release to the public. The data provide information about unintentional injury, homicide, and suicide as leading causes of death, how common these causes of death are, and whom they affect. These data are intended for a broad audience—the public, the media, public health practitioners and researchers, and public health officials—to increase their knowledge of injury.
WISQARS Fatal mortality reports provide tables of the total numbers of injury-related deaths and the death rates per 100,000 U.S. population. The reports list deaths according to cause (mechanism) and intent (manner) of injury by state, race, Hispanic origin, sex, and age groupings.
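As a rough illustration of the rates described above, the sketch below computes a crude death rate per 100,000 population from a death count and a population estimate. This is the standard crude-rate formula, not WISQARS code, and the figures are made up; WISQARS also offers age-adjusted rates, which involve additional standardization not shown here.

```python
def death_rate_per_100k(deaths, population):
    """Crude death rate per 100,000 population (standard formula;
    the numbers used below are hypothetical)."""
    return deaths / population * 100_000

# Hypothetical example: 1,500 injury deaths in a population of 10,000,000.
rate = death_rate_per_100k(1_500, 10_000_000)
print(round(rate, 1))  # 15.0
```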
Further information on WISQARS Fatal may be obtained from
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
1600 Clifton Rd.
Atlanta, GA 30329
Youth Risk Behavior Surveillance System
The Youth Risk Behavior Surveillance System (YRBSS) is an epidemiological surveillance system developed by the Centers for Disease Control and Prevention (CDC) to monitor the prevalence of youth behaviors that most influence health. The YRBSS focuses on priority health-risk behaviors established during youth that result in the most significant mortality, morbidity, disability, and social problems during both youth and adulthood. The YRBSS includes a national school-based Youth Risk Behavior Survey (YRBS), as well as surveys conducted in states and large urban school districts.
The national YRBS uses a three-stage cluster sampling design to produce a nationally representative sample of students in grades 9–12 in the United States. The target population consists of all public and private school students in grades 9–12 in the 50 states and the District of Columbia. In the first sampling stage, primary sampling units (PSUs) are selected from strata formed on the basis of urbanization and the relative percentage of Black and Hispanic students in the PSU. These PSUs are either counties; subareas of large counties; or groups of smaller, adjacent counties. In the second stage, schools are selected with probability proportional to school enrollment size.
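The probability-proportional-to-size selection used in the second stage can be sketched as a cumulative-size draw. This is a generic single-draw illustration, not CDC's actual sampling procedure, and the school names and enrollments are hypothetical.

```python
import random

# Hypothetical sampling frame: school -> enrollment (the size measure).
schools = {"School A": 2400, "School B": 800, "School C": 1600}

def pps_draw(frame, rng):
    """Draw one unit with probability proportional to its size measure,
    using a cumulative-total method (a generic PPS sketch)."""
    total = sum(frame.values())
    r = rng.uniform(0, total)
    cumulative = 0
    for unit, size in frame.items():
        cumulative += size
        if r <= cumulative:
            return unit
    return unit  # fallback for floating-point edge cases

rng = random.Random(0)
print(pps_draw(schools, rng))  # School A is drawn ~50% of the time
```

Here a school with 2,400 students is three times as likely to be selected as one with 800, mirroring the proportional-to-enrollment selection described above.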
The final stage of sampling consists of randomly selecting, in each chosen school and in each of grades 9–12, one or two classrooms from either a required subject, such as English or social studies, or a required period, such as homeroom or second period. All students in selected classes are eligible to participate. In surveys conducted before 2013, three strategies were used to oversample Black and Hispanic students: (1) larger sampling rates were used to select PSUs that are in high-Black and high-Hispanic strata; (2) a modified measure of size was used that increased the probability of selecting schools with a disproportionately high minority enrollment; and (3) two classes per grade, rather than one, were selected in schools with a high percentage of combined Black, Hispanic, Asian/Pacific Islander, or American Indian/Alaska Native enrollment. In 2013, only the selection of two classes per grade was needed to achieve adequate precision with minimum variance. Approximately 16,300 students participated in the 1993 survey, 10,900 students participated in the 1995 survey, 16,300 students participated in the 1997 survey, 15,300 students participated in 1999, 13,600 students participated in 2001, 15,200 students participated in 2003, 13,900 participated in 2005, 14,000 participated in 2007, 16,400 participated in 2009, 15,400 participated in 2011, 13,600 participated in 2013, 15,600 participated in 2015, and 14,700 participated in 2017.
The overall response rate was 70 percent for the 1993 survey, 60 percent for the 1995 survey, 69 percent for the 1997 survey, 66 percent in 1999, 63 percent in 2001, 67 percent in 2003, 67 percent in 2005, 68 percent in 2007, 71 percent in 2009, 71 percent in 2011, 68 percent in 2013, 60 percent in 2015, and 60 percent in 2017. NCES standards call for response rates of 85 percent or greater for cross-sectional surveys, and bias analyses are required by NCES when that percentage is not achieved. For YRBS data, a full nonresponse bias analysis has not been done because the data necessary to do the analysis are not available. The weights were developed to adjust for nonresponse and the oversampling of Black and Hispanic students in the sample. The final weights were constructed so that only weighted proportions of students (not weighted counts of students) in each grade matched national population projections.
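The last step above, constructing weights so that weighted proportions (not counts) match external population projections, can be illustrated with a one-dimensional rescaling by grade. All numbers below are hypothetical, and the actual YRBS weighting involves more adjustment dimensions than this simplified sketch.

```python
# Hypothetical respondent counts by grade and hypothetical target
# proportions from external population projections.
sample_counts = {"9": 400, "10": 300, "11": 200, "12": 100}
target_props = {"9": 0.26, "10": 0.25, "11": 0.25, "12": 0.24}

n = sum(sample_counts.values())

# Scale each grade's weight so its weighted share equals the target share.
weights = {g: target_props[g] * n / sample_counts[g] for g in sample_counts}

# Check: weighted proportions now match the projections.
weighted_total = sum(weights[g] * sample_counts[g] for g in sample_counts)
for g in sample_counts:
    prop = weights[g] * sample_counts[g] / weighted_total
    print(g, round(prop, 2))
```

Note that the weighted total still equals the sample size, so only the distribution across grades changes, consistent with matching proportions rather than counts.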
State-level data were downloaded from the Youth Online: Comprehensive Results web page (https://nccd.cdc.gov/Youthonline/App/Default.aspx). Each state and district school-based YRBS employs a two-stage, cluster sample design to produce representative samples of students in grades 9–12 in their jurisdiction. All except a few state samples, and all district samples, include only public schools, and each district sample includes only schools in the funded school district (e.g., San Diego Unified School District) rather than in the entire city (e.g., greater San Diego area).
In the first sampling stage in all except a few states and districts, schools are selected with probability proportional to school enrollment size. In the second sampling stage, intact classes of a required subject or intact classes during a required period (e.g., second period) are selected randomly. All students in sampled classes are eligible to participate. Certain states and districts modify these procedures to meet their individual needs. For example, in a given state or district, all schools, rather than a sample of schools, might be selected to participate. State and local surveys that have a scientifically selected sample, appropriate documentation, and an overall response rate greater than or equal to 60 percent are weighted. The overall response rate reflects the school response rate multiplied by the student response rate. These three criteria are used to ensure that the data from those surveys can be considered representative of students in grades 9–12 in that jurisdiction. A weight is applied to each record to adjust for student nonresponse and the distribution of students by grade, sex, and race/ethnicity in each jurisdiction. Therefore, weighted estimates are representative of all students in grades 9–12 attending schools in each jurisdiction. Surveys that do not have an overall response rate of greater than or equal to 60 percent and that do not have appropriate documentation are not weighted and are not included in this report.
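The overall response rate arithmetic and the 60 percent weighting threshold described above reduce to a simple calculation. The rates below are hypothetical; the helper names are illustrative, not part of any CDC tool.

```python
def overall_response_rate(school_rate, student_rate):
    """Overall response rate = school response rate x student response
    rate, as defined in the YRBS documentation."""
    return school_rate * student_rate

def is_weighted(school_rate, student_rate):
    """Surveys with an overall response rate of at least 60 percent
    (and appropriate documentation) are weighted."""
    return overall_response_rate(school_rate, student_rate) >= 0.60

# Hypothetical jurisdictions:
print(round(overall_response_rate(0.80, 0.85), 2))  # 0.68
print(is_weighted(0.80, 0.85))  # True
print(is_weighted(0.70, 0.80))  # False (0.56 overall, below threshold)
```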
In the 2017 YRBS, 39 states and 21 large urban districts had weighted data. (For information on the location of the districts, please see https://www.cdc.gov/healthyyouth/data/yrbs/participation.htm.) In 26 states and 13 large urban school districts, weighted estimates are representative of all students in grades 9–12 attending regular public schools; in 13 states and 8 large urban school districts, weighted estimates are representative of regular public school students plus students in grades 9–12 in other types of public schools (e.g., public alternative, special education, or vocational schools or Bureau of Indian Education schools). The student sample sizes ranged from 1,273 to 51,087 across the states and from 805 to 10,191 across the large urban school districts. Among the states, the school response rates ranged from 68 percent to 100 percent, student response rates ranged from 66 percent to 90 percent, and overall response rates ranged from 60 percent to 82 percent. Among the large urban school districts, the school response rates ranged from 84 percent to 100 percent, student response rates ranged from 63 percent to 89 percent, and overall response rates ranged from 61 percent to 89 percent.
For the 2015 YRBS, data from 37 states and 19 large urban districts were weighted. In 36 states and all large urban school districts, weighted estimates are representative of all students in grades 9–12 attending public schools in each jurisdiction. In one state (South Dakota), weighted estimates are representative of all students in grades 9–12 attending public and private schools. Student sample sizes ranged from 1,313 to 55,596 across the states and from 1,052 to 10,419 across the large urban school districts. Among the states, school response rates ranged from 70 percent to 100 percent, student response rates ranged from 64 percent to 90 percent, and overall response rates ranged from 60 percent to 84 percent. Among the large urban school districts, school response rates ranged from 90 percent to 100 percent, student response rates ranged from 66 percent to 88 percent, and overall response rates ranged from 64 percent to 88 percent.
In 2013, a total of 42 states and 21 districts had weighted data. Not all of the districts were contained in the 42 states. For example, California was not one of the 42 states that obtained weighted data, but it contained several districts that did. In sites with weighted data, the student sample sizes for the state and district YRBS ranged from 1,107 to 53,785. School response rates ranged from 70 to 100 percent, student response rates ranged from 60 to 94 percent, and overall response rates ranged from 60 to 87 percent.
Readers should note that reports of these data published by the CDC and in this report do not include percentages for which the denominator includes fewer than 100 unweighted cases.
In 1999, in accordance with changes to the Office of Management and Budget’s standards for the classification of federal data on race and ethnicity, the YRBS item on race/ethnicity was modified. The version of the race and ethnicity question used in 1993, 1995, and 1997 was
How do you describe yourself?
The version used in 1999, 2001, 2003, and in the 2005, 2007, and 2009 state and local district surveys was
How do you describe yourself? (Select one or more responses.)
In the 2005 national survey and in all 2007, 2009, 2011, 2013, and 2015 surveys, race/ethnicity was computed from two questions: (1) “Are you Hispanic or Latino?” (response options were “Yes” and “No”), and (2) “What is your race?” (response options were “American Indian or Alaska Native,” “Asian,” “Black or African American,” “Native Hawaiian or Other Pacific Islander,” or “White”). For the second question, students could select more than one response option. For this report, students were classified as “Hispanic” if they answered “Yes” to the first question, regardless of how they answered the second question. Students who answered “No” to the first question and selected more than one race in the second question were classified as “More than one race.” Students who answered “No” to the first question and selected only one race were classified as that race. Race/ethnicity was classified as missing for students who did not answer the first question and for students who answered “No” to the first question but did not answer the second question.
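The two-question classification rule described above can be expressed as a short decision function. This is an illustrative sketch of the rule as stated in the text, not CDC's actual processing code, and the function and variable names are hypothetical.

```python
def classify(hispanic_answer, races_selected):
    """Apply the two-question race/ethnicity rule described in the text.
    hispanic_answer: "Yes", "No", or None (unanswered).
    races_selected: list of selected race options, or None (unanswered).
    Returns the category, or None for missing."""
    if hispanic_answer is None:
        return None  # first question unanswered -> missing
    if hispanic_answer == "Yes":
        return "Hispanic"  # regardless of the race question
    if not races_selected:
        return None  # "No" but race question unanswered -> missing
    if len(races_selected) > 1:
        return "More than one race"
    return races_selected[0]

print(classify("Yes", ["Asian"]))          # Hispanic
print(classify("No", ["Asian", "White"]))  # More than one race
print(classify("No", ["White"]))           # White
print(classify("No", None))                # None (missing)
```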
CDC has conducted two studies to understand the effect of changing the race/ethnicity item on the YRBS. Brener, Kann, and McManus (Public Opinion Quarterly, 67: 227–236, 2003) found that allowing students to select more than one response to a single race/ethnicity question on the YRBS had only a minimal effect on reported race/ethnicity among high school students. Eaton, Brener, Kann, and Pittman (Journal of Adolescent Health, 41: 488–494, 2007) found that self-reported race/ethnicity was similar regardless of whether the single-question or a two-question format was used.
Further information on the YRBSS may be obtained from
Division of Adolescent and School Health
National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention
Centers for Disease Control and Prevention
1600 Clifton Road
Atlanta, GA 30329