Digest of Education Statistics: 2015

NCES 2016-014
December 2016

Appendix A.4. Centers for Disease Control and Prevention

National Health Interview Survey

The National Health Interview Survey (NHIS) is the principal source of information on the health of the civilian noninstitutionalized population of the United States and is one of the major data collection programs of the National Center for Health Statistics (NCHS), which is part of the Centers for Disease Control and Prevention (CDC). The main objective of the NHIS is to monitor the health of the U.S. population through the collection and analysis of data on a broad range of health topics. A major strength of this survey lies in its ability to display these health characteristics by many demographic and socioeconomic characteristics.

The NHIS covers the civilian noninstitutionalized population residing in the United States at the time of the interview. The NHIS is a cross-sectional household interview survey. Sampling and interviewing are continuous throughout each year. The sampling plan follows a multistage area probability design that permits the representative sampling of households and noninstitutional group quarters (e.g., college dormitories). The sampling plan is redesigned after every decennial census. The current sampling plan was implemented in 2006. It is similar in many ways to the previous sampling plan, which was in place from 1995 to 2005. The first stage of the current sampling plan consists of a sample of 428 primary sampling units (PSUs) drawn from approximately 1,900 geographically defined PSUs that cover the 50 states and the District of Columbia. A PSU consists of a county, a small group of contiguous counties, or a metropolitan statistical area.
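
As a rough sketch of the first-stage selection described above, the example below draws 428 PSUs from a hypothetical frame of roughly 1,900 units with probability proportional to a population-based measure of size, using systematic PPS selection. The frame fields and the selection routine are illustrative assumptions; they are not the NCHS production procedure, which layers stratification and other design features on top of this idea.

import random

def pps_systematic(frame, n):
    """Select n units with probability proportional to 'size' (systematic PPS).
    A unit larger than the sampling interval can be selected more than once."""
    total = sum(unit["size"] for unit in frame)
    step = total / n
    start = random.uniform(0, step)
    points = [start + k * step for k in range(n)]
    chosen, cumulative, i = [], 0.0, 0
    for unit in frame:
        cumulative += unit["size"]
        while i < n and points[i] <= cumulative:
            chosen.append(unit)
            i += 1
    return chosen

# Hypothetical frame of about 1,900 county-based PSUs with population counts.
frame = [{"psu_id": k, "size": random.randint(10_000, 2_000_000)} for k in range(1900)]
sample = pps_systematic(frame, 428)
print(len(sample), "PSUs selected")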

The revised NHIS questionnaire, implemented since 1997, contains Core questions and Supplements. The Core questions remain largely unchanged from year to year and allow for trends analysis and for data from more than one year to be pooled to increase sample size for analytic purposes. The Core contains four major components: Household, Family, Sample Adult, and Sample Child.

The Household component collects limited demographic information on all of the individuals living in a particular house. The Family component verifies and collects additional demographic information on each member of each family in the house and collects data on topics including health status and limitations, injuries, healthcare access and utilization, health insurance, and income and assets. The Family Core component allows the NHIS to serve as a sampling frame for additional integrated surveys as needed.

Data are collected through a personal household interview conducted by interviewers employed and trained by the U.S. Bureau of the Census according to procedures specified by the NCHS.

Further information on the NHIS may be obtained from

Information Dissemination Staff
National Center for Health Statistics
Centers for Disease Control and Prevention
3311 Toledo Road, Room 5407
Hyattsville, MD 20782-2003
(800) 232-4636
nhis@cdc.gov
http://www.cdc.gov/nchs/nhis.htm

Morbidity and Mortality Weekly Report: Summary of Notifiable Diseases

The Summary of Notifiable Diseases, a publication of the Morbidity and Mortality Weekly Report (MMWR), contains the official statistics, in tabular and graphic form, for the reported occurrence of nationally notifiable infectious diseases in the United States. These statistics are collected and compiled from reports sent by state health departments and territories to the National Notifiable Diseases Surveillance System (NNDSS), which is operated by the Centers for Disease Control and Prevention (CDC) in collaboration with the Council of State and Territorial Epidemiologists.

For more information on the MMWR: Summary of Notifiable Diseases, see http://www.cdc.gov/mmwr/mmwr_nd/.

National Vital Statistics System

The National Vital Statistics System (NVSS) is the method by which data on vital events—births, deaths, marriages, divorces, and fetal deaths—are provided to the National Center for Health Statistics (NCHS), part of the Centers for Disease Control and Prevention (CDC). The data are provided to NCHS through the Vital Statistics Cooperative Program (VSCP). In 1984 and earlier years, the VSCP included varying numbers of states that provided data based on a 100 percent sample of their birth certificates. Data for states not in the VSCP were based on a 50 percent sample of birth certificates filed in those states. Population data used to compile birth rates are based on special estimation procedures and are not actual counts.

Race and Hispanic ethnicity are reported separately in the NVSS. Data are available for non-Hispanic Whites and non-Hispanic Blacks for 1990 and later; however, for 1980 and 1985, data for Whites and Blacks may include persons of Hispanic ethnicity. For all years, Asian/Pacific Islander and American Indian/Alaska Native categories include persons of Hispanic ethnicity.

For more information on the NCHS and the NVSS, see http://www.cdc.gov/nchs/nvss.htm.

School-Associated Violent Deaths Study

The School-Associated Violent Deaths Study (SAVD) is an epidemiological study developed by the Centers for Disease Control and Prevention in conjunction with the U.S. Department of Education and the U.S. Department of Justice. SAVD seeks to describe the epidemiology of school-associated violent deaths, identify common features of these deaths, estimate the rate of school-associated violent death in the United States, and identify potential risk factors for these deaths. The study includes descriptive data on all school-associated violent deaths in the United States, including all homicides, suicides, and deaths resulting from legal intervention in which the fatal injury occurred on the campus of a functioning elementary or secondary school; while the victim was on the way to or from regular sessions at such a school; or while attending or on the way to or from an official school-sponsored event. Victims of such incidents include nonstudents, as well as students and staff members. SAVD includes descriptive information about the school, event, victim(s), and offender(s). The SAVD study has collected data since July 1, 1992.

SAVD uses a four-step process to identify and collect data on school-associated violent deaths. Cases are initially identified through a search of the LexisNexis newspaper and media database. Then law enforcement officials are contacted to confirm the details of the case and to determine if the event meets the case definition. Once a case is confirmed, a law enforcement official and a school official are interviewed regarding details about the school, event, victim(s), and offender(s). A copy of the full law enforcement report is also sought for each case. The information obtained on schools includes school demographics, attendance/absentee rates, suspensions/expulsions and mobility, school history of weapon-carrying incidents, security measures, violence prevention activities, school response to the event, and school policies about weapon carrying. Event information includes the location of injury, the context of injury (while classes were being held, during break, etc.), motives for injury, method of injury, and school and community events happening around the time period. Information obtained on victim(s) and offender(s) includes demographics, circumstances of the event (date/time, alcohol or drug use, number of persons involved), types and origins of weapons, criminal history, psychological risk factors, school-related problems, extracurricular activities, and family history, including structure and stressors.

Some 105 school-associated violent deaths were identified from July 1, 1992, to June 30, 1994 (Kachur et al., 1996, School-Associated Violent Deaths in the United States, 1992 to 1994, Journal of the American Medical Association, 275: 1729–1733). A more recent report from this data collection identified 253 school-associated violent deaths between July 1, 1994, and June 30, 1999 (Anderson et al., 2001, School-Associated Violent Deaths in the United States, 1994–1999, Journal of the American Medical Association, 286: 2695–2702). Other publications from this study have described how the number of events changes during the school year (Centers for Disease Control and Prevention, 2001, Temporal Variations in School-Associated Student Homicide and Suicide Events—United States, 1992–1999, Morbidity and Mortality Weekly Report, 50: 657–660), the source of the firearms used in these events (Reza et al., 2003, Source of Firearms Used by Students in School-Associated Violent Deaths—United States, 1992–1999, Morbidity and Mortality Weekly Report, 52: 169–172), and suicides that were associated with schools (Kauffman et al., 2004, School-Associated Suicides—United States, 1994–1999, Morbidity and Mortality Weekly Report, 53: 476–478). The most recent publication describes trends in school-associated homicide from July 1, 1992, to June 30, 2006 (Centers for Disease Control and Prevention, 2008, School-Associated Student Homicides—United States, 1992–2006, Morbidity and Mortality Weekly Report, 57: 33–36). The interviews conducted on cases between July 1, 1994, and June 30, 1999, achieved a response rate of 97 percent for police officials and 78 percent for school officials. For several reasons, all data for years from 1999 to the present are flagged as preliminary. For some recent data, the interviews with school and law enforcement officials to verify case details have not been completed. The details learned during the interviews can occasionally change the classification of a case. Also, new cases may be identified because of the expansion of the scope of the media files used for case identification. Other cases not identified in earlier data years are sometimes discovered through independent case-finding efforts (which focus on nonmedia sources of information). Also, other cases may occasionally be identified while the law enforcement and school interviews are being conducted to verify known cases.

Further information on SAVD may be obtained from

Jeff Hall
Division of Violence Prevention
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
4770 Buford Highway NE
Mailstop F63
Atlanta, GA 30341-3742
(770) 488-4648
JHall2@CDC.gov
http://www.cdc.gov/violenceprevention/index.html

Web-based Injury Statistics Query and Reporting System (WISQARS) Fatal

WISQARS Fatal provides mortality data related to injury. The mortality data reported in WISQARS Fatal come from death certificate data reported to the National Center for Health Statistics (NCHS), Centers for Disease Control and Prevention. Data include causes of death reported by attending physicians, medical examiners, and coroners, as well as demographic information about decedents reported by funeral directors, who obtain that information from family members and other informants. NCHS collects, compiles, verifies, and prepares these data for release to the public. The data provide information about unintentional injury, homicide, and suicide as leading causes of death, how common these causes of death are, and whom they affect. These data are intended for a broad audience—the public, the media, public health practitioners and researchers, and public health officials—to increase their knowledge of injury.

WISQARS Fatal mortality reports provide tables of the total numbers of injury-related deaths and the death rates per 100,000 U.S. population. The reports list deaths according to cause (mechanism) and intent (manner) of injury by state, race, Hispanic origin, sex, and age groupings.
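
As a simple illustration of how such rates are expressed, the sketch below computes an injury-related death rate per 100,000 population from a hypothetical death count and population figure; the numbers are invented and are not WISQARS data.

def rate_per_100k(deaths, population):
    """Deaths per 100,000 population, the form used in WISQARS Fatal mortality reports."""
    return deaths / population * 100_000

# Hypothetical values for illustration only.
print(round(rate_per_100k(1_250, 8_400_000), 1))  # 14.9 deaths per 100,000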

Further information on WISQARS Fatal may be obtained from

National Center for Injury Prevention and Control
Mailstop K65
4770 Buford Highway NE
Atlanta, GA 30341-3724
(770) 488-1506
ohcinfo@cdc.gov
www.cdc.gov/info
http://www.cdc.gov/injury/wisqars/fatal_help/data_sources.html

Youth Risk Behavior Surveillance System

The Youth Risk Behavior Surveillance System (YRBSS) is an epidemiological surveillance system developed by the Centers for Disease Control and Prevention (CDC) to monitor the prevalence of youth behaviors that most influence health. The YRBSS focuses on priority health-risk behaviors established during youth that result in the most significant mortality, morbidity, disability, and social problems during both youth and adulthood. The YRBSS includes a national school-based Youth Risk Behavior Survey (YRBS), as well as surveys conducted in states and large urban school districts.

The national YRBS uses a three-stage cluster sampling design to produce a nationally representative sample of students in grades 9–12 in the United States. The target population consisted of all public and private school students in grades 9–12 in the 50 states and the District of Columbia. In the first sampling stage, primary sampling units (PSUs) were selected from strata formed on the basis of urbanization and the relative percentage of Black and Hispanic students in the PSU. These PSUs are either counties; subareas of large counties; or groups of smaller, adjacent counties. At the second stage, schools were selected with probability proportional to school enrollment size.

The final stage of sampling consisted of randomly selecting, in each chosen school and in each of grades 9–12, one or two classrooms from either a required subject, such as English or social studies, or a required period, such as homeroom or second period. All students in selected classes were eligible to participate. In surveys conducted before 2013, three strategies were used to oversample Black and Hispanic students: (1) larger sampling rates were used to select PSUs that are in high-Black and high-Hispanic strata; (2) a modified measure of size was used that increased the probability of selecting schools with a disproportionately high minority enrollment; and (3) two classes per grade, rather than one, were selected in schools with a high percentage of combined Black, Hispanic, Asian/Pacific Islander, or American Indian/Alaska Native enrollment. In 2013, only the selection of two classes per grade was needed to achieve adequate precision with minimum variance. Approximately 16,300 students participated in the 1993 survey, 10,900 students participated in the 1995 survey, 16,300 students participated in the 1997 survey, 15,300 students participated in the 1999 survey, 13,600 students participated in the 2001 survey, 15,200 students participated in the 2003 survey, 13,900 students participated in the 2005 survey, 14,000 students participated in the 2007 survey, 16,400 students participated in the 2009 survey, 15,400 students participated in the 2011 survey, and 13,600 students participated in the 2013 survey.
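
The sketch below illustrates, under simplified assumptions, the school and classroom stages described above: schools drawn with probability proportional to enrollment, then one class per grade chosen at random in each sampled school. The frame layout and the weighted draw (sampling with replacement) are illustrative shortcuts, not the CDC's actual selection procedure, which also applied the oversampling adjustments noted above.

import random

def select_schools(schools, n):
    """Draw n schools with probability proportional to enrollment (with replacement)."""
    weights = [school["enrollment"] for school in schools]
    return random.choices(schools, weights=weights, k=n)

def select_classes(school, grades=(9, 10, 11, 12), per_grade=1):
    """Randomly pick per_grade classes in each grade from the school's class lists."""
    return {grade: random.sample(school["classes"][grade], per_grade) for grade in grades}

# Hypothetical school frame for illustration only.
schools = [
    {"name": "School A", "enrollment": 1800,
     "classes": {9: ["9-1", "9-2"], 10: ["10-1", "10-2"],
                 11: ["11-1", "11-2"], 12: ["12-1", "12-2"]}},
    {"name": "School B", "enrollment": 600,
     "classes": {9: ["9-1"], 10: ["10-1"], 11: ["11-1"], 12: ["12-1"]}},
]
for school in select_schools(schools, 1):
    print(school["name"], select_classes(school))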

The overall response rate was 70 percent for the 1993 survey, 60 percent for the 1995 survey, 69 percent for the 1997 survey, 66 percent for the 1999 survey, 63 percent for the 2001 survey, 67 percent for the 2003 survey, 67 percent for the 2005 survey, 68 percent for the 2007 survey, 71 percent for the 2009 survey, 71 percent for the 2011 survey, and 68 percent for the 2013 survey. NCES standards call for response rates of 85 percent or better for cross-sectional surveys, and bias analyses are required by NCES when that percentage is not achieved. For YRBS data, a full nonresponse bias analysis has not been done because the data necessary to do the analysis are not available. The weights were developed to adjust for nonresponse and the oversampling of Black and Hispanic students in the sample. The final weights were constructed so that only weighted proportions of students (not weighted counts of students) in each grade matched national population projections.
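
The final weighting step described above, in which weighted proportions of students in each grade match national population projections, amounts to a poststratification adjustment. The sketch below shows the idea with hypothetical grade targets; the actual CDC weighting procedure is more involved and also incorporates the nonresponse and oversampling adjustments.

def poststratify(records, targets):
    """Rescale weights so weighted grade proportions match target proportions.
    records: dicts with 'grade' and 'weight'; targets: grade -> target proportion."""
    total_weight = sum(r["weight"] for r in records)
    weight_by_grade = {}
    for r in records:
        weight_by_grade[r["grade"]] = weight_by_grade.get(r["grade"], 0.0) + r["weight"]
    for r in records:
        current_share = weight_by_grade[r["grade"]] / total_weight
        r["weight"] *= targets[r["grade"]] / current_share
    return records

# Hypothetical respondents and grade-distribution targets for illustration only.
students = [{"grade": 9, "weight": 1.0}, {"grade": 10, "weight": 1.0},
            {"grade": 11, "weight": 1.0}, {"grade": 12, "weight": 2.0}]
targets = {9: 0.27, 10: 0.26, 11: 0.24, 12: 0.23}
poststratify(students, targets)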

State-level data were downloaded from the Youth Online: Comprehensive Results web page (http://nccd.cdc.gov/YouthOnline/). Each state and district school-based YRBS employs a two-stage, cluster sample design to produce representative samples of students in grades 9–12 in their jurisdiction. All except a few state samples, and all district samples, include only public schools, and each district sample includes only schools in the funded school district (e.g., San Diego Unified School District) rather than in the entire city (e.g., greater San Diego area).

In the first sampling stage in all except a few states and districts, schools are selected with probability proportional to school enrollment size. In the second sampling stage, intact classes of a required subject or intact classes during a required period (e.g., second period) are selected randomly. All students in sampled classes are eligible to participate. Certain states and districts modify these procedures to meet their individual needs. For example, in a given state or district, all schools, rather than a sample of schools, might be selected to participate. State and local surveys that have a scientifically selected sample, appropriate documentation, and an overall response rate greater than or equal to 60 percent are weighted. The overall response rate reflects the school response rate multiplied by the student response rate. These three criteria are used to ensure that the data from those surveys can be considered representative of students in grades 9–12 in that jurisdiction. A weight is applied to each record to adjust for student nonresponse and the distribution of students by grade, sex, and race/ethnicity in each jurisdiction. Therefore, weighted estimates are representative of all students in grades 9–12 attending schools in each jurisdiction. Surveys that do not meet these criteria are not weighted and are not included in this report.
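
The overall response rate and the weighting criterion described above reduce to simple arithmetic. The sketch below uses hypothetical site values and field names; it is not CDC code.

def overall_response_rate(school_rate, student_rate):
    """Overall response rate is the school response rate times the student response rate."""
    return school_rate * student_rate

def eligible_for_weighting(site):
    """Weighting requires a scientific sample, documentation, and an overall rate of at least 60 percent."""
    rate = overall_response_rate(site["school_rate"], site["student_rate"])
    return site["scientific_sample"] and site["documented"] and rate >= 0.60

site = {"school_rate": 0.82, "student_rate": 0.78,
        "scientific_sample": True, "documented": True}
print(round(overall_response_rate(site["school_rate"], site["student_rate"]), 3))  # 0.64, about 64 percent
print(eligible_for_weighting(site))                                                # True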

In 2013, a total of 42 states and 21 districts had weighted data. Not all of the districts were contained in the 42 states. For example, California was not one of the 42 states that obtained weighted data, but it contained several districts that did. For more information on the location of the districts, please see http://www.cdc.gov/healthyyouth/yrbs/participation.htm. In sites with weighted data, the student sample sizes for the state and district YRBS ranged from 1,107 to 53,785. School response rates ranged from 70 to 100 percent, student response rates ranged from 60 to 94 percent, and overall response rates ranged from 60 to 87 percent.

Readers should note that reports of these data published by the CDC and in this report do not include percentages for which the denominator includes fewer than 100 unweighted cases.

In 1999, in accordance with changes to the Office of Management and Budget's standards for the classification of federal data on race and ethnicity, the YRBS item on race/ethnicity was modified. The version of the race and ethnicity question used in 1993, 1995, and 1997 was

How do you describe yourself?

  1. White—not Hispanic
  2. Black—not Hispanic
  3. Hispanic or Latino
  4. Asian or Pacific Islander
  5. American Indian or Alaskan Native
  6. Other

The version used in 1999, 2001, 2003, and in the 2005, 2007, and 2009 state and local district surveys was

How do you describe yourself? (Select one or more responses.)

  1. American Indian or Alaska Native
  2. Asian
  3. Black or African American
  4. Hispanic or Latino
  5. Native Hawaiian or Other Pacific Islander
  6. White

In the 2005 national survey and in all 2007, 2009, 2011, and 2013 surveys, race/ethnicity was computed from two questions: (1) "Are you Hispanic or Latino?" (response options were "Yes" and "No"), and (2) "What is your race?" (response options were "American Indian or Alaska Native," "Asian," "Black or African American," "Native Hawaiian or Other Pacific Islander," or "White"). For the second question, students could select more than one response option. For this report, students were classified as "Hispanic" if they answered "Yes" to the first question, regardless of how they answered the second question. Students who answered "No" to the first question and selected more than one response to the second question were classified as "More than one race." Students who answered "No" to the first question and selected only one response to the second question were classified as that race. Race/ethnicity was classified as missing for students who did not answer the first question and for students who answered "No" to the first question but did not answer the second question.
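
The classification rules in the preceding paragraph can be summarized as a small decision procedure. The sketch below is illustrative only; the input conventions (None for an unanswered question, a list of the selected race responses) are assumptions, not the CDC's processing code.

def classify_race_ethnicity(hispanic, races):
    """hispanic: True, False, or None if unanswered; races: list of selected races, or None."""
    if hispanic is None:
        return "Missing"                  # first question unanswered
    if hispanic:
        return "Hispanic"                 # regardless of the race question
    if not races:
        return "Missing"                  # answered "No" but skipped the race question
    if len(races) > 1:
        return "More than one race"
    return races[0]                       # single race selected

print(classify_race_ethnicity(True, ["Asian", "White"]))               # Hispanic
print(classify_race_ethnicity(False, ["Black or African American"]))   # Black or African American
print(classify_race_ethnicity(False, None))                            # Missing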

CDC has conducted two studies to understand the effect of changing the race/ethnicity item on the YRBS. Brener, Kann, and McManus (Public Opinion Quarterly, 67:227–236, 2003) found that allowing students to select more than one response to a single race/ethnicity question on the YRBS had only a minimal effect on reported race/ethnicity among high school students. Eaton, Brener, Kann, and Pittman (Journal of Adolescent Health, 41:488–494, 2007) found that self-reported race/ethnicity was similar regardless of whether a single-question or a two-question format was used.

Further information on the YRBSS may be obtained from

Laura Kann
Division of Adolescent and School Health
National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention
Centers for Disease Control and Prevention
Mailstop E-75
1600 Clifton Road NE
Atlanta, GA 30329
(404) 718-8132
lkk1@cdc.gov
http://www.cdc.gov/yrbs