
General Information

    The information presented in this report was obtained from many data sources, including databases from the National Center for Education Statistics (NCES), the Centers for Disease Control and Prevention (CDC), the Bureau of Justice Statistics (BJS), and the Survey Research Center (SRC) of the University of Michigan. While some of the data were collected from universe surveys, most were gathered by sample surveys. Some questions from different surveys may appear the same, but they were actually asked of different populations of students (e.g., high school seniors or students in grades 9 through 12); in different years; about experiences that occurred within different periods of time (e.g., in the past 4 weeks or during the past 12 months); and at different locations (e.g., in school or at home). Readers of this report should take particular care when comparing data from the different data sources. Because of the variation in collection procedures, timing, phrasing of questions, and so forth, the results from the different sources may not be strictly comparable. The sections below introduce the data sources used for this report, discuss the accuracy of estimates, and describe the statistical procedures used.
 

Sources of Data

    Table B1 presents some key information for each of the data sets used in the report, including the survey year, target population, response rates, and sample sizes. The remainder of this section briefly describes each data set and provides directions for obtaining more information. The exact wording of the interview questions used to construct the indicators is presented in table B2.
 

National Household Education Survey (NHES)

    The National Household Education Survey (NHES) is a data collection system of the National Center for Education Statistics (NCES) that provides descriptive data on the condition of education in the United States. It has been conducted in 1991, 1993, 1995, and 1996. For each year, the survey covered two substantive components addressing education-related topics. One topic that the 1993 survey focused on was school safety and discipline, covering information on the school learning environment, discipline policy, safety at school, victimization, availability and use of alcohol/drugs, and alcohol/drug education. Unlike traditional student- or school-based data collections, the NHES collected data from households. The data collection involved a three-stage process. First, using random digit dialing (RDD) telephone survey methods, a representative sample of households in the 50 states and the District of Columbia was selected. Within these households, members were then screened to identify individuals who met predetermined criteria. Finally, eligible persons were given detailed or extended interviews by computer-assisted telephone interview (CATI) procedures. Two groups of individuals completed interviews for the School Safety and Discipline component of NHES:93: 12,680 parents of children enrolled in grades 3 through 12, and 6,504 students enrolled in grades 6 through 12. This report focuses only on the responses of students in grades 6 through 12; the overall weighted student response rate was 68 percent. The item nonresponse rate was generally low, and items with missing data were imputed. As a result, no missing data remain in the data set. For additional information about the School Safety and Discipline component of NHES:93, refer to J.M. Brick, M. Collins, M.J. Nolin, P. Ha, M. Levinsohn, and K. Chandler, 1994, National Household Education Survey of 1993, School Safety and Discipline Data File User's Manual (NCES 94-193), or contact:

    Kathryn A. Chandler

    1990 K Street NW
    Washington, DC 20006
    Telephone: (202) 502-7486
    E-mail: Kathryn.Chandler@ed.gov
 

Schools and Staffing Survey (SASS)

    This report draws upon data on teacher victimization from the 1993-94 Schools and Staffing Survey (SASS:93-94), which provides national- and state-level data on public and private schools, principals, school districts, and teachers. The 1993-94 survey was the third in a series of cross-sectional school-focused surveys, following ones conducted in 1990-91 and 1987-88. It consisted of four sets of linked questionnaires, including surveys of schools, the principals of each selected school, a subsample of teachers within each school, and public school districts. Data were collected by multistage sampling. Stratified by state, control, type, association membership, and grade level (for private schools), schools were sampled first. Approximately 9,900 public schools and 3,300 private schools were selected to participate in the 1993-94 SASS. Within each school, teachers were further stratified into one of five teacher types in the following hierarchy: 1) Asian or Pacific Islander; 2) American Indian, Aleut, or Eskimo; 3) bilingual/ESL; 4) new teachers; and 5) experienced teachers. Within each teacher stratum, teachers were selected systematically with equal probability. Approximately 56,700 public school teachers and 11,500 private school teachers were sampled.

    This report focuses on teachers' responses. The overall weighted response rates were 84 percent for public school teachers and 73 percent for private school teachers. In the Public School Teacher Questionnaire, 91 percent of the items had a response rate of 90 percent or more, and in the Private School Teacher Questionnaire, 89 percent of the items had this level of response. Values were imputed for questionnaire items that should have been answered but were not. For additional information about SASS, refer to R. Abramson, C. Cole, S. Fondelier, B. Jackson, R. Parmer, and S. Kaufman, 1996, 1993-94 Schools and Staffing Survey: Sample Design and Estimation (NCES 96-089), or contact:

    Kerry Gruber

    1990 K Street NW
    Washington, DC 20006
    Telephone: (202) 502-7349
    E-mail: Kerry.Gruber@ed.gov
 

National School-Based Youth Risk Behavior Survey (YRBS)

    The National School-Based Youth Risk Behavior Survey (YRBS) is one component of the Youth Risk Behavior Surveillance System (YRBSS), an epidemiological surveillance system that was developed by the Centers for Disease Control and Prevention (CDC) to monitor the prevalence of youth behaviors that most influence health. The YRBS focuses on priority health-risk behaviors established during youth that result in the most significant mortality, morbidity, disability, and social problems during both youth and adulthood. This report uses 1993, 1995, and 1997 YRBS data.

    The YRBS used a three-stage cluster sampling design to produce a nationally representative sample of 9th- through 12th-grade students in the United States. The target population consisted of all public and private school students in grades 9 through 12 in the 50 states and the District of Columbia. The first stage involved selecting primary sampling units (PSUs) from strata formed on the basis of urbanization and the relative percentage of black and Hispanic students in the PSU. These PSUs are either large counties or groups of smaller, adjacent counties. At the second stage, schools were selected with probability proportional to school enrollment size. Schools with substantial numbers of black and Hispanic students were sampled at relatively higher rates than all other schools. The final stage of sampling consisted of randomly selecting, within each chosen school, one or two intact classes of a required subject (such as English or social studies) at each grade from 9 through 12. All students in selected classes were eligible to participate. Approximately 16,300, 10,900, and 16,300 students were selected to participate in the 1993, 1995, and 1997 surveys, respectively.

    The overall response rate was 70 percent for the 1993 survey, 60 percent for the 1995 survey, and 69 percent for the 1997 survey. The weights were developed to adjust for nonresponse and the oversampling of black and Hispanic students in the sample. The final weights were normalized so that only weighted proportions of students (not weighted counts of students) in each grade matched national population projections. For additional information about the YRBS, contact:

    Laura Kann
    Division of Adolescent and School Health
    National Center for Chronic Disease Prevention and Health Promotion
    Centers for Disease Control and Prevention, Mailstop K-33
    4770 Buford Highway NE
    Atlanta, Georgia 30341
    Telephone: (404) 488-5330
 

Fast Response Survey System: Principal/School Disciplinarian Survey on School Violence

    The Principal/School Disciplinarian Survey was conducted through the NCES Fast Response Survey System (FRSS) during the spring and summer of 1997. Generally, the FRSS is a survey system designed to collect small amounts of issue-oriented data with minimal burden on respondents and within a relatively short time frame. The FRSS Principal/School Disciplinarian Survey focused on incidents of specific crimes/offenses and a variety of specific discipline issues in public schools. The survey was conducted with a nationally representative sample of regular public elementary, middle, and high schools in the 50 states and the District of Columbia. Special education, alternative, and vocational schools, schools in the territories, and schools that taught only prekindergarten, kindergarten, or adult education were not included in the sample.

    The sample of public schools was selected from the 1993-94 NCES Common Core of Data (CCD) Public School Universe File. The sample was stratified by instructional level, locale, and school size. Within the primary strata, schools were also sorted by geographic region and by percent minority enrollment. The sample sizes were then allocated to the primary strata in rough proportion to the aggregate square root of the size of enrollment of schools in the stratum. A total of 1,415 schools were selected. Among them, 11 schools were found no longer to be in existence, and 1,234 schools completed the survey. In April 1997, questionnaires were mailed to school principals, who were asked to complete the survey or to have it completed by the person most knowledgeable about discipline issues at the school. The raw response rate was 88 percent (1,234 schools divided by the 1,404 eligible schools in the sample). The weighted overall response rate was 89 percent, and item nonresponse rates ranged from 0 percent to 0.9 percent. The weights were developed to adjust for the variable probabilities of selection and differential nonresponse and can be used to produce national estimates for regular public schools in the 1996-97 school year. For more information about the FRSS: Principal/School Disciplinarian Survey on School Violence, contact:
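    As a quick check of the response-rate arithmetic above, the raw rate is simply completed schools divided by eligible schools (all figures copied from the text):

```python
# FRSS Principal/School Disciplinarian Survey response-rate arithmetic,
# using the counts reported in the text.
selected = 1415        # schools originally sampled
out_of_scope = 11      # schools found no longer to be in existence
completed = 1234       # schools that completed the survey

eligible = selected - out_of_scope        # 1,404 eligible schools
raw_response_rate = completed / eligible  # 1,234 / 1,404, about 0.879, i.e., 88 percent
```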

    Shelley Burns

    1990 K Street NW
    Washington, DC 20006
    Telephone: (202) 502-7319
    E-mail: Shelley.Burns@ed.gov
 

National Crime Victimization Survey (NCVS)

    The National Crime Victimization Survey (NCVS), administered for the U.S. Bureau of Justice Statistics by the Bureau of the Census, is the nation's primary source of information on crime victimization and the victims of crime. Initiated in 1972 and redesigned in 1992, the NCVS collects detailed information on the frequency and nature of the crimes of rape, sexual assault, robbery, aggravated and simple assault, theft, household burglary, and motor vehicle theft experienced by Americans and their households each year. The survey measures crimes reported as well as those not reported to police.

    The NCVS sample consists of about 55,000 households, selected using a stratified, multistage cluster design. In the first stage, primary sampling units (PSUs), consisting of counties or groups of counties, are selected. In the second stage, smaller areas, called enumeration districts (EDs), are selected from each sampled PSU. Finally, from selected EDs, clusters of four households, called segments, are selected for interview. At each stage, the selection is done proportionate to population size in order to create a self-weighting sample. The final sample is augmented to account for housing units constructed after the decennial Census. Within each sampled household, Census Bureau personnel interview all household members ages 12 and older to determine whether they had been victimized by the measured crimes during the 6 months preceding the interview. About 90,000 persons ages 12 and older are interviewed every 6 months. Households remain in the sample for 3 years and are interviewed 7 times at 6-month intervals. The initial interview at each sample unit is used only to bound future interviews, establishing a time frame that avoids duplicate counting of crimes uncovered in the subsequent interviews. After their seventh interview, households are replaced by new sample households. The NCVS has consistently obtained a response rate of about 95 percent at the household level. During the study period, the completion rates for persons within households were about 91 percent. Thus, final response rates were about 86 percent. Weights were developed to permit estimates for the total U.S. population 12 years and older. For more information about the NCVS, contact:

    Michael R. Rand
    Victimization Statistics
    U.S. Bureau of Justice Statistics
    810 7th Street NW
    Washington, DC 20531
    Telephone: (202) 616-3494
    E-mail: randm@ojp.usdoj.gov
    Internet: www.ojp.usdoj.gov/bjs/
 

School Crime Supplement (SCS)

    Created as a supplement to the NCVS and co-designed by the National Center for Education Statistics and the Bureau of Justice Statistics, the School Crime Supplement (SCS) survey was conducted in 1989 and 1995 to collect additional information about school-related victimizations on a national level. The survey was designed to assist policymakers, academic researchers, and practitioners at the federal, state, and local levels in making informed decisions concerning crime in schools. The SCS asks students a number of key questions about their experiences with and perceptions of crime and violence that occurred inside their school, on school grounds, or on the way to or from school. The SCS also includes questions not found in the NCVS, such as those concerning preventive measures used by the school, students' participation in afterschool activities, students' perceptions of school rules, the presence of weapons and street gangs in school, and the availability of drugs and alcohol in school, as well as attitudinal questions relating to fear of victimization in school.

    In both 1989 and 1995, the SCS was conducted for a 6-month period from January through June in all households selected for the NCVS (see the discussion above for information about the sampling design). Within these households, the eligible respondents for the SCS were those household members who were between the ages of 12 and 19, had attended school at any time during the 6 months preceding the interview, and were enrolled in a school that would help them advance toward eventually receiving a high school diploma. These persons were asked the supplemental questions in the SCS only after completing their entire NCVS interview. A total of 10,449 students participated in the 1989 SCS, and 9,954 in the 1995 SCS. In the 1989 and 1995 SCS, the household completion rates were 97 percent and 95 percent, respectively, and the student completion rates were 86 percent and 78 percent, respectively. Thus, the overall SCS response rate (calculated by multiplying the household completion rate by the student completion rate) was 83 percent in 1989 and 74 percent in 1995. Response rates for most survey items were high; most items were answered by over 95 percent of all eligible respondents. The weights were developed to compensate for differential probabilities of selection and nonresponse. The weighted data permit inferences about the 12- to 19-year-old student population enrolled in schools in 1989 and 1995. For more information about the SCS, contact:
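    The overall SCS response rates quoted above follow directly from multiplying the two completion rates (a worked check, with the rates taken from the text):

```python
# Overall SCS response rate = household completion rate x student completion rate.
household = {1989: 0.97, 1995: 0.95}
student = {1989: 0.86, 1995: 0.78}
overall = {year: household[year] * student[year] for year in household}
# overall[1989] is about 0.83 (83 percent); overall[1995] is about 0.74 (74 percent)
```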

    Kathryn A. Chandler

    1990 K Street NW
    Washington, DC 20006
    Telephone: (202) 502-7486
    E-mail: Kathryn.Chandler@ed.gov
 

Monitoring the Future (MTF)

    Monitoring the Future (MTF): A Continuing Study of American Youth is an annual, ongoing survey conducted by the University of Michigan's Institute for Social Research to study changes in important values, behaviors, and lifestyle orientations of contemporary American youth. During the spring of each year beginning with the class of 1975, a large, nationally representative sample of high school seniors in the United States has been selected. The selected students are first administered the core questionnaire on drug use and demographics, and then randomly divided into six subgroups, each receiving one form of the questionnaire with a different subset of questions, addressing such topics as their attitudes toward education, social problems, occupational aims, marital and family plans, or deviant behavior and victimization.

    The sample selection involves three stages. The first stage selects geographic areas or primary sampling units (PSUs). These PSUs are developed by the Sampling Section of the Survey Research Center for use in the Center's nationwide interview studies. In the second stage, schools within PSUs are selected with a probability proportionate to the size of their senior class. In the third stage, up to about 400 seniors within each selected school are sampled. Each year, about 130 schools participate in the survey, and from these schools, about 16,000 high school seniors complete questionnaires. These students are divided into six subsamples consisting of an average of 2,700 respondents, and each subsample is administered a different form of the questionnaire. Since the inception of the study, the participation rate among schools has been between 60 and 80 percent, and the student response rate has been between 77 and 86 percent. For more information about Monitoring the Future, contact:

    Survey Research Center
    Institute for Social Research
    The University of Michigan
    Ann Arbor, MI 48109
 

Data Source for School-Associated Violent Deaths

    This report draws upon data concerning school-associated violent deaths from an article entitled "School-Associated Violent Deaths in the United States, 1992 to 1994," published in the Journal of the American Medical Association in 1996.[5] Using a descriptive case study methodology, the study was the first nationwide investigation of violent deaths associated with schools conducted in the United States. A "school-associated violent death" was defined as a homicide or suicide in which the fatal injury occurred on the campus of a functioning elementary or secondary school in the United States, while the victim was on the way to or from regular class sessions at such a school, or while the victim was attending or traveling to or from an official school-sponsored event. The cases included the deaths of students and staff members as well as nonstudents. The investigation focused on deaths that occurred from July 1, 1992 through June 30, 1994.

    A total of 105 school-associated violent deaths were identified by the following sequential procedures: 1) tracking fatalities through a newspaper clipping service and informal voluntary reports from state and local education officers; 2) searching two computerized newspaper and broadcast media databases; 3) interviewing local press, law enforcement officers, or school officials who were familiar with each case; and 4) once cases were identified, obtaining further information about the deaths from official sources.

 

Accuracy of Estimates

    The accuracy of any statistic is determined by the joint effects of "nonsampling" and "sampling" errors. Both types of error affect the estimates presented in this report. Several sources can contribute to nonsampling errors. For example, members of the population of interest are inadvertently excluded from the sampling frame; sampled members refuse to answer some of the survey questions (item nonresponse) or all of the survey questions (questionnaire nonresponse); mistakes are made during data editing, coding, or entry; the responses that respondents provide differ from the "true" responses; or measurement instruments such as tests or questionnaires fail to measure the characteristics they are intended to measure. Although nonsampling errors due to questionnaire and item nonresponse can be reduced somewhat by the adjustment of sample weights and imputation procedures, correcting nonsampling errors or gauging the effects of these errors is usually difficult.

    Sampling errors occur because observations are made on samples rather than on entire populations; surveys of population universes are not subject to sampling errors. Estimates based on a sample will differ somewhat from those that would have been obtained by a complete census of the relevant population using the same survey instruments, instructions, and procedures. The standard error of a statistic is a measure of the variation due to sampling; it indicates the precision of the statistic obtained in a particular sample. In addition, the standard errors for two sample statistics can be used to estimate the precision of the difference between the two statistics and to help determine whether the difference based on the sample is large enough to reflect a real difference in the population.

    Most of the data used in this report were obtained from complex sampling designs rather than a simple random design. In these sampling designs, data were collected through stratification, clustering, unequal selection probabilities, or multistage sampling. These features of the sampling usually result in estimated statistics that are more variable (that is, have larger standard errors) than they would have been if they had been based on data from a simple random sample of the same size. Therefore, calculation of standard errors requires procedures that are markedly different from the ones used when the data are from a simple random sample. The Taylor series approximation technique or the balanced repeated replication (BRR) method was used to estimate most of the statistics and their standard errors in this report. Table B3 lists the various methods used to compute standard errors for different data sets.
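    The BRR calculation can be sketched as follows. This is a minimal illustration that assumes the half-sample replicate estimates have already been computed with their replicate weights; the replicate values below are invented for the example, not taken from any of the surveys.

```python
import math

def brr_standard_error(full_estimate, replicate_estimates):
    """Balanced repeated replication (BRR): the variance of a statistic is
    estimated as the average squared deviation of each balanced half-sample
    replicate estimate from the full-sample estimate."""
    n = len(replicate_estimates)
    variance = sum((r - full_estimate) ** 2 for r in replicate_estimates) / n
    return math.sqrt(variance)

# Illustration: a full-sample percentage of 14.0 with 8 replicate estimates.
replicate_pcts = [13.2, 14.6, 14.1, 13.5, 14.9, 13.8, 14.3, 13.6]
se = brr_standard_error(14.0, replicate_pcts)
```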

    Standard error calculation for data from the National Crime Victimization Survey, the School Crime Supplement, and Monitoring the Future relied on a different procedure. For statistics based on the NCVS and the SCS data, standard errors were derived from a formula developed by the Census Bureau, which consists of three generalized variance function (GVF) constant parameters that represent the curve fitted to the individual standard errors calculated using the Jackknife Repeated Replication technique. The formulas used to compute the adjusted standard errors associated with percentages or population counts can be found in table B3.
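    A GVF-based standard error can be sketched as below. The functional form shown (variance of a percentage proportional to p(100 - p) divided by its base) is the usual shape of the Census Bureau's GVF for percentages, but the parameter value used here is a placeholder for illustration, not one of the actual published NCVS parameters (those are given in table B3 and the survey documentation).

```python
import math

def gvf_se_percentage(p, base, b):
    """Generalized variance function (GVF) standard error for a percentage
    p (on a 0-100 scale) with denominator `base`. The constant parameter b
    is fitted by the Census Bureau to replicate-based standard errors; the
    value passed below is a placeholder, not a real NCVS parameter."""
    variance = (b / base) * p * (100.0 - p)
    return math.sqrt(variance)

# Hypothetical: 12 percent of a base of 20 million students, placeholder b.
se = gvf_se_percentage(p=12.0, base=20_000_000, b=2500.0)
```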

    For the statistics based on the Monitoring the Future data, their standard errors were derived from the published tables of confidence intervals in appendix A (pp. 313-322) of Monitoring the Future: Questionnaire Responses from the Nation's High School Seniors, 1995, by Lloyd D. Johnston, Jerald G. Bachman, and Patrick M. O'Malley, Survey Research Center, Institute for Social Research, the University of Michigan, 1997. Generally, the table entries, when added to and subtracted from the observed percentage, establish the 95 percent confidence interval. The appendix presents specific guidelines for using the tables of confidence intervals and conducting statistical tests for the difference between two percentages.

 

Statistical Procedures

    The comparisons in the text have been tested for statistical significance to ensure that the differences are larger than might be expected due to sampling variations. Unless otherwise noted, all statements cited in the report are statistically significant at the .05 level. Several test procedures were used, depending upon the type of data being analyzed and the nature of the statement being tested. The primary test procedure used in this report was the Student's t statistic, which tests the difference between two sample estimates, for example, between males and females. The formula used to compute the t statistic is as follows:

    t = (E1 - E2) / sqrt(se1^2 + se2^2)

    where E1 and E2 are the estimates to be compared and se1 and se2 are their corresponding standard errors. Note that this formula is valid only for independent estimates. When the estimates are not independent (for example, when comparing a total percentage with that for a subgroup included in the total), a covariance term (i.e., 2*se1*se2) must be added to the denominator of the formula.
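    In code, the test statistic and the 1.96 cutoff for a single comparison can be written as follows (a sketch; the estimates and standard errors below are hypothetical):

```python
import math

def t_statistic(e1, e2, se1, se2, covariance_term=0.0):
    """Student's t for the difference between two sample estimates.
    For independent estimates the covariance term is zero; for dependent
    estimates, the covariance adjustment described in the text must be
    included in the denominator."""
    return (e1 - e2) / math.sqrt(se1 ** 2 + se2 ** 2 + covariance_term)

# Hypothetical estimates: 15.0 percent (se 1.2) vs. 11.4 percent (se 1.0).
t = t_statistic(15.0, 11.4, 1.2, 1.0)
significant = abs(t) > 1.96  # .05 level for a single comparison
```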

    Once the t value was computed, it was compared with published tables of critical values at a given significance (alpha) level. For this report, an alpha level of 0.05 was used, which corresponds to a critical t value of 1.96. If the computed t value was larger than 1.96, the difference between the two estimates was statistically significant at the .05 level.

    When multiple comparisons between more than two groups were made, for example, between racial/ethnic groups, a Bonferroni adjustment to the significance level was used to ensure that the significance level for the tests as a group was at the .05 level. Generally, when multiple statistical comparisons are made, it becomes increasingly likely that an indication of a population difference is erroneous. Even when there is no difference in the population, at an alpha of .05, there is still a 5 percent chance of concluding that an observed t value representing one comparison in the sample is large enough to be statistically significant. As the number of comparisons increases, the risk of making such an erroneous inference also increases. The Bonferroni procedure corrects the significance (or alpha) level for the total number of comparisons made within a particular classification variable. For each classification variable, there are (K*(K-1)/2) possible comparisons (or nonredundant pairwise combinations), where K is the number of categories. The Bonferroni procedure divides the alpha level for a single t test by the number of possible pairwise comparisons in order to produce a new alpha level that is corrected for the fact that multiple contrasts are being made. As a result, the t value for a certain alpha level (e.g., .05) increases, which makes it more difficult to claim that the difference observed is statistically significant.
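    The Bonferroni arithmetic for a classification variable with K categories can be sketched as:

```python
def bonferroni_alpha(alpha, k):
    """Per-comparison significance level after a Bonferroni adjustment:
    divide the overall alpha by the K*(K-1)/2 possible pairwise
    comparisons among K categories."""
    n_comparisons = k * (k - 1) // 2
    return alpha / n_comparisons

# Example: 5 racial/ethnic categories -> 10 pairwise comparisons,
# so each individual t test is judged at alpha = .005 instead of .05.
adjusted_alpha = bonferroni_alpha(0.05, 5)
```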

    Finally, a linear trend test was used when a statement described a linear trend, rather than the difference between two discrete categories. This test allows one to examine whether, for example, the percentage of students using drugs increased (or decreased) over time, or whether the percentage of students who reported being physically attacked in school increased (or decreased) with their age. Based on a regression with, for example, student's age as the independent variable and whether a student was physically attacked as the dependent variable, the test involves computing the regression coefficient (b) and its corresponding standard error (se). The ratio of these two (b/se) is the test statistic t. If t is greater than 1.96, the critical value for one comparison at the .05 alpha level, the null hypothesis of no linear relationship between student's age and being physically attacked is rejected, indicating a statistically significant linear trend.
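    The trend test can be sketched with an ordinary least-squares slope and its standard error; the age/percentage data below are invented purely for illustration:

```python
import math

def linear_trend_t(x, y):
    """Least-squares slope b of y on x, its standard error, and the
    ratio t = b / se used for the linear trend test."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx                                   # intercept
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(rss / (n - 2) / sxx)               # residual variance over Sxx
    return b, se, b / se

# Hypothetical: percentage of students reporting an attack, by age.
ages = [12, 13, 14, 15, 16, 17, 18]
pcts = [4.1, 4.6, 5.0, 5.3, 5.9, 6.2, 6.8]
b, se, t = linear_trend_t(ages, pcts)
trend_significant = abs(t) > 1.96
```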


FOOTNOTE:

[5] For detailed information about how the data were collected and analyzed, see S.P. Kachur et al., "School-Associated Violent Deaths in the United States, 1992 to 1994," Journal of the American Medical Association 275 (22) (1996): 1729-1733.


National Center for Education Statistics - http://nces.ed.gov
U.S. Department of Education