A two-stage sampling process was used to select public school districts for the FRSS District Survey on Safe, Disciplined, and Drug-Free Schools. First, a stratified sample of 890 public schools was drawn from the 1988-89 list of public schools compiled by the National Center for Education Statistics (NCES). This file contains about 85,000 listings and is part of the NCES Common Core of Data (CCD) School Universe. Regular, vocational education, and alternative schools in the 50 states and the District of Columbia were included in the survey universe, while special education schools were excluded from the frame prior to sampling. Schools not operated by local education agencies and those including only prekindergarten or kindergarten were also excluded. With these exclusions, the final sampling frame consisted of approximately 81,100 eligible schools. The schools were stratified by type of locale (city, urban fringe, town, rural) and level of instruction (elementary, secondary, and combined schools). Within each of the 12 strata, schools were sorted first by state, then district (within each state), and then enrollment size (within each district). Next, schools were selected with probabilities proportionate to the square root of the number of full-time-equivalent (FTE) teachers in the school.
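The selection step described above — probability proportional to the square root of FTE teacher counts, applied to a sorted frame — can be sketched as systematic PPS sampling. This is an illustrative reconstruction only: the field name `fte_teachers` and the systematic-selection mechanics are assumptions, not a description of the actual FRSS sampling program.

```python
import math
import random

def pps_sqrt_sample(schools, n):
    """Select n schools with probability proportional to the square root
    of FTE teachers, via systematic PPS on a sorted frame (sketch).

    `schools` is a list of dicts with an assumed `fte_teachers` field."""
    # Measure of size: square root of the FTE teacher count
    sizes = [math.sqrt(s["fte_teachers"]) for s in schools]
    total = sum(sizes)
    interval = total / n
    # One random start, then equally spaced selection points
    start = random.uniform(0, interval)
    targets = [start + k * interval for k in range(n)]
    selected, cum, i = [], 0.0, 0
    for school, size in zip(schools, sizes):
        cum += size
        # Select this school for every target falling in its size interval
        while i < n and targets[i] < cum:
            selected.append(school)
            i += 1
    return selected
```

Sorting the frame by state, district, and enrollment before systematic selection (as the survey did) gives an implicit stratification along those dimensions.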
The sampling of schools, in turn, identified the 790 districts to be included in the district survey. Districts whose schools appeared in two or more school strata had multiple chances of selection. The overall probability of selecting a district was approximately proportional to the size of the district.
In mid-April 1991, questionnaires (see Appendix B) were mailed to districts in the sample. Telephone follow-up of nonrespondents was initiated in late May; data collection was completed by the beginning of July. A response rate of 94 percent (739 districts) was obtained (see Table B). Item nonresponse ranged from 0.0 percent to 2.5 percent.
The response data were weighted to produce national estimates. The weights were designed to adjust for the variable probabilities of selection and differential enrollment. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability.
The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in collection of the data. These errors can sometimes bias the data. Nonsampling errors may include such problems as differences in the respondents' interpretation of the meaning of the questions; memory effects; misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.
To minimize the potential for nonsampling errors, the questionnaire was pretested with superintendents from districts like those that completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics, as well as the Office of Educational Research and Improvement, the Office of the Undersecretary, and the Drug Planning and Outreach Staff, Office of Elementary/Secondary Education, in the Department of Education. Manual and machine editing of the questionnaires were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Imputations for item nonresponse were not implemented, as item nonresponse rates were less than 5 percent (for most items, nonresponse rates were less than 1 percent). Data were keyed with 100 percent verification.
The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors can be used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of public school districts that conducted a student alcohol, drug, or tobacco use survey in the last two years is 61 percent, and the estimated standard error is 2.9 percent. The 95 percent confidence interval for the statistic extends from 61 - (2.9 x 1.96) to 61 + (2.9 x 1.96), or from 55 to 67 percent.
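The confidence-interval arithmetic in the example above is a direct calculation, shown here as a short sketch (the function name is illustrative, not from the survey documentation):

```python
def confidence_interval_95(estimate, standard_error):
    """95 percent confidence interval: estimate +/- 1.96 standard errors."""
    margin = 1.96 * standard_error
    return estimate - margin, estimate + margin

# The report's example: estimate 61 percent, standard error 2.9 percent
low, high = confidence_interval_95(61.0, 2.9)
# yields roughly (55.3, 66.7), i.e., 55 to 67 percent after rounding
```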
Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variance of the statistic (e.g., Wolter, 1985, chapter 4). To construct the replications, 30 stratified subsamples of the full sample were created and then dropped one at a time to define 30 jackknife replicates (e.g., Wolter, 1985, p. 183). A proprietary computer program (WESVAR), available at Westat, Inc., was used to calculate the estimates of standard errors. The software runs under IBM/OS and VAX/VMS systems.
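The drop-one-group jackknife described above can be sketched for a weighted mean. This is a simplified illustration of the general technique, not the WESVAR implementation: the round-robin group assignment and the specific reweighting factor G/(G-1) are assumptions in the spirit of Wolter (1985, chapter 4).

```python
def jackknife_variance(values, weights, n_groups=30):
    """Drop-one-group jackknife variance of a weighted mean (sketch).

    Units are assigned to n_groups groups; each replicate drops one group
    and scales the remaining weights by G/(G-1). The variance is the
    scaled mean square error of the replicates around the full estimate."""
    full = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    # Illustrative round-robin group assignment (real surveys assign
    # groups within strata to respect the sample design)
    groups = [i % n_groups for i in range(len(values))]
    replicates = []
    for g in range(n_groups):
        # Replicate weights: zero out group g, inflate the rest
        rw = [0.0 if grp == g else w * n_groups / (n_groups - 1)
              for w, grp in zip(weights, groups)]
        replicates.append(sum(w * v for w, v in zip(rw, values)) / sum(rw))
    var = (n_groups - 1) / n_groups * sum((r - full) ** 2 for r in replicates)
    return full, var
```

Each replicate re-estimates the statistic under a perturbed weighting, so the spread of the replicates reflects the sampling variability built into the design.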
The survey was performed under contract with Westat, Inc., using the Fast Response Survey System (FRSS). Westat's Project Director was Elizabeth Farris, and the Survey Managers were Wendy Mansfield and Sheila Heaviside. Judi Carpenter was the NCES Project Officer. The data requestor was Mary Frase, Data Development Division, NCES; outside consultants were Oliver Moles, Office of Research, Office of Educational Research and Improvement, and Kimmon Richards, Planning and Evaluation Service, Office of Policy and Planning.
The report was reviewed by Rita Altman, Associate Superintendent, School District of Philadelphia; Floraline Stevens, AERA Fellow, Director of Research and Evaluation, Los Angeles Unified School District; and Alfred Tuchfarber, Institute for Policy Research, University of Cincinnati. Within NCES, report reviewers were John Grymes, Data Development Division, and John Matthews, Education Assessment Division.
For more information about the Fast Response Survey System or the Surveys on Safe, Disciplined, and Drug-Free Schools, contact Judi Carpenter, Office of Educational Research and Improvement, National Center for Education Statistics, 555 New Jersey Avenue NW, Washington, DC 20208-5651, telephone (202) 219-1333.
Westat, Inc. 1989. The WESVAR Procedures. Rockville, MD: Westat, Inc.
Wolter, K. 1985. Introduction to Variance Estimation. New York: Springer-Verlag.