The sampling frame for the FRSS Survey on Advanced Telecommunications in U.S. Private Schools, K-12 was constructed from the 1993-94 NCES Private School Survey (PSS) Universe File. The complete file contains approximately 26,000 schools, including over 8,000 Catholic schools, 12,000 schools with religious affiliations other than Catholic, and about 5,500 nonsectarian schools. By level, the file contains about 15,600 elementary, 2,500 secondary, and 8,000 combined schools.
A private school was defined as a school not in the public system that provides instruction for any of grades 1-12 (or comparable ungraded levels) where the instruction was not provided in a private home. All regular private elementary, middle, secondary, and combined schools in the 50 states and the District of Columbia were included in the sampling frame. Special education, vocational, and alternative schools, and schools that taught only pre-kindergarten, kindergarten, or adult education were excluded from the frame prior to sampling. With these exclusions, the final sampling frame consisted of approximately 22,000 eligible private schools enrolling over 4.6 million students (see appendix A).
The sample was stratified by instructional level (elementary, secondary, and combined) and then by type of orientation (Catholic, other religious, and nonsectarian) within level to define six primary strata. Within each primary stratum, schools were sorted by size of enrollment (less than 150, 150-299, and 300 or more), geographic region (northeast, southeast, central, and west), metropolitan status (city, urban fringe, town, and rural), and percent minority enrollment (less than 6 percent, 6-20 percent, 21-49 percent, and 50 percent or more). The sample sizes were then allocated to the primary strata in rough proportion to the aggregate square root of the enrollment of schools in the stratum. The use of the square root of enrollment to determine the sample allocation was expected to be reasonably efficient for estimating both school-level characteristics (e.g., percent of schools on the Internet) and quantitative measures correlated with enrollment (e.g., the number of students enrolled in schools on the Internet). Further, the sample sizes were large enough to permit analyses of the questionnaire items (along one dimension) by three types of affiliation, three instructional levels, three enrollment sizes, and by the four urbanicity classes, four regions, and four levels of minority enrollment (see appendix A).
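As a rough illustration, square-root allocation of this kind can be sketched in a few lines of Python. The stratum labels and enrollment totals below are hypothetical placeholders, not the actual frame counts:

```python
import math

# Hypothetical aggregate enrollments for six primary strata
# (level x orientation); figures are illustrative only.
stratum_enrollment = {
    ("elementary", "Catholic"): 900_000,
    ("elementary", "other religious"): 700_000,
    ("elementary", "nonsectarian"): 300_000,
    ("secondary", "Catholic"): 500_000,
    ("secondary", "other religious"): 150_000,
    ("combined", "nonsectarian"): 250_000,
}

def sqrt_allocation(enrollments, total_sample):
    """Allocate total_sample across strata in proportion to the
    square root of each stratum's aggregate enrollment."""
    roots = {s: math.sqrt(e) for s, e in enrollments.items()}
    total_root = sum(roots.values())
    return {s: round(total_sample * r / total_root)
            for s, r in roots.items()}

allocation = sqrt_allocation(stratum_enrollment, total_sample=999)
```

Relative to allocation proportional to enrollment itself, the square root dampens the dominance of large strata, which is why it balances precision between school-level percentages and enrollment-weighted quantities.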
In October 1995, survey questionnaires (see appendix C) were mailed to 999 private school heads. The heads of schools were asked to forward the questionnaire to the computer technology coordinator or to whoever was most knowledgeable about the availability and use of advanced telecommunications at the school. The accompanying instructions requested that the school complete the self-administered questionnaire and return it by mail. Only 10 percent of the questionnaires sent to private schools were completed by someone with the title of computer or technology coordinator, and 1 percent were returned by library/media specialists. Most were completed by the school head (45 percent) or by other staff, typically a teacher considered knowledgeable about the school's telecommunications capabilities (44 percent). Telephone followup was conducted with schools that did not complete the survey by mail. Fifty-two percent of the questionnaires were received by mail or fax, and 48 percent were obtained by telephone.
Of the 999 schools in the sample, 12 were found to be out of the scope of the study (because they had closed), leaving 987 eligible schools in the sample. Data collection was completed in January 1996. The survey response rate was 88 percent (873 schools that completed questionnaires divided by 987 eligible schools in the sample); the weighted response rate was 87 percent. Item nonresponse ranged from 0.0 to 2.5 percent.
The responses were weighted to produce national estimates for regular private schools. Each sample weight was the inverse of the school's probability of selection, adjusted for nonresponse, so that the weights account for both the variable probabilities of selection and differential nonresponse. The findings of this report are estimates based on the sample selected and, consequently, are subject to sampling variability.
The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in collection of the data. These errors, when present, may result in biased data. Nonsampling errors may include such problems as the differences in the respondents' interpretation of the meaning of the question; memory effects; misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.
To minimize the potential for nonsampling errors, the questionnaire was pretested with private school heads and computer/technology coordinators like those in the survey population. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and terms and to eliminate ambiguous items or instructions. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics and the Office of Nonpublic Education. Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Imputations for item nonresponse were not implemented, as item nonresponse rates were very low (less than 2.5 percent). Data were keyed with 100 percent verification.
The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals from 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true value 95 percent of the time. For example, the estimated percentage of private schools reporting that they have access to the Internet is 25 percent, and the estimated standard error is 1.4 percentage points. The 95 percent confidence interval for the statistic extends from [25 - (1.4 times 1.96)] to [25 + (1.4 times 1.96)], or from 22.3 to 27.7 percent.
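The worked example above can be reproduced directly:

```python
def confidence_interval_95(estimate, standard_error):
    """95 percent confidence interval: estimate +/- 1.96 standard errors."""
    margin = 1.96 * standard_error
    return (estimate - margin, estimate + margin)

# The Internet-access example from the text: 25 percent with a
# standard error of 1.4 percentage points.
low, high = confidence_interval_95(25.0, 1.4)
# low is about 22.3, high is about 27.7
```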
Estimates of standard errors were computed using a technique known as jackknife replication, which accounts for the complexities of the sample design. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate.
The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variance of the statistic (see Wolter 1985, Chapter 4). To construct the replicates, 40 stratified subsamples of the full sample were created and then dropped one at a time to define 40 jackknife replicates. A computer program (WESVAR), available from Westat, Inc., was used to calculate the estimates of standard errors. The software runs under IBM/OS and VAX/VMS systems.
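A minimal sketch of drop-one-group jackknife variance estimation for a sample mean may make the procedure concrete. This illustrates the general technique only, not WESVAR's stratified implementation; the systematic group assignment below is an assumption for the example:

```python
def jackknife_variance(values, n_groups=40):
    """Drop-one-group jackknife variance of the sample mean.

    Units are assigned systematically to n_groups groups; each replicate
    recomputes the mean with one group deleted, and the variance estimate
    is the scaled sum of squared deviations of the replicate estimates
    around the full-sample estimate.
    """
    full_estimate = sum(values) / len(values)
    groups = [values[g::n_groups] for g in range(n_groups)]
    replicates = []
    for g in range(n_groups):
        kept = [v for h, grp in enumerate(groups) if h != g for v in grp]
        replicates.append(sum(kept) / len(kept))
    factor = (n_groups - 1) / n_groups
    return factor * sum((r - full_estimate) ** 2 for r in replicates)
```

Because each replicate reuses the full estimation machinery (including any weighting), the replicate spread reflects the complexities of the design that a simple-random-sampling variance formula would miss.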
The survey was conducted and analyses performed by Westat, Inc., using the NCES Fast Response Survey System (FRSS). Westat's Project Director was Elizabeth Farris, and the Associate Project Director and Survey Manager was Sheila Heaviside. Judi Carpenter, Shelley Burns, and Edith McArthur were the NCES Project Officers. The survey was requested by Jack Klenk and Michelle Doyle of the Office of Non-Public Education, U.S. Department of Education. The Survey Questionnaire was adapted from an instrument developed by Westat in conjunction with Gerald Malitz at NCES and used to obtain data from public schools in 1994. The following individuals reviewed the private school instrument and survey methods:
This report was reviewed by the following individuals:
For more information about the Fast Response Survey System or the Survey of Advanced Telecommunications in U.S. Private Schools, K-12, contact Shelley Burns, National Center for Education Statistics, Office of Data Development and Longitudinal Studies, 555 New Jersey Avenue, NW, Washington, DC 20208-5651, telephone (202) 219-1463.