A two-stage sampling process was used to select teachers for the FRSS National Assessment of Vocational Education Teacher Survey. At the first stage, a stratified subsample of 395 public secondary schools with 11th and 12th grades was selected from the national sample drawn for the National Assessment of Vocational Education. The National Assessment of Vocational Education sampling frame contains over 16,800 secondary schools with 11th and 12th grades; schools without 11th or 12th grades were excluded from the frame prior to sampling. A total of 3,130 eligible schools were selected for the National Assessment of Vocational Education, of which 395 were included in the FRSS survey.
The sample was stratified by type of district (regular versus vocational) and type of school (comprehensive versus vocational). Within each of the major strata, schools were sorted by size and region (northeast, central, southeast, and west). The sample was allocated to the major strata in a manner expected to be reasonably efficient for national estimates, as well as for estimates for major subclasses. Schools within a stratum were sampled with probabilities proportionate to the estimated number of teachers in the school.
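To make the selection mechanism concrete, the sketch below implements probability-proportionate-to-size (PPS) systematic sampling within a single stratum, with the estimated teacher count as the size measure. It is a minimal illustration under assumed inputs; the function name and the toy frame are ours, not details drawn from the survey documentation.

```python
import random

def pps_systematic_sample(frame, n):
    """Select n units with probability proportionate to size via
    systematic sampling.  `frame` is a list of (unit_id, size) pairs,
    already sorted as described above (by size and region within the
    stratum).  A unit larger than the sampling interval could be hit
    more than once; that caveat is ignored in this sketch."""
    total = sum(size for _, size in frame)
    interval = total / n                  # distance between selections
    start = random.uniform(0, interval)   # random starting point
    targets = [start + k * interval for k in range(n)]

    picks, cum, t = [], 0.0, 0
    for unit_id, size in frame:
        cum += size                       # running cumulative size
        while t < len(targets) and targets[t] <= cum:
            picks.append(unit_id)         # target falls within this unit
            t += 1
    return picks

# Draw 2 of 4 hypothetical schools, sized by estimated teacher counts.
print(pps_systematic_sample([("A", 120), ("B", 45), ("C", 80), ("D", 55)], 2))
```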
At the second stage, the 395 schools in the sample were contacted and asked to provide a list of all vocational and academic teachers in specified areas in order to draw the teacher sample. Teacher list collection was conducted during spring 1992. Eligible academic teachers included all teachers teaching math, science, English, social studies, and languages at the 9th to 12th grade levels. All vocational teachers teaching occupational vocational education courses at the 9th to 12th grade levels were also included. Teachers employed full or part time at the school were included. Excluded from the lists were itinerant teachers (unless their home base was the sampled school), substitute teachers, special education teachers, and teachers teaching only nonoccupational vocational courses, physical education, or music. A list of 15,000 secondary teachers was compiled, and a final sample of 2,376 teachers was drawn. The selection of teachers was designed to permit separate estimates of teachers' responses by major subclasses, including type of teacher (vocational or academic) and type of school (vocational or comprehensive). On average, six to seven teachers were sampled from each school. The survey data were weighted to reflect these sampling rates (probability of selection) and were adjusted for nonresponse.
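The relationship between these sampling rates and the weights can be stated compactly: a teacher's base weight is the inverse of the product of the school's selection probability and the teacher's conditional selection probability within the school. The function below is a hypothetical illustration of that identity, not code from the survey.

```python
def base_weight(p_school, p_teacher_given_school):
    """Base weight for a unit selected in two stages: the inverse of
    its overall probability of selection."""
    return 1.0 / (p_school * p_teacher_given_school)

# A school drawn with probability 0.05 whose listed teachers were
# sampled at rate 0.25 gives a base weight of 80; each responding
# teacher stands in for 80 teachers (probabilities invented).
w = base_weight(0.05, 0.25)   # 80.0
```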
Estimates of the number of vocational education teachers may differ from those obtained in the National Center for Education Statistics' Schools and Staffing Survey (SASS) because of differences between the two surveys in sampling and in definitions of eligibility.
Of the 395 public secondary schools drawn in the first stage of sampling, 31 were found to be out of the scope of the study (because they had closed or did not offer 11th and 12th grades). Of the remaining 364 schools, 355 provided complete lists of eligible vocational and academic teachers, for a school-level response rate of 98 percent (355 responding schools divided by 364 eligible schools in the sample).
In October 1992, questionnaires (see appendix B) were mailed to 1,464 vocational and 912 academic secondary teachers at their schools. Teachers were asked to complete the questionnaire in reference to the first class taught in the teacher's primary teaching assignment on October 1, 1992. Three hundred and five teachers were found to be out of scope (no longer at the school or otherwise not eligible), leaving 2,071 eligible teachers in the sample. Telephone followup of nonrespondents was initiated in November, and data collection was completed in January 1993. The teacher-level response rate was 93 percent (1,924 teachers completed the questionnaire, divided by 2,071 eligible teachers in the sample). The overall study response rate was 91 percent (the 98 percent school-level response rate multiplied by the 93 percent teacher-level response rate). Item nonresponse ranged from 0.0 to 1.9 percent.
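The response-rate arithmetic reported above can be reproduced directly from the counts in the text:

```python
# Counts taken from the text above.
schools_eligible, schools_responding = 364, 355
teachers_eligible, teachers_responding = 2071, 1924

school_rate = schools_responding / schools_eligible      # 0.975 -> 98 percent
teacher_rate = teachers_responding / teachers_eligible   # 0.929 -> 93 percent
overall_rate = school_rate * teacher_rate                # 0.906 -> 91 percent
```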
The response data were weighted to produce national estimates. The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. A final poststratification adjustment was made so that the weighted teacher counts equaled the corresponding Common Core of Data (CCD) frame counts within cells defined by school size, metropolitan status, and region. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability.
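A minimal sketch of the poststratification step follows, assuming each record carries a cell label and a weight and that CCD frame counts are available by cell; the data layout and names here are assumptions for illustration.

```python
from collections import defaultdict

def poststratify(records, control_totals):
    """Ratio-adjust weights so that weighted counts match external
    control totals (here, CCD frame counts) within adjustment cells.

    records: list of dicts with keys 'cell' and 'weight'.
    control_totals: dict mapping each cell to its frame count."""
    weighted = defaultdict(float)
    for r in records:
        weighted[r['cell']] += r['weight']          # current weighted count
    factors = {c: control_totals[c] / weighted[c]   # ratio adjustment factor
               for c in weighted}
    for r in records:
        r['weight'] *= factors[r['cell']]
    return records
```

The nonresponse adjustment described above has the same form, with the weighted count of all eligible sampled teachers in a cell playing the role of the control total.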
The survey estimates are also subject to nonsampling errors, which can arise from nonobservation (nonresponse or noncoverage), errors of reporting, and errors made in the collection of the data, all of which can bias the results. Nonsampling errors may include such problems as differences in the respondents' interpretation of the meaning of the questions; memory effects; misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require either an experiment conducted as part of the data collection procedures or data external to the study.
To minimize the potential for nonsampling errors, the questionnaire was pretested with teachers like those who completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics and the Office of Research in OERI. Manual and machine editing of the questionnaires was conducted to check the data for accuracy and consistency. Respondents with missing or inconsistent items were recontacted by telephone. Imputations for item nonresponse were not implemented, as item nonresponse rates were less than 3 percent (for nearly all items, nonresponse rates were less than 1 percent). Data were keyed with 100 percent verification.
The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals from 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of vocational teachers who taught full time is 97 percent, and the estimated standard error is 0.8 percent. The 95 percent confidence interval for the statistic extends from 97 - (0.8 x 1.96) to 97 + (0.8 x 1.96), or from 95.4 to 98.6.
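The interval in the example can be computed directly; the 1.96 multiplier is the standard normal value for 95 percent coverage:

```python
def confidence_interval_95(estimate, standard_error):
    """95 percent confidence interval: estimate +/- 1.96 standard errors."""
    half_width = 1.96 * standard_error
    return estimate - half_width, estimate + half_width

lo, hi = confidence_interval_95(97.0, 0.8)   # (95.432, 98.568), i.e., 95.4 to 98.6
```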
Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full-sample estimate provides an estimate of the variance of the statistic (see Wolter, 1985, chapter 4). To construct the replications, 30 stratified subsamples of the full sample were created and then dropped one at a time to define 30 jackknife replicates (see Wolter, 1985, p. 183). A proprietary computer program (WESVAR), available at Westat, Inc., was used to calculate the estimates of standard errors. The software runs under the IBM/OS and VAX/VMS systems.
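The variance computation described above can be sketched in the standard drop-one-group (JK1) form (Wolter, 1985, chapter 4). The replicate estimates would come from re-computing the statistic under each replicate's weights, which WESVAR handles internally and is not reproduced here; the sketch shows only the combining formula.

```python
import math

def jackknife_variance(full_estimate, replicate_estimates):
    """Drop-one-group jackknife (JK1) variance: the scaled mean square
    error of the replicate estimates around the full-sample estimate."""
    g = len(replicate_estimates)   # number of replicates (30 in this survey)
    return (g - 1) / g * sum((r - full_estimate) ** 2
                             for r in replicate_estimates)

def jackknife_se(full_estimate, replicate_estimates):
    """Standard error: the square root of the jackknife variance."""
    return math.sqrt(jackknife_variance(full_estimate, replicate_estimates))
```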
The survey was performed under contract with Westat, Inc., using the Fast Response Survey System (FRSS). FRSS was established in 1975 by NCES to collect small amounts of issue-oriented data quickly and with minimum burden on respondents. Over 40 surveys have been conducted through FRSS, and recent FRSS reports are available through the Government Printing Office.
Westat's project director was Elizabeth Farris, and the survey manager was Sheila Heaviside. Judi Carpenter was the NCES project officer. The data requesters were David Boesel and Lisa Hudson, Office of Research, OERI.
This report was reviewed by the following individuals.
For more information about the Fast Response Survey System or the National Assessment of Vocational Education Teacher Survey, contact Judi Carpenter, Elementary/Secondary Education Statistics Division, Special Surveys and Analysis Branch, National Center for Education Statistics, Office of Educational Research and Improvement, 555 New Jersey Avenue NW, Washington DC 20208-5651, telephone (202)219-1333.
The WESVAR Procedures. 1989. Rockville, MD: Westat, Inc.
Wolter, K. 1985. Introduction to Variance Estimation. New York: Springer-Verlag.