The SASS questionnaires were revised in preparation for the 2003–04 round of SASS. The questionnaires continue to measure the same five major policy issues: teacher shortage and demand; characteristics of elementary and secondary teachers; teacher workplace conditions; characteristics of school principals; and school programs and policies. New items about teachers' career paths, parental involvement, school safety, and institutional support for information literacy have been added.
The sampling frame for the public school sample was the 2001–02 Common Core of Data (CCD) school file. CCD is a universe file that includes all elementary and secondary schools in the United States. Schools operated by the Department of Defense, as well as schools offering only prekindergarten, kindergarten, or adult education, were excluded from the SASS sample.
The list frame used the 2001–02 Private School Universe Survey (PSS) list, updated with association lists. An area frame supplement was based on the canvassing of private schools within specific geographical areas.
A separate universe of schools funded by the Bureau of Indian Affairs (BIA) was drawn from the Program Education Directory maintained by the BIA. To avoid duplication between the two frames, BIA schools that also appeared in the CCD school file were treated as public schools.
The sample design for the School Survey met the objectives for SASS and took into consideration the response burden for schools. The main design objective of the School Survey was to provide estimates of school characteristics by the following key analytical domains: the nation; elementary and secondary levels by public and private sectors; BIA schools and schools with a student population which is at least 20 percent American Indian or Alaska Native; school levels of public schools by state; and private schools by association group, region and school level.
Another objective was to balance the requirements of the samples in SASS. The 2003–04 SASS sampled schools first and LEAs afterward. To obtain a suitable teacher sample, schools were selected with a probability proportionate to the square root of the number of teachers. Teachers within schools were sampled at a rate that makes the overall selection probability approximately constant within strata, subject to the constraints of sampling at least one and no more than 20 teachers per school. The SASS sample design also sought to control sample overlap between SASS and other NCES school surveys.
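The two-stage selection described above can be sketched as follows. This is a simplified illustration only: the function and field names are hypothetical, and systematic probability-proportional-to-size (PPS) sampling is used as a stand-in for the actual SASS selection procedure.

```python
import math
import random

def select_schools(schools, n_sample, seed=0):
    """Systematic PPS sampling with size measure sqrt(teacher count).

    `schools` is a list of dicts with a 'teachers' count; the names are
    illustrative, not taken from the SASS specifications.
    """
    rng = random.Random(seed)
    sizes = [math.sqrt(s["teachers"]) for s in schools]
    total = sum(sizes)
    interval = total / n_sample
    start = rng.uniform(0, interval)
    ticks = [start + i * interval for i in range(n_sample)]
    chosen, cum, t = [], 0.0, 0
    for school, size in zip(schools, sizes):
        cum += size
        # A school is selected each time a systematic "tick" falls
        # inside its cumulative size interval.
        while t < n_sample and ticks[t] < cum:
            chosen.append(school)
            t += 1
    return chosen

def teachers_to_sample(school, overall_rate, school_prob):
    """Within-school rate chosen so the overall selection probability is
    approximately `overall_rate`, subject to the 1-to-20 teacher bounds."""
    within_rate = overall_rate / school_prob
    n = round(within_rate * school["teachers"])
    return max(1, min(20, n))
```

Because the within-school rate is the target overall rate divided by the school's selection probability, large schools (selected with high probability) sample teachers at a low rate and small schools at a high rate, keeping teachers' overall selection probabilities roughly constant within a stratum.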
The U.S. Census Bureau performed the data collection, beginning with an address verification operation in June 2003 to verify school names and addresses. Advance letters were sent to the sampled LEAs in September, and advance postcards requesting appointments with Census Bureau field representatives were sent to sampled schools in September and October. Verification of school data was conducted by telephone and through computer-assisted personal interviewing (CAPI). Collection of teacher lists by field representatives began in October; at the same time, all remaining questionnaires except the teacher questionnaires were distributed among school staff. Follow-up to collect the questionnaires and to distribute teacher questionnaires was handled by field representatives on a flow basis. Follow-up for nonresponding schools and teachers was conducted through in-person pickups or by verifying that the respondent had returned the questionnaires by mail.
The U.S. Census Bureau performed the data processing. Each questionnaire was coded according to its status: for example, whether it contained a completed interview, the respondent refused to complete it, a school district had merged with another district, or a school had closed. The next step was to make a preliminary determination of each case's interview status: whether it was an interview, a non-interview, or out of scope (for example, if a sampled school had closed). Information from the CAPI instrument was also used to determine the preliminary status of questionnaires, particularly whether the school or other respondent was in scope (eligible for the survey). A computer pre-edit program generated a list of cases with problems as defined by the edit specifications for each survey. This operation consisted of a range check, a consistency edit, and a blanking edit.
After the completion of the range check, consistency edit, and blanking edit, the records were put through another edit. That edit made the final determination of whether the case was eligible for the survey, and if so, whether there were sufficient data for the case to be classified as an interview. A final interview status recode value was assigned to each case as the result of the edit.
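The range check, consistency edit, and blanking edit described above can be sketched as a single pre-edit pass. The rules and field names below are simplified illustrations, not the actual SASS edit specifications.

```python
def pre_edit(record, spec):
    """Flag cases with problems, per a simplified edit specification.

    `spec` maps numeric items to (low, high) bounds; the consistency and
    blanking rules are hypothetical examples of the kinds of rules used.
    """
    problems = []
    # Range check: each numeric item must fall within its allowed bounds.
    for item, (low, high) in spec["ranges"].items():
        value = record.get(item)
        if value is not None and not (low <= value <= high):
            problems.append(f"range: {item}={value}")
    # Consistency edit: related items must agree (e.g., part-time
    # teachers cannot exceed total teachers).
    if (record.get("part_time_teachers") or 0) > (record.get("total_teachers") or 0):
        problems.append("consistency: part_time_teachers > total_teachers")
    # Blanking edit: items that the questionnaire's skip pattern says
    # should be blank must actually be blank.
    if record.get("school_closed") and record.get("enrollment") is not None:
        problems.append("blanking: enrollment reported for closed school")
    return problems
```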
SASS used five methods to impute values for questionnaire items that respondents did not answer. These were: (1) using data from other items on the questionnaire, (2) extracting data from a related component of SASS, (3) extracting data from the sampling frame (CCD or PSS), (4) extracting data from the record of a sample case with similar characteristics (commonly known as the “hot deck” method of imputing item responses), or (5) if no suitable donor could be found, clerically determining a reasonable response.
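The hot-deck method (4) can be sketched as follows. This is a minimal sequential hot deck for illustration; the matching characteristics and field names are hypothetical, and the actual SASS imputation cells and donor-selection rules are more elaborate.

```python
def hot_deck(records, item, strata_keys):
    """Fill missing values of `item` from the most recent donor record
    with the same matching characteristics (a sequential hot deck)."""
    last_donor = {}  # stratum -> most recently seen reported value
    for rec in records:
        stratum = tuple(rec[k] for k in strata_keys)
        if rec.get(item) is not None:
            last_donor[stratum] = rec[item]  # this record can donate
        elif stratum in last_donor:
            rec[item] = last_donor[stratum]  # impute from the donor
            rec[item + "_imputed"] = True
    return records
```

Records with no donor in their stratum remain missing, corresponding to the fallback in method (5), where a reasonable response is determined clerically.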
Weighting of the sample units was carried out to produce national, regional, and state estimates for public schools, districts, principals, teachers, and school libraries. Private schools, principals, and teachers were weighted to produce national, regional, and affiliation estimates. The weighting procedures used in the Schools and Staffing Survey have three purposes: to take into account the school’s selection probability; to reduce biases that may result from unit nonresponse; and to make use of available information from external sources to improve the precision of sample estimates.
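The three purposes above correspond to three multiplicative factors in a final weight. The sketch below is a simplified, global version for illustration; in practice such adjustments are computed within adjustment cells, and the function name and arguments are hypothetical.

```python
def final_weight(selection_prob, responding_weight_sum, eligible_weight_sum,
                 frame_total, weighted_estimate):
    """Compose the three weighting steps (simplified illustration).

    base weight      : inverse of the school's selection probability
    nonresponse adj. : eligible weight total / responding weight total,
                       spreading nonrespondents' weight over respondents
    ratio adjustment : external frame total / weighted sample estimate,
                       calibrating to known totals to improve precision
    """
    base = 1.0 / selection_prob
    nr_adjust = eligible_weight_sum / responding_weight_sum
    ratio_adjust = frame_total / weighted_estimate
    return base * nr_adjust * ratio_adjust
```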
Weighted response rates are defined as the number of in-scope responding questionnaires divided by the number of in-scope sample cases, using the basic weight (the inverse of the probability of selection) of each record. There are two sampling stages for teachers: first, the school-level collection of the Teacher Listing Form, and then the teacher level. Multiplying the weighted response rates of the two stages together yields the overall weighted response rate. For all other components, only one sampling stage was involved; for these components, the weighted overall response rate and the weighted response rate are therefore the same.
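The definition above can be expressed directly in code. The function names are illustrative, not taken from the SASS documentation.

```python
def weighted_response_rate(cases):
    """cases: list of (base_weight, responded) pairs for in-scope
    sample cases; the rate is the responding share of total base weight."""
    total = sum(w for w, _ in cases)
    responded = sum(w for w, r in cases if r)
    return responded / total

def overall_teacher_rate(listing_rate, teacher_rate):
    """Teachers have two stages (Teacher Listing Form, then teacher
    level); the overall rate is the product of the two stage rates."""
    return listing_rate * teacher_rate
```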
| Component | Sample size | Weighted response rate |
|---|---|---|
| Public: Teacher Listing Form | 10,202 | |
| Public: Library Media Center | 10,202 | |
| Private: Teacher Listing Form | 3,622 | |
| BIA: Teacher Listing Form | 166 | |
| BIA: Library Media Center | 166 | |
SASS conducted a reinterview of about 8 percent of the schools and a subsample of the principals in the sample. Reinterview questionnaires were sent three to four weeks after the original interview and were returned by mail. Reinterview results for items with a dichotomous response were analyzed using an index of inconsistency, which estimates the proportion of total variance that is accounted for by response variance. A separate index of inconsistency was used for items with more than two response categories.
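For a dichotomous item, one common form of the index of inconsistency is the gross difference rate divided by p1·q2 + p2·q1, where p1 and p2 are the positive-response proportions in the original interview and the reinterview and q = 1 − p. The sketch below uses that form; the exact formula applied in the SASS reinterview analysis may differ in detail.

```python
def index_of_inconsistency(original, reinterview):
    """Index of inconsistency for a dichotomous (0/1) item.

    `original` and `reinterview` are parallel lists of 0/1 responses
    from the two interviews for the same respondents.
    """
    n = len(original)
    # Gross difference rate: share of cases whose answer changed.
    gdr = sum(a != b for a, b in zip(original, reinterview)) / n
    p1 = sum(original) / n
    p2 = sum(reinterview) / n
    denom = p1 * (1 - p2) + p2 * (1 - p1)
    return gdr / denom if denom else float("nan")
```

Values near 0 indicate that almost none of the item's variance is response variance; values near 1 indicate that response variance dominates.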