The following are brief descriptions of the methods and procedures used for the 2015–16 NTPS. Please refer to the reports that accompany each data set for more complete information about the following categories: Questionnaire Design, Sampling Frames, Sample Design, Data Collection, Data Editing, Imputation, Weighting, Response Rates, and Manuals and Technical Reports.
The 2015–16 NTPS consisted of three questionnaires: a principal questionnaire, a school questionnaire, and a teacher questionnaire. The 2015–16 NTPS included public schools only (both traditional and charter), while earlier SASS administrations collected data from both public and private schools. Questionnaires were designed to include both core modules (i.e., sections that will be asked every 2 years in every NTPS administration) and rotating modules (i.e., sections that will be asked every 4 years in alternating NTPS administrations). The questionnaires can be found here: https://nces.ed.gov/surveys/ntps/question1516.asp
The sampling frame for public schools was an adjusted version of the 2013–14 Common Core of Data (CCD), which reflects the population of public schools in the 2013–14 school year. The CCD includes traditional public schools, public charter schools, DoD-operated domestic military base schools, Bureau of Indian Education-funded schools, and special-purpose schools, such as special education, vocational, and alternative schools. Schools outside the United States, schools that teach only prekindergarten, kindergarten, or postsecondary students, and administrative units that do not offer teacher-provided classroom instruction were deleted from the CCD frame prior to sampling for NTPS. Public schools that closed in school year 2013–14 or had not yet opened were not included. Prior to stratification and sampling, CCD schools were collapsed to match the NTPS definition of a school.
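The frame rules above amount to a series of record-level exclusions applied to the CCD list. A minimal sketch of that filtering logic, assuming hypothetical field names ("country", "grades_offered", "offers_instruction") rather than actual CCD variable names:

```python
# Illustrative sketch only (not the actual Census Bureau processing):
# filter a CCD-style school list down to an NTPS-style frame.

def in_ntps_frame(school):
    """Return True if a school record stays on the sampling frame."""
    if school["country"] != "US":
        return False                      # drop schools outside the U.S.
    k12 = {"K"} | {str(g) for g in range(1, 13)}
    if not (set(school["grades_offered"]) & k12):
        return False                      # drop PK-only/postsecondary-only
    if not school["offers_instruction"]:
        return False                      # drop administrative units
    return True

schools = [
    {"country": "US", "grades_offered": ["K", "1", "2"], "offers_instruction": True},
    {"country": "US", "grades_offered": ["PK"], "offers_instruction": True},
]
frame = [s for s in schools if in_ntps_frame(s)]
print(len(frame))  # the PK-only school is excluded
```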
The sampling frame for the teacher questionnaires consisted of lists of teachers who worked at the schools selected for the NTPS sample. Teachers were defined as any staff who taught a regularly scheduled class to students in grades K–12 or comparable ungraded levels. Teacher Listing Forms (TLFs) were collected from sampled schools by mail or online, via clerical look-up, or through vendor purchase. Schools were asked to provide each teacher's full- or part-time teaching status and the subject matter taught, and the sample of teachers was selected from all sampled schools for which a Teacher Listing Form was completed.
All principals from sampled schools were also surveyed for NTPS.
There were two key differences in survey structure between SASS and the NTPS. First, unlike SASS, the 2015–16 NTPS is not explicitly designed to produce state-level estimates. Second, private schools were not included in NTPS data collection for the 2015–16 cycle.
In addition, for the 2011–12 SASS, regular public schools were separated into four strata (primary, middle, high, and combined schools) while charter schools were separated into three strata (elementary, secondary, and combined). For the 2015–16 NTPS, all regular public schools and charter schools used the same four strata for grade level.
In the 2011–12 SASS, teachers were placed into strata for sampling based on years of experience, with four experience levels: first year; 2–3 years; 4–19 years; and 20 or more years. These strata were used in the sort order immediately after the control number, with the subject taught appearing later in the sort order. For the 2015–16 NTPS, experience as a teacher did not factor into the sort order. Instead, teachers were placed into strata based on a combination of subject taught (Math, Science, English/Language Arts, Social Studies, Other) and teacher order within the teacher listing for the school. This process diversified the sort order with respect to these variables.
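The 2015–16 sort described above can be sketched as a two-part sort key: subject stratum first, then the teacher's position on the listing form. The record fields and names below are invented for illustration and are not the actual NTPS sort variables:

```python
# Hypothetical sketch of an NTPS-style teacher sort order:
# group by subject stratum, then keep original listing order.
SUBJECT_ORDER = {"Math": 0, "Science": 1, "English/Language Arts": 2,
                 "Social Studies": 3, "Other": 4}

teachers = [
    {"name": "A", "subject": "Other", "list_position": 1},
    {"name": "B", "subject": "Math", "list_position": 2},
    {"name": "C", "subject": "Math", "list_position": 3},
]

# Sort key: subject stratum first, then position on the Teacher Listing Form.
sorted_teachers = sorted(
    teachers,
    key=lambda t: (SUBJECT_ORDER[t["subject"]], t["list_position"]),
)
print([t["name"] for t in sorted_teachers])  # ['B', 'C', 'A']
```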
The 2015–16 NTPS used a combination of mail-based methodology and Internet reporting for questionnaires, with telephone and in-person field follow-up. An advance letter was mailed to sampled schools during the summer of 2015 to verify school addresses and eligibility. Subsequently, a package containing the school and principal surveys and explanatory information was mailed to sampled schools. The Census telephone center called sampled schools to verify school information, establish a survey coordinator, and follow up on the Teacher Listing Form (TLF), which served as the teacher list frame. Sampled teachers were mailed questionnaires on a flow basis. Field follow-up was conducted for schools that had not returned the TLF. Schools were called from Census telephone centers to remind the survey coordinator to have staff complete and return all forms. Sampled principals and teachers were called from the telephone centers to attempt to complete the questionnaire with them over the phone. Field follow-up was conducted for schools and teachers that had not returned their questionnaires.
The U.S. Census Bureau conducted the data processing. Each questionnaire was coded according to its response status: for example, whether the questionnaire contained a completed interview, whether a respondent refused to complete it, or whether a school had closed. The next step was to make a preliminary determination of each case's interview status, i.e., whether it was an interview, a noninterview, or whether the respondent was ineligible for the survey.
Once the data were compiled, a computer program conducted a series of quality control checks, such as range checks, consistency edits, and blanking edits, and generated a list of cases where problems occurred in each survey. After the completion of these checks, the program made a final determination of whether the case was eligible for the survey, and if so, whether there were sufficient data for the case to be classified as an interview. As a result, a final interview status recode value was assigned to each case.
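The three edit types named above each catch a different kind of problem. A minimal sketch, with item names and valid ranges invented for the example (they are not taken from the NTPS edit specifications):

```python
# Illustrative sketch of range checks, consistency edits, and blanking edits.

def run_edits(case):
    problems = []
    # Range check: a reported value must fall in a plausible interval.
    if not (0 <= case.get("years_experience", 0) <= 60):
        problems.append("years_experience out of range")
    # Consistency edit: related items must agree with each other.
    if case.get("is_principal") and case.get("teaches_full_time"):
        problems.append("principal/full-time-teacher conflict")
    # Blanking edit: items behind a gate question answered "no" must be blank.
    if not case.get("has_masters") and case.get("masters_year"):
        problems.append("masters_year should be blank")
    return problems

case = {"years_experience": 72, "is_principal": False,
        "has_masters": False, "masters_year": 1999}
print(run_edits(case))
```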
The NTPS used two main approaches to impute data. First, donor-respondent methods, such as hot-deck imputation, were used. Second, if no suitable donor case could be matched, the few remaining items were imputed using the mean or mode from groups of similar cases. Finally, in rare cases in which imputed values were inconsistent with existing questionnaire data or outside the range of acceptable values, Census Bureau analysts reviewed the items and tried to determine an appropriate value.
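The hot-deck idea can be sketched in a few lines: a missing item is filled with the value reported by a "donor" record in the same imputation cell, falling back to a cell mean when no donor exists. The matching variables and values here are illustrative only, not the actual NTPS imputation cells:

```python
# Minimal hot-deck sketch (not the actual NTPS procedure): donors are
# matched on school level and size class; fall back to the mean.
from statistics import mean

records = [
    {"level": "primary", "size": "small", "salary": 48000},
    {"level": "primary", "size": "small", "salary": 52000},
    {"level": "primary", "size": "small", "salary": None},  # missing item
]

def hot_deck(records, cell_keys, item):
    for r in records:
        if r[item] is None:
            cell = tuple(r[k] for k in cell_keys)
            donors = [d[item] for d in records
                      if d[item] is not None
                      and tuple(d[k] for k in cell_keys) == cell]
            if donors:
                r[item] = donors[-1]      # take the nearest preceding donor
            # Fallback: no donor in the cell -> mean of similar cases.
            elif any(d[item] is not None for d in records):
                r[item] = mean(d[item] for d in records if d[item] is not None)
    return records

hot_deck(records, ("level", "size"), "salary")
print(records[-1]["salary"])  # 52000, copied from the last matching donor
```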
Weighting of the sample units was carried out to produce national estimates for public schools, principals, and teachers. The weighting procedures used in NTPS had three purposes: to take into account the school's selection probability; to reduce biases that may result from unit nonresponse; and to make use of available information from external sources to improve the precision of sample estimates.
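Two of the weighting ideas above, the base weight as the inverse of the selection probability and a nonresponse adjustment that shifts the weight of nonrespondents onto respondents in the same adjustment cell, can be sketched with made-up numbers (none of these values come from NTPS):

```python
# Illustrative weighting sketch: base weight, then nonresponse adjustment.

sample = [
    {"prob": 0.10, "responded": True},
    {"prob": 0.10, "responded": False},
    {"prob": 0.10, "responded": True},
]

# Base weight = 1 / selection probability.
for unit in sample:
    unit["base_weight"] = 1.0 / unit["prob"]

total = sum(u["base_weight"] for u in sample)                         # 30.0
responding = sum(u["base_weight"] for u in sample if u["responded"])  # 20.0
adjustment = total / responding                                       # 1.5

# Respondents absorb the nonrespondents' weight; nonrespondents get 0.
for unit in sample:
    unit["final_weight"] = (unit["base_weight"] * adjustment
                            if unit["responded"] else 0.0)
print(sum(u["final_weight"] for u in sample))  # 30.0, total weight preserved
```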
Weighted response rates are defined as the number of in-scope responding questionnaires divided by the number of in-scope sampled cases, using the base weight (the inverse of the probability of selection) of the record. There are two sampling stages for teachers: first, the school-level collection of the Teacher Listing Form (TLF) from sampled schools, and then the sampling of teachers from the TLF. When the response rates for both stages are multiplied together, the product is the overall weighted response rate. For principals and schools, only one sampling stage was involved; therefore, for these components, the weighted overall response rate and the weighted response rate are the same. The weighted response rates for each component are shown below.
Weighted unit and overall response rates using initial base weight, by survey: 2015–16

| Survey | Weighted unit response rate | Weighted overall response rate |
|---|---|---|
| Public School Principal | | |
| Public School Teacher Listing Form | | |
| Public School Teacher | | |

† Not applicable.
NOTE: Response rates were weighted using the inverse of the probability of selection (initial base weight).
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Teacher and Principal Survey (NTPS), “Public School, Public School Principal, and Public School Teacher Documentation Data Files,” 2015–16.
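Under the definitions above, the teacher overall rate is the product of the two stage-level rates. A minimal sketch, with made-up weights and response flags (none of the values reflect actual NTPS data):

```python
# Illustrative computation of weighted unit and overall response rates.

def weighted_rate(cases):
    """In-scope responding base weight / in-scope sampled base weight."""
    total = sum(w for w, in_scope, responded in cases if in_scope)
    resp = sum(w for w, in_scope, responded in cases if in_scope and responded)
    return resp / total

# (base_weight, in_scope, responded) triples with invented values.
tlf = [(10.0, True, True), (10.0, True, False), (5.0, False, False)]
teachers = [(20.0, True, True), (20.0, True, True), (20.0, True, False)]

tlf_rate = weighted_rate(tlf)            # stage 1: 10 / 20 = 0.5
teacher_rate = weighted_rate(teachers)   # stage 2: 40 / 60 = 2/3
overall = tlf_rate * teacher_rate        # product of the two stages
print(round(overall, 3))  # 0.333
```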