
Appendix B: Methodology and Technical Notes

Overview of the Beginning Teacher Longitudinal Study

The Beginning Teacher Longitudinal Study (BTLS) is sponsored by the National Center for Education Statistics (NCES) of the Institute of Education Sciences on behalf of the U.S. Department of Education and is conducted by the Census Bureau. BTLS is a national study of a cohort of beginning public school teachers who were initially interviewed as part of the 2007–08 Schools and Staffing Survey (SASS). SASS is the largest survey of public, private, and Bureau of Indian Education (BIE)-funded Kindergarten–12 school districts, schools, teachers, and administrators in the United States today. It provides extensive data on the characteristics and qualifications of teachers and principals, teacher hiring practices, professional development, class size, and other conditions in schools across the nation.

BTLS began in the 2007–08 school year as part of SASS, and follow-ups were conducted in the 2008–09 school year (together with the Teacher Follow-up Survey [TFS]) and the 2009–10 school year (as a stand-alone data collection). Collection is currently being conducted for the 2010–11 school year, and collections are expected to continue for a minimum of five waves. BTLS includes all beginning public school teachers who participated in the 2007–08 SASS, including teachers who subsequently left K–12 teaching, teachers who remained in the Pre-K–12 teaching profession, and teachers who returned to the profession. Beginning teachers who were sampled for SASS but did not respond to the survey could not be included in the data collection of subsequent BTLS waves. Beginning teachers were initially identified through a question on the SASS Teacher Questionnaire, and their beginning year of teaching was confirmed in subsequent waves.

Beginning public school teachers are teachers who began teaching in 2007 or 2008 in a traditional public or public charter school that offered any of grades K–12 or comparable ungraded levels. These teachers included regular full- and part-time teachers, itinerant teachers, and long-term substitutes as well as any administrators, support staff, librarians, or other professional staff who taught at least one regularly scheduled class in the 2007–08 school year (excluding library skills classes).

To access additional general information on SASS, or to view electronic copies of the questionnaires, go to the SASS home page (http://nces.ed.gov/surveys/sass). For additional information on specific BTLS-related topics discussed here, consult Tourkin et al. (forthcoming). For additional information on the 2007–08 SASS methodology, see Tourkin et al. (2010).

Sampling Frames and Sample Selection
Teachers sampled for the BTLS are part of the SASS teacher sample, which is based on the SASS school sample. Because SASS and BTLS are so interrelated, the description of sampling frames and sample selection begins with SASS and then moves on to BTLS.

SASS Public Schools. The foundation for the 2007–08 SASS public school frame was the preliminary 2005–06 Common Core of Data (CCD)1 Nonfiscal School Universe Data File. The CCD includes regular and nonregular schools (special education, alternative, vocational, or technical), public charter schools, and BIE schools. Due to their small sample size, teachers from BIE schools are not eligible for the BTLS; therefore, BIE schools are not discussed in this report. The sampling frame was adjusted from the CCD in order to fit the definition of a school eligible for SASS. For the SASS sampling frame, a school was defined as an institution, or part of an institution, that provides classroom instruction to students; has one or more teachers to provide instruction; serves students in one or more of grades 1–12 or the ungraded equivalent; and is located in one or more buildings apart from a private home. It was possible for two or more schools to share the same building; in this case, they were treated as different schools if they had different administrators (i.e., principal or school head).

The SASS definition of a school was generally similar to the CCD definition, with some exceptions. Whereas SASS is confined to the 50 states plus the District of Columbia, the CCD includes the other jurisdictions and Department of Defense schools (overseas and domestic). The CCD also includes some schools that do not offer teacher-provided classroom instruction in grades 1–12 or the ungraded equivalent (whereas these schools are excluded from SASS). In some instances, schools in the CCD are essentially administrative units: they may oversee entities that provide classroom instruction, or they may provide only funding and oversight.

CCD schools with the same location, address, and phone number were collapsed during the SASS frame building on the assumption that the respondent would consider them to be one school. Because SASS allows schools to define themselves on the school questionnaire, Census Bureau staff observed that schools generally report as one entity in situations where the administration of two or more schools in the CCD is the same. A set of rules was applied in certain states to determine in which instances school records should be collapsed; when they were, the student and teacher counts, grade ranges, and names as reported to the CCD were all modified to reflect the change.

Finally, additional school records were added to the sampling frame. Most of these records were for career technical centers (CTCs) or alternative, special education, or juvenile justice facilities in California, Pennsylvania, New York, Arizona, Connecticut, and the District of Columbia. For a detailed list of frame modifications, see Tourkin et al. (2010). After the adding, deleting, and collapsing of school records, the SASS public school sampling frame consisted of 90,410 traditional public schools and 3,850 public charter schools.

The SASS sample is a stratified probability proportionate to size (PPS) sample. All schools underwent multiple levels of stratification. The sample was allocated so that national-, regional-, and state-level elementary and secondary school estimates and national-level combined public school estimates could be made. The sample was allocated to each state by grade range (elementary, secondary, and combined) and school type (traditional public, public charter, BIE-funded, and schools with high–American Indian enrollment). For a full description of the allocation procedure, see Tourkin et al. (2010). Within each stratum, schools were systematically selected using a PPS algorithm. The measure of size used for the schools was the square root of the number of full-time-equivalent teachers reported or imputed for each school during the sampling frame creation. Any school with a measure of size greater than the sampling interval (the inverse of the rate at which the sample is selected) was included in the sample with certainty and thus automatically excluded from the probability sampling operation. This means that schools with an unusually high number of teachers relative to other schools in the same stratum were automatically included in the sample. If the pattern of probabilities (i.e., the sum of the probabilities of schools within school district and grade level) did not guarantee a sampled school for that school district, then the school with the highest probability of selection was included in the sample with certainty. This guaranteed that all school districts in these states would have at least one school in the sample. This produced a public school sample of 9,810 schools in the 2007–08 SASS (450 high–American Indian enrollment schools, 370 public charter schools, 20 CTC schools, and 8,970 other traditional public schools). For a more detailed explanation of PPS sampling, consult Cochran (1977).
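
To make the selection mechanics concrete, the following minimal sketch shows one common way to implement systematic PPS selection with certainty units, using the square root of the full-time-equivalent teacher count as the measure of size. It is written in Python purely for illustration (the actual sampling was carried out in the Census Bureau's production systems); all data and names are invented, and the sketch omits refinements such as iterating the certainty check.

    import numpy as np

    def systematic_pps(mos, n_sample, rng):
        # Systematic probability-proportionate-to-size selection.
        # mos: measures of size (here, sqrt of FTE teacher counts).
        # Returns the indices of the selected units.
        mos = np.asarray(mos, dtype=float)
        interval = mos.sum() / n_sample            # sampling interval
        certainty = np.where(mos > interval)[0]    # MOS > interval: selected with certainty
        rest = np.setdiff1d(np.arange(len(mos)), certainty)
        n_left = n_sample - len(certainty)
        if n_left <= 0:
            return certainty
        cum = np.cumsum(mos[rest])
        interval = cum[-1] / n_left                # interval for the noncertainty units
        points = rng.uniform(0, interval) + interval * np.arange(n_left)
        return np.concatenate([certainty, rest[np.searchsorted(cum, points)]])

    rng = np.random.default_rng(7)
    fte = np.array([4, 9, 16, 25, 400, 36, 49])      # invented FTE teacher counts
    selected = systematic_pps(np.sqrt(fte), 3, rng)  # the 400-FTE school is a certainty unit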

SASS Teachers. Teachers in SASS are defined as staff who teach regularly scheduled classes to students in any of grades K–12. Teacher rosters (i.e., Teacher Listing Forms) were collected from sampled schools, primarily by mail, and compiled at the Census Bureau. This compilation was done on an ongoing basis throughout the roster collection period. Along with the names of teachers, respondents at the sampled schools were asked to provide information about each teacher’s teaching experience (1–3 years, 4–19 years, and 20 or more years), teaching status (full or part time), and subject matter taught (special education, general elementary, math, science, English/language arts, social studies, vocational/technical, or other), as well as whether the teacher was expected to be teaching at the same school in the following year.

Sampling was also done on an ongoing basis throughout the roster collection period. Schools were first allocated an overall number of teachers to be selected within each school stratum. The Census Bureau then stratified teachers into five teacher types within each sampled school: (1) new teachers expected to stay at their current school, (2) mid-career and highly experienced teachers expected to stay at their current school, (3) new teachers expected to leave their current school, (4) mid-career teachers expected to leave their current school, and (5) highly experienced teachers expected to leave their current school.

Sampling rates for teachers varied among the strata listed above. All teachers in categories 3–5 were oversampled at different rates. So that a school would not be overburdened by sampling too large a proportion of its teachers, the maximum number of teachers per school was set at 20. About 13 percent of the eligible public schools did not provide teacher lists. For these schools, no teachers were selected. Within each teacher stratum in each school, teachers were selected systematically with equal probability.
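
As a rough sketch of the within-school step (again in Python, with invented data; the production selection applied the allocation, oversampling, and 20-teacher cap described above), equal-probability systematic selection within one teacher stratum might look like:

    import numpy as np

    def select_teachers(roster, n_take, rng):
        # Equal-probability systematic sample within one teacher stratum.
        n = len(roster)
        if n_take >= n:
            return list(roster)
        step = n / n_take                        # selection interval
        start = rng.uniform(0, step)             # random start
        idx = (start + step * np.arange(n_take)).astype(int)
        return [roster[i] for i in idx]

    rng = np.random.default_rng(11)
    stratum = ["teacher_%d" % i for i in range(14)]  # one stratum's roster (invented)
    sample = select_teachers(stratum, 4, rng)        # at most 20 teachers per school overall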

BTLS Teachers. All SASS traditional public or public charter school teachers who responded to the SASS Teacher Questionnaire and reported their first year of teaching as being 2007 or 2008 were included in the BTLS sample. About 2,100 teachers were initially included. Note that 2,100 is a rounded unweighted count of respondents.

Data Collection Procedures

The 2007–08 SASS data for teachers who began teaching in 2007 or 2008 constitute the first wave of BTLS data. The first wave collection utilized a primarily mail-based methodology with telephone and field follow-up. At the beginning of data collection, the Census Bureau telephone centers attempted to establish a survey coordinator at each school.2 Nonrespondents were contacted by telephone interviewers or field representatives. The 2007–08 SASS included several questionnaire components, which collected data from schools, school districts, principals, library media centers (public and BIE-funded schools only), and teachers. The BTLS cases were identified during the teacher collection, and their SASS data constituted the BTLS first wave. The SASS teacher data collection began in August 2007 and ended in June 2008. For complete details regarding the SASS, refer to Tourkin et al. (2010).

The Census Bureau conducted the second wave of BTLS together with the TFS during the 2008–09 school year. However, BTLS teachers used the longitudinal versions (TFS-2L and TFS-3L) of the questionnaires, which contained more questions than the TFS questionnaires. The second wave included those who indicated that they began teaching in either 2007 or 2008 in a public school during the first wave. The second wave data were primarily collected using an internet instrument. During data collection, the Census Bureau discovered that 101 teachers misreported their first year of teaching in the 2007–08 SASS and had actually begun teaching prior to 2007. These cases were removed from the BTLS sample. Telephone follow-up efforts were conducted to resolve cases with this discrepancy or to collect the missing data, as well as to encourage participation or to collect data over the phone from nonrespondents. Throughout the telephone follow-up, paper questionnaires were mailed upon request. Paper questionnaires were mailed in June 2009 to all teachers who had not yet completed the survey. The TFS data collection began in February 2009 and ended in August 2009. For more details regarding the TFS, refer to Graham et al. (2011).

The Census Bureau conducted the third wave of the BTLS during the 2009–10 school year. This wave is the third data collection from respondents who reported 2007 or 2008 as their first year of teaching on the 2007–08 SASS Teacher Questionnaire. The third wave of BTLS data were collected using a single internet instrument, so that current teachers (stayers, movers, and returners) and former teachers (leavers) all responded to the same questionnaire. Their current/former and stayer/mover/leaver/returner statuses were determined by skip patterns built into the internet instrument. Telephone follow-up efforts were conducted to encourage participation or to collect BTLS data over the phone from nonrespondents. After data collection, the Census Bureau determined that five cases had been misclassified as beginning teachers; these cases were removed from the data file. Approximately 1,990 teachers were included in the BTLS sample. Note that 1,990 is a rounded unweighted count of respondents. The data collection period for the third wave began in January 2010 and ended in June 2010. All questionnaires used to collect data for the BTLS are available on the BTLS website: http://nces.ed.gov/surveys/btls/. For more details on data collection for the BTLS, refer to Tourkin et al. (forthcoming).

Data Processing and Imputation

The BTLS first wave data were collected on the Teacher Questionnaire (Form SASS-4A) during the 2007–08 SASS. Once the BTLS first wave data collection was completed, the Census Bureau captured the data from completed questionnaires.3 All BTLS first wave data processing was conducted within the single SASS Teacher Questionnaire Data File.4

The Census Bureau applied a series of computer edits to identify and fix inconsistencies and impute items that were still "not answered" after taking into account item responses that were blank due to a questionnaire skip pattern. Once the data underwent all stages of computer edits, imputation,5 and review, the BTLS First Wave Data File was created.

The second wave of the BTLS was conducted together with the 2008–09 TFS. Data were collected primarily using an internet instrument, but paper questionnaires were also used. Once the data collection was completed, the Census Bureau electronically captured the data from completed paper questionnaires and combined them with data from the internet instrument. Data processing was conducted separately within each questionnaire.6 A series of computer edits were then run on the data to identify and correct inconsistencies, delete extraneous entries in situations where skip patterns were not followed correctly, or assign the "not answered" code to items that should have been answered but were not. A final interview status code was then assigned to each case. Once the Census Bureau analysts reviewed all data, they created the edited BTLS Second Wave Data File in preparation for the next stage of data processing: imputation. For further details about the TFS, refer to Graham et al. (2011).

The third wave of BTLS was collected as its own entity during the 2009–10 school year. Data were collected using an internet instrument only. Data from completed internet instruments were processed separately within each survey respondent type.7 A series of computer edits were then run on the data to identify and correct inconsistencies, delete extraneous entries in situations where skip patterns were not followed correctly, or assign the "not answered" code to items that should have been answered but were not. Once the Census Bureau reviewed all data, they created the edited BTLS Third Wave Data File in preparation for the next stage of data processing: imputation. Data collected from retrospective respondents were added into the second wave data file. As a result, these retrospective respondents represent 8.1 percent of the weighted total of 2008–09 current teachers (11.3 percent of the movers) and 8.6 percent of the weighted total of 2008–09 former teachers.

Data from the first, second, and third waves of BTLS are released together as one data file, the BTLS First Through Third Wave Preliminary Data File, which was released once processing for all three waves was complete. This allowed for the final stage of data processing: imputation across waves. Several variables in each BTLS wave were identified as "key variables," that is, variables important for reporting or analysis, and were imputed (or reimputed, in the case of the BTLS first wave data) once the edited BTLS Second and Third Wave Data Files were created and fully reviewed; all other items are subject to missing data. For the first wave, the previously imputed values for key items were removed and reimputed on the basis of the case's responses to items in subsequent BTLS waves whenever possible; if data were not available from subsequent waves, the existing imputed value remained. Two main approaches were used to fill "not answered" items with data. In the first, "cross-wave imputation," data were imputed from the same case's responses in either the preceding or the subsequent BTLS wave whenever possible; cross-wave imputation was used for all three waves of BTLS data. The second, "weighted sequential hot deck imputation," imputed data using items from other cases that shared certain predetermined characteristics, while keeping the means and distributions of the full set of data, including imputed values, consistent with those of the unimputed respondent data. Weighted sequential hot deck imputation was used for only the BTLS second and third wave data. For further details about the SASS, refer to Tourkin et al. (2010).
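
The cross-wave idea can be sketched briefly (Python with pandas; the table, column names, and values are invented, and the production imputation was far more elaborate, falling back to hot deck donors when no same-case value existed):

    import pandas as pd

    # Invented example: one row per teacher per wave; None means "not answered".
    df = pd.DataFrame({
        "teacher_id":   [1, 1, 1, 2, 2, 2],
        "wave":         [1, 2, 3, 1, 2, 3],
        "main_subject": ["math", None, "math", None, "science", None],
    })

    # Cross-wave imputation: fill a missing item from the same case's
    # response in the preceding wave, then from the subsequent wave.
    df = df.sort_values(["teacher_id", "wave"])
    df["main_subject"] = df.groupby("teacher_id")["main_subject"].transform(
        lambda s: s.ffill().bfill()
    )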

After the imputation of the key variables was completed, data from the three waves were combined into one three-wave BTLS file for release. The data file used to produce this report is viewed as preliminary because it will be reweighted after the data collection of the fourth wave is complete. For more details regarding data processing for BTLS, refer to Tourkin et al. (forthcoming).

Response Rates

Unit response rate. The unit response rate is the rate at which the sampled units respond by substantially completing the questionnaire. Unit response rates can be calculated as unweighted or weighted. Prior to the collection of the SASS teacher data, it was not known whether a teacher was a first-year teacher; it was known only whether the teacher was reported to have 1 to 3, 4 to 19, or 20 or more years of teaching experience. The response rates presented in this section are therefore those of the 2007–08 SASS public school teachers reported to have 1 to 3 years of experience, not just the first-year teachers included in the BTLS. The unweighted response rate is the number of 2007–08 SASS public school teachers reported to have 1 to 3 years of experience who substantially completed the questionnaire divided by the number of eligible (in-scope) sampled units, which includes respondents plus nonrespondents but excludes ineligible (out-of-scope) units. The weighted response rate is the base-weighted number of cases that substantially completed the questionnaire divided by the base-weighted number of eligible cases. The base weight for each sampled unit is the initial basic weight multiplied by the sampling adjustment factor.
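
In code form, the two rates might be computed as follows (a Python sketch with invented base weights and response indicators; the actual rates were computed from the full SASS files):

    import numpy as np

    base_weight = np.array([120.0, 95.0, 110.0, 80.0, 130.0])  # invented base weights
    responded   = np.array([True, True, False, True, True])    # substantially completed?

    unweighted_rate = responded.mean()  # respondents / eligible sampled units = 0.80
    weighted_rate = base_weight[responded].sum() / base_weight.sum()  # about 0.79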

Overall response rate. The overall response rate represents the response rate to the survey, taking into consideration each stage of data collection. For a teacher to be eligible for the SASS, the school must have completed the Teacher Listing Form during the 2007–08 SASS data collection, which provided a sampling frame for teachers at that school. The overall response rate for the BTLS first wave is the product of the survey response rates: (SASS Teacher Listing Form response rate) x (response rate of SASS public school teachers with 1 to 3 years of experience). The overall response rates for the second and third waves are the products of three factors: (SASS Teacher Listing Form response rate) x (response rate of SASS public school teachers with 1 to 3 years of experience) x (BTLS wave response rate).
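
For example, with purely hypothetical stage rates of 0.90 for the Teacher Listing Form, 0.85 for SASS public school teachers with 1 to 3 years of experience, and 0.88 for a BTLS wave, the overall response rate for that wave would be 0.90 x 0.85 x 0.88, or about 0.67. (The actual rates appear in table B-1.)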

Table B-1 summarizes the unweighted and base-weighted unit response and overall response rates for the BTLS.

Unit nonresponse bias analysis. NCES Statistical Standard 4-4 requires analysis of unit nonresponse bias for any survey stage with a base-weighted response rate of less than 85 percent. Even though the BTLS achieved or nearly achieved an 85 percent base-weighted response rate in all stages, all waves of BTLS data files were evaluated for potential bias. Comparisons between the eligible sample (respondents plus nonrespondents) and the respondents were made before and after the noninterview weighting adjustments were applied in order to evaluate the extent to which the adjustments reduced or eliminated nonresponse bias. The following section explains the methodology and summarizes the conclusions.

As outlined in appendix B of the NCES Statistical Standards (U.S. Department of Education 2003), the degree of nonresponse bias is a function of two factors: the nonresponse rate and how much the respondents and nonrespondents differ on survey variables of interest. The mathematical formulation to estimate bias for a sample mean of variable y is as follows:

Bias(ȳ_R) = ȳ_R − ȳ = (n_NR / n)(ȳ_R − ȳ_NR),

where ȳ_R is the estimated mean based on respondents only, ȳ_NR is the estimated mean based on nonrespondents only, ȳ is the estimated mean for the full eligible sample, n is the number of eligible sampled cases, and n_NR is the number of eligible nonrespondents.
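
As a purely hypothetical illustration of this formula: if 85 percent of eligible teachers responded, the respondent mean for some characteristic were 40 percent, and the nonrespondent mean were 36 percent, then the full-sample mean would be (0.85)(40) + (0.15)(36) = 39.4 percent, the bias of the respondent-based estimate would be (0.15)(40 − 36) = 0.6 percentage points, and the relative bias would be 0.6/39.4, or about 1.5 percent.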

Relative bias was estimated for variables known for respondents and nonrespondents. For the first wave, first-year teachers were not identifiable from the sampling frame, although teachers in the first 3 years of their career were identified on the Teacher Listing Form. Therefore, a nonresponse bias analysis on 2007–08 SASS public school teachers with 1 to 3 years of experience was carried out as a proxy for the BTLS first wave. For this analysis, the following variables were available: teacher main subject, full-time/part-time status, charter status, school grade level, percent of K–12 students approved for free or reduced-price lunches, school enrollment, school urbanicity, school magnet status, percent Hispanic enrollment, percent Asian enrollment, percent Black enrollment, percent Native American enrollment, percent White enrollment, and school Title I eligibility status. For the second and third waves, and the longitudinal datasets, there are extensive data available for all teachers from the 2007–08 SASS sampling frame and teacher data files. The variables used are presented in exhibit B-1.

The following steps were followed to compute the relative bias. First, the nonresponse bias was estimated and tested to determine whether the bias was significant at the .05 level. Second, noninterview adjustments were computed, and the variables listed above were included in the nonresponse models. The noninterview adjustments, which are included in the weights, were designed to significantly reduce or eliminate unit nonresponse bias for the variables included in the models. Third, after the weights were computed, any remaining bias was estimated for the variables listed above, and statistical tests were performed to check for remaining significant nonresponse bias. For this comparison, nonresponse bias was calculated as the difference between the base-weighted sample mean and the nonresponse-adjusted respondent mean, which evaluates the effectiveness of each noninterview adjustment in mitigating nonresponse bias. Table B-2 contains summary statistics of the findings.

As shown in table B-2, for 2007–08 SASS public school teachers with 1 to 3 years of experience, both the mean and the median estimated percent relative bias decreased after the weighting adjustment, but the percentage of variable categories with significant bias increased to about 5 percent. For the second wave respondents, about 7 percent of the variable categories were significantly biased before the nonresponse weighting adjustments, and about 3 percent were significantly biased after the adjustments. For the second wave including retrospective respondents, the percentage of variable categories that were significantly biased decreased after the noninterview weighting adjustments (from about 9 percent to about 6 percent), and the mean relative bias was also reduced. For the third wave respondents, the percentage of variable categories that were significantly biased decreased from about 10 percent before the weighting adjustments to about 6 percent after. Likewise, the longitudinal weighting adjustments reduced the percentage of significantly biased variable categories from about 8 percent to about 5 percent, and the longitudinal weighting including retrospective cases reduced it from about 10 percent to about 6 percent. In general, the weighting adjustments eliminated some, but not all, significant bias. For detailed information and results for the unit bias analysis of the BTLS, see Tourkin et al. (forthcoming). For further details about the bias analysis conducted on the Teacher Listing Form, refer to Tourkin et al. (2010).

Item response rates. Item response rates indicate the percentage of respondents who answered a given survey question or item. Weighted item response rates are produced by dividing the number of sampled cases responding to an item by the number of sampled cases eligible to answer the item and adjusting by either the base or final weight. The base weight for each sampled unit is the initial basic weight multiplied by the sampling adjustment factor. The final weight for each sampled unit is the base weight adjusted for unit nonresponse and then ratio adjusted to the frame total.
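
For instance, with invented figures, if the cases eligible to answer an item had a combined base weight of 9,500 and the cases that answered it had a combined base weight of 8,740, the base-weighted item response rate would be 8,740/9,500, or 92 percent.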

Table B-3 provides a brief summary of the base- and final-weighted item response rates for BTLS public school teachers in the first, second, and third waves. The nonresponse bias analysis conducted at the item level revealed no substantial evidence of item bias in the data files. For further information on the nonresponse bias analysis and item response rates for BTLS, see Tourkin et al. (forthcoming).

Weighting
The general purpose of weighting is to scale up the sample estimates to represent the target survey population. For the BTLS first wave, weights are obtained directly from the 2007–08 SASS, since all interviewed beginning teachers in SASS were eligible for BTLS. The final weight for the first wave is TFNLWGT, which is called W1TFNLWGT on BTLS. For the BTLS second and third waves, an initial basic weight (the inverse of the sampled teacher's probability of selection) is used as the starting point. Then, a weighting adjustment is applied that reflects the impact of the SASS teacher weighting procedure. Next, a nonresponse adjustment factor is calculated and applied using data that are known about the respondents and nonrespondents from the sampling frame. Finally, a ratio adjustment factor is calculated and applied, which adjusts the sample totals to frame totals in order to reduce sampling variability. The product of the factors listed above is the final cross-sectional weight for the second and third waves of BTLS; these weights appear in the data file as W2AFWT (applies to second wave respondents) and W2RAFWT (applies to respondents and retrospective respondents) for the second wave, and W3AFWT for the third wave. For longitudinal analysis over the 3-year collection period, W3LWGT is provided. Longitudinal weights should be used when change over time within a single population is being examined by using more than one wave of data. For further information on weighting, see Tourkin et al. (forthcoming).
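
The chain of adjustments can be sketched as follows (Python; every factor value here is invented for illustration, and the actual adjustment procedures are documented in Tourkin et al. (forthcoming)):

    # Invented values for one sampled teacher.
    basic_weight = 1.0 / 0.012    # inverse of the teacher's selection probability
    sass_teacher_adj = 1.05       # reflects the SASS teacher weighting procedure
    nonresponse_adj = 1.10        # from frame data on respondents and nonrespondents
    ratio_adj = 0.97              # aligns sample totals with frame totals

    # Final cross-sectional weight (e.g., W2AFWT or W3AFWT) for this teacher.
    final_weight = basic_weight * sass_teacher_adj * nonresponse_adj * ratio_adj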

The weights used in the tables in this report may vary by table and within table. For table 1, weights vary by row. Row 1 is calculated using W1TFNLWGT, row 2 is calculated using W2RAFWT, and row 3 is calculated using W3AFWT. For table 2, weights vary by data column. Estimates in data columns 1 through 3 are calculated using W2RAFWT, while estimates in data columns 4 through 6 are calculated using W3AFWT. W3AFWT is used to calculate all data columns in table 3. The weights in table 4 vary by data column. Data column 1 is calculated using W1TFNLWGT, data column 2 is calculated using W2RAFWT, and data column 3 is calculated using W3AFWT. The weights in table 5 also vary by data column. Data column 1 is calculated using W2AFWT and data column 2 is calculated using W3AFWT. The corresponding replicate weights for each final weight were used to calculate the corresponding standard errors for each table. Statistical Analysis Software (SAS) (9.2) was used to compute the statistics for this report.

Variance Estimation

In surveys with complex sample designs, such as SASS or BTLS, direct estimates of sampling errors that assume a simple random sample will typically underestimate the variability in the estimates. The SASS sample design and estimation include procedures that deviate from the assumption of simple random sampling, such as stratifying the school sample, oversampling new teachers, and sampling with differential probabilities. Therefore, to accurately estimate variance, users must employ special calculations.

One method of calculating sampling errors to reflect these aspects of the complex sample design of SASS is replication. Replication methods involve constructing a number of subsamples (i.e., replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variance of the statistic. The BTLS data file includes one set of 88 replicate weights for each cross-sectional and longitudinal weight designed to produce variance estimates. The replicate weights for cross-sectional analysis are W1TREPWT1–W1TREPWT88 for the first wave, W2ARWT1–W2ARWT88 and W2RARWT1–W2RARWT88 (includes retrospective respondents) for the second wave, and W3ARWT1–W3ARWT88 for the third wave. For longitudinal analysis over the 3-year collection period, the replicate weights are W3LRWGT1–W3LRWGT88.
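
A minimal sketch of the computation (Python; variable names are invented, and the exact variance formula, including any multiplier, should be taken from the BTLS documentation rather than from this sketch):

    import numpy as np

    def replicate_se(full_weight, rep_weights, y):
        # Standard error of a weighted mean from replicate weights.
        # rep_weights: array of shape (n_cases, 88), e.g. W3ARWT1-W3ARWT88.
        theta = np.average(y, weights=full_weight)      # full-sample estimate
        reps = np.array([np.average(y, weights=rep_weights[:, r])
                         for r in range(rep_weights.shape[1])])
        variance = np.mean((reps - theta) ** 2)         # MSE around the full estimate
        return np.sqrt(variance)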

Reliability of Data

The BTLS First Through Third Wave Preliminary Data File is considered preliminary for two reasons. First, due to the longitudinal nature of BTLS, data collected in subsequent waves have been and will be used to adjust previously missing, imputed, or inaccurate values. Thus, data collected in the fourth and fifth waves may lead to changes in first, second, or third wave data. Second, first wave weights were developed prior to learning that seven additional members of the sample did not meet the definition of a beginning teacher: five did not start teaching in 2007 or 2008, and two were not teachers of regularly scheduled classes. As a result of obtaining this new information during third wave processing, these cases (representing 0.27 percent of the first wave weighted population) were removed. The subsequent waves have not yet been reweighted and will be reweighted before the next preliminary data release in 2012. The 2012 release will also include data from the fourth wave. The final dataset will be released in 2013; it will replace the preliminary datasets and will be accompanied by expanded documentation. For more information about the data collection and processing, please see Tourkin et al. (forthcoming).

BTLS estimates are based on samples. The sample estimates may differ somewhat from the values that would be obtained from administering a complete census using the same questionnaires, instructions, and enumerators. The difference occurs because a sample survey estimate is subject to two types of error: nonsampling and sampling. Estimates of the magnitude of the BTLS sampling error, but not the nonsampling error, can be derived or calculated. Nonsampling errors are attributed to many sources, including definitional difficulties, the inability or unwillingness of respondents to provide correct information, differences in the interpretation of questions, inability to recall information, errors made in collection (e.g., in recording or coding the data), errors made in processing the data, and errors made in estimating values for missing data. Quality control and edit procedures were used to reduce errors made by respondents, coders, and interviewers.



1 For more information about the CCD, see http://nces.ed.gov/ccd.
2 The role of the survey coordinator was to be the main contact person at the school. A survey coordinator's duties included facilitating data collection by passing out questionnaires to the appropriate staff, reminding the staff to complete them, and collecting the questionnaires to return to the Census Bureau.
3 The 2007–08 SASS consisted of nine questionnaires: School District Questionnaire, Principal Questionnaire, Private School Principal Questionnaire, School Questionnaire, Private School Questionnaire, Public School Questionnaire (With District Items), Teacher Questionnaire, Private School Teacher Questionnaire, and School Library Media Center Questionnaire. The BTLS includes only teachers who taught in a public school (traditional or charter) in the 2007–08 school year; therefore, the only SASS questionnaire type that will be discussed is the Teacher Questionnaire.
4 After all data processing of the SASS Teacher Questionnaire data was completed, the BTLS First Wave data file was created, which includes only those public school teachers who began teaching in 2007 or 2008; all other respondents were omitted from the BTLS First Wave Data File.
5 SASS data files are fully imputed; therefore, the BTLS First Wave Data File began as a fully imputed data file since the data were collected on the 2007–08 SASS Teacher Questionnaire. The imputation that occurred for the BTLS first wave during SASS data processing was specific to that wave and did not occur during data processing for the BTLS second and third waves.
6 There are two questionnaires that compose the BTLS second wave. Both questionnaires are for 2007–08 SASS public school teacher respondents who began teaching in 2007 or 2008. The Questionnaire for Current Teachers (form TFS-3L) collects information on sampled teachers who currently teach students in any of grades Pre-K–12 and the Questionnaire for Former Teachers (form TFS-2L) collects information about sampled teachers who left the Pre-K–12 teaching profession after the 2007–08 school year. Processing specifications used for BTLS data were slightly different from those used for TFS data.
7 The BTLS third wave internet instrument contained a single survey with a variety of questionnaire paths based on whether a respondent was a current or former teacher during the second and third waves of the BTLS.

