NTPS Methodological Research
Below is a collection of papers and presentations that summarize methodological research using NTPS data. Bolded authors indicate NCES staff.
- Spiegelman, M., Zotti, A., and Merlin, J. (2024). Supplementing a Paper Questionnaire with Web and Two-way Short Message Service (SMS) Surveys. Journal of Survey Statistics and Methodology, 12(3), 697–711.
- Zotti, A. and Spiegelman, M. (2024). Would an Email Subject Line Phrased in Any Other Way Read As Sweet? Proceedings of the American Association for Public Opinion Research Annual Conference, Atlanta, GA.
- Spiegelman, M., Zotti, A. (2024). Experimental Monetary Incentives in an Establishment Survey: Using Prepaid, Cash Incentives with K-12 School Gatekeepers and Employees. Proceedings of the American Association for Public Opinion Research Annual Conference, Atlanta, GA.
- Spiegelman, M., Zotti, A. (2023). You Didn't Answer Our Survey, but What About this Text? Converting Hard-to-reach Respondents Through Text Messaging. Proceedings of the Federal Committee on Statistical Methodology 2023 Research and Policy Conference, Hyattsville, MD.
- Alaoui, S., Hunter-Zinck, H., Etudo, U., Avenilla, L., Zotti A., Kolli Y., Campanello P., Mathur, A. (2023). A Generic and Automated Staff Scraping Tool for School Webpages. Proceedings of the Federal Committee on Statistical Methodology 2023 Research and Policy Conference, Hyattsville, MD.
- Spiegelman, M. and Zotti, A. (2023). Can We Have a Moment of Your Time? Respondent Mode Preferences between Short Paper, Web, and Two-Way SMS Surveys? Proceedings of the American Association for Public Opinion Research Annual Conference, Philadelphia, PA.
- Zukerberg, A., Zotti, A., and Spiegelman, M. (2023). Are you getting my text? Proceedings of the American Association for Public Opinion Research Annual Conference, Philadelphia, PA.
- Merlin, J., Zotti, A. (2022). Is A Picture Worth a Thousand Words?: Impact of Infographics on Response Rates in a Federal Survey. Proceedings of the Federal Committee on Statistical Methodology 2022 Research and Policy Conference, Washington, DC.
- Avenilla, L. (2022). Web Scraping to Improve Establishment Employee Surveys - the Case of the National Teacher and Principal Survey. Proceedings of the Federal Committee on Statistical Methodology 2022 Research and Policy Conference, Washington, DC.
- Spiegelman, M., Zotti, A. (2022). Is This Information Correct? Assessing The Burden And Data Quality Tradeoffs Of Using Extant Data. Proceedings of the 2022 Federal Computer Assisted Survey Information Collection Workshops, Virtual.
- Katz, J., Kephart, K., Luck, J., Holzberg, J. (2021). How Should We Text You? Designing and Testing Text Messages for the 2021–22 Teacher Follow-Up Survey (TFS) and Principal Follow-Up Survey (PFS). Proceedings of the Federal Committee on Statistical Methodology 2021 Research and Policy Conference, Virtual.
- Spiegelman, M., Zotti, A. (2021). Yes, I consent to receive text messages: Conducting follow-up text surveys with principals and teachers. Proceedings of the Federal Committee on Statistical Methodology 2021 Research and Policy Conference, Virtual.
- Varela, K., Zotti, A. (2021). Getting Beyond the Front Office is Half the Battle: Multiple Levels of Incentives in a Two-Stage School Establishment Survey. Proceedings of the Sixth Annual International Conference on Establishments Statistics, Virtual.
- Spiegelman, M., Varela, K., Zotti, A. (2021). The Path of Least Resistance: Changing Mode to Boost Response Rates in an Establishment Survey. Proceedings of the Sixth Annual International Conference on Establishments Statistics, Virtual.
- Kephart, K., Katz, J., Luck, J., Holzberg, J. (2021). Using Remote Cognitive Testing to Modify Questions about Education during Coronavirus Pandemic. Proceedings of the American Association for Public Opinion Research Annual Conference, Virtual.
- Spiegelman, M., Kephart, K., Katz, J. (2021). What is the Average Daily Attendance at Your (Virtual) School? Understanding Data From Schools, Principals, and Teachers During a Pandemic. Proceedings of the American Association for Public Opinion Research Annual Conference, Virtual.
- Zukerberg, A., Zotti, A., & Cox, S. (2019). Better Late Than Never? The Use of An Adaptive Incentive with Nonrespondents. Proceedings of the American Association for Public Opinion Research Annual Conference, Toronto, Canada.
- Spiegelman, M., Okon, A., Thomas, T., Escoto, S. (2019). Who Works Here? Rostering School Staff with Vendor-Assisted Lists. Proceedings of the American Association for Public Opinion Research Annual Conference, Toronto, Canada.
- Varela, K., Zotti, A. (2019). The Effects of Providing Incentives on Multiple Levels on Response and Data Quality. Proceedings of the American Association for Public Opinion Research Annual Conference, Toronto, Canada.
- Zotti, A. (2019). Using Predictive Models to Assign Treatment Groups for NTPS 2017-18 Teacher Incentives Experiment. Proceedings of the Annual Federal Computer Assisted Survey Information Collection Workshops, Washington, DC.
- Varela, K., Zotti, A. (2018). Using Response Propensity Models to Equally Disperse 2nd Stage Sampled Cases Across Incentive Treatment Groups. Proceedings of the American Association for Public Opinion Research Annual Conference, Denver, CO.
- Spiegelman, M., Sheppard, D., Brummet, Q. (2018). Evaluation of Vendor School and Teacher Lists for the 2015-16 National Teacher and Principal Survey. Proceedings of the Federal Committee on Statistical Methodology 2018 Research and Policy Conference, Washington, DC.
- Redline, C., Zukerberg, A., Owens, C., Ho, A. (2016). Instructions in Self-administered Survey Questions: Do They Improve Data Quality or Just Make the Questionnaire Longer? Proceedings of the American Association for Public Opinion Research Annual Conference, Austin, TX.
- Redline, C., Zukerberg, A., Rizzo, L., Riddles, M. (2015). Hope Springs Eternal: Will a Probability Sample of Schools and Principals Respond by Web and Provide Email Addresses? Proceedings of the American Association for Public Opinion Research Annual Conference, Hollywood, FL.
- Riddles, M., Marker, D., Rizzo, L., Wiley, E., Zukerberg, A. (2015). Adaptive Design for the National Teacher Principal Survey. Proceedings of the American Association for Public Opinion Research Annual Conference, Hollywood, FL.
- Grady, S., Hansen, R. (2014). Integrating Administrative Data with Survey Data: A Total Survey Error Case Study using Education Data from the U.S. Department of Education. Proceedings of the 2014 International Total Survey Error Workshop, Washington, DC.
- Wang, Y. and Hill, J. (2011). Impact of Differential Incentive Amounts on Early and Final Survey Response Rates. Proceedings of the American Association for Public Opinion Research Annual Conference, Phoenix, AZ.
- Battle, D. and Coopersmith, J. (2010). Revision to Teacher Nonresponse Bias Analysis. Proceedings of the American Association for Public Opinion Research Annual Conference, Chicago, IL.
- Aritomi, P. and Hill, J. (2010). Mode Effect Analysis: Paper respondents vs. Web respondents in the 2004–05 Teacher Follow-up Survey. Proceedings of the American Association for Public Opinion Research Annual Conference, Chicago, IL.
- Tourkin, S., Parmer, R., Cox, S., and Zukerberg, A. (2005). (Inter) Net Gain? Experiments to Increase Response. In JSM Proceedings, Survey Research Methods Section. Alexandria, VA: American Statistical Association. 3995–4001.
- Zukerberg, A., Soderborg, A., Warner, T., Parmer, R., and Tourkin, S. (2005). Too Much of a Good Thing? Working Through Establishment Gatekeepers. In JSM Proceedings, Survey Research Methods Section. Alexandria, VA: American Statistical Association. 4028–4030.
Supplementing a Paper Questionnaire with Web and Two-way Short Message Service (SMS) Surveys
When deciding which modes to offer, researchers consider cost, known respondent contact information, and potential mode effects. For a short survey on employment, we evaluate the effect of adding one of two new electronic data collection modes to a mailed questionnaire. We sent a survey to principals who previously responded to the National Center for Education Statistics' (NCES) National Teacher and Principal Survey (NTPS) asking about their current job status. This questionnaire, known as the Principal Follow-up Survey (PFS), has typically been administered as a short paper form that is mailed to NTPS respondents. In 2022, the PFS introduced two new modes of completion, and principals were randomly assigned to receive: (i) a paper form only; (ii) a paper form, as well as emails with a direct link to complete a web survey; or (iii) a paper form, as well as invitations by text message to complete an automated two-way short message service text survey by responding to texted "yes/no" questions. This article compares overall response rates and time-to-response by mode to determine respondent preferences for completing short surveys. Adding either electronic mode significantly increased response rates and decreased the number of days in which completed surveys were received, compared to offering only a paper questionnaire. Although email and text messages are both forms of electronic communication that may be accessible on a smartphone, the added text message survey resulted in higher response rates than the added web survey. This suggests that respondents interact differently with emails and text messages they receive and that offering an option to complete a survey by text message can increase the speed and efficiency of data collection for short surveys. View full article here.
Would an Email Subject Line Phrased in Any Other Way Read As Sweet?
As surveys increasingly rely on electronic communications to contact respondents, it is becoming more important to stand out in a crowded inbox. To test how the tone and content of different email subject lines affect a recipient's likelihood of accessing and responding to the survey by clicking the embedded URL, we tested a variety of subject lines in emails to school principals and teachers at different points in data collection. The subject lines fell into three tonal categories ("Descriptive," "Altruistic," and "Persuasive") and were crossed with whether or not a "respond by" date was included. School staff were randomly assigned across the resulting treatment groups. To assess the effectiveness of these email subject lines, metrics such as the email's "open" rate, the email's "click-through" rate, and the survey response rate will be calculated. The open rate will be determined through available paradata indicating whether a recipient opened the email or left it unread. The click-through rate will be determined by the email source indicator embedded in the URLs included in each email contact. Survey response will also be calculated at multiple time points, including in the week following each email send, to note any significant increases that can be attributed to the different emails, and at the end of data collection, to assess whether any of the treatments led to a significantly different final response rate. View the presentation slides here.
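To make the evaluation metrics above concrete, the following is a minimal sketch in Python (pandas) of how open, click-through, and response rates could be tabulated by subject-line treatment; the data frame and column names are hypothetical placeholders, not the study's actual paradata variables.

```python
import pandas as pd

# Hypothetical paradata: one row per emailed sample member (illustrative values only).
emails = pd.DataFrame({
    "treatment": ["descriptive", "altruistic", "persuasive", "descriptive", "persuasive"],
    "opened":    [1, 0, 1, 1, 0],   # 1 if paradata show the email was opened
    "clicked":   [1, 0, 0, 1, 0],   # 1 if the embedded, source-coded URL was clicked
    "responded": [1, 0, 0, 0, 0],   # 1 if a completed survey was received
})

# Open, click-through, and response rates by subject-line treatment group.
rates = emails.groupby("treatment").agg(
    n=("opened", "size"),
    open_rate=("opened", "mean"),
    click_through_rate=("clicked", "mean"),
    response_rate=("responded", "mean"),
)
print(rates)
```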
Experimental Monetary Incentives in an Establishment Survey: Using Prepaid, Cash Incentives with K-12 School Gatekeepers and Employees
Survey incentives have been studied extensively when used in household surveys, but these results may not apply to surveys of establishments and their employees. For example, it may be more difficult to reach your targeted respondent within an establishment, and employees may have different motivations to complete a survey when contacted at their workplace than at their home. We conducted several experiments within K-12 public and private schools to determine strategies that might be effective when surveying a specific type of employee: teachers. For the U.S. Department of Education's National Teacher and Principal Survey (NTPS), we tested the use of small, prepaid cash incentives in a variety of ways: (1) included in our initial outreach to teachers; (2) included in both our initial outreach to teachers and, if needed, a second, larger incentive was sent later during data collection; and (3) included in our initial outreach to a "gatekeeper," who we asked to distribute materials to sampled teachers. For each of these experiments, we discuss whether incentives increased response rates overall or for key subgroups, increased the representativeness of our respondents, or prompted sampled teachers to respond more quickly. These results may benefit similar studies that are considering using incentives, and the broader design can inform strategies for testing the use of incentives in other types of establishments. View the presentation slides here.
You Didn't Answer Our Survey, but What About this Text? Converting Hard-to-reach Respondents Through Text Messaging
As response rates decrease, new data collection methods may help persuade particularly hard-to-reach sample members to respond to surveys. In this experiment, we attempt to convert reluctant respondents by introducing text messages alongside other contact methods and by offering a short, easy-to-answer survey. The experiment was conducted during the 2022 Teacher Follow-up Survey to the 2020-21 National Teacher and Principal Survey, sponsored by the National Center for Education Statistics. Sample members who had not completed either a web survey or a paper questionnaire after receiving 6 e-mails and 4 mailed packages were randomly assigned to one of three treatments: receive an additional (5th) mailed package; receive a text message with a web survey URL and user ID; or receive a text message inviting them to answer a short two-way SMS text survey, in lieu of the full questionnaire, by responding to texted yes/no questions. We compare whether introducing a new mode of contacting and surveying respondents, late in the data collection process, can persuade reluctant respondents to complete the questionnaire. In addition, we examine whether reluctant respondents who engage with the short two-way SMS survey can then be persuaded to complete the full survey. View the presentation slides here.
A Generic and Automated Staff Scraping Tool for School Webpages
The National Teacher and Principal Survey (NTPS) collects information on elementary through high school principals and teachers across the United States. The survey occurs every two years and consists of an initial survey and a follow-up survey. Due to declining response rates, NTPS is investigating alternative data sources to supplement survey responses. To address this need, we used web scraping and a custom pipeline to identify, download, and extract information from school staff roster webpages. The pipeline consists of several steps. First, we submit salted samples of school addresses to the Google Places API to gather school websites and then crawl the returned sites to identify staff directory pages. We then use pretrained and custom-developed named entity taggers, in combination with the automated identification of HTML tag motifs, to consistently extract teacher information, such as names and subjects, from a page. Finally, we represent each page's HTML elements as a graph and conduct graph traversals to link a teacher's name with their corresponding subject and email. We report on the volume of information scraped and validation against manually curated datasets to demonstrate the successful development of a generalizable web scraping program that provides data to augment NTPS survey responses. View the presentation slides here.
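As a rough illustration of the crawl-and-extract pattern in this pipeline, here is a minimal Python sketch using requests and BeautifulSoup. It does not reproduce the Google Places API lookup, the pretrained or custom named-entity taggers, the HTML tag-motif detection, or the graph-based linking of names to subjects and emails; the keyword list, URL, and regex-based name extraction are illustrative assumptions only.

```python
import re
import requests
from bs4 import BeautifulSoup

# Illustrative keywords that often appear in links to school staff directories.
DIRECTORY_KEYWORDS = ("staff", "directory", "faculty", "our teachers")

def find_staff_directory(school_url: str) -> str | None:
    """Crawl a school homepage and return the first link that looks like a staff directory."""
    html = requests.get(school_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", href=True):
        link_text = a.get_text(" ", strip=True).lower()
        if any(keyword in link_text for keyword in DIRECTORY_KEYWORDS):
            return requests.compat.urljoin(school_url, a["href"])
    return None

def extract_candidate_names(directory_url: str) -> list[str]:
    """Crude stand-in for the paper's NER step: pull 'First Last' patterns from page text."""
    html = requests.get(directory_url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    return re.findall(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", text)

if __name__ == "__main__":
    directory = find_staff_directory("https://www.example-school.org")  # hypothetical URL
    if directory:
        print(extract_candidate_names(directory)[:20])
```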
Can We Have a Moment of Your Time? Respondent Mode Preferences between Short Paper, Web, and Two-Way SMS Surveys?
Respondents' preferred survey completion mode may vary by survey length, content, and burden. When deciding what modes to offer, researchers additionally consider cost, known respondent contact information, and potential mode effects. For a short survey on employment activities, we compare two new electronic data collection modes to a mailed questionnaire. We sent a short Principal Follow-up Survey (PFS) to principals who previously responded to the National Center for Education Statistics' (NCES) National Teacher and Principal Survey (NTPS) to ask about their current job status (whether they are still a principal at the same school, at a different school, or are no longer working as a principal). This questionnaire has typically been administered as a one-page paper questionnaire that is mailed to respondents. In 2022, the PFS introduced two new modes of completion, and principals were randomly assigned to receive: (1) a paper form only; (2) a paper form, as well as e-mails with a direct link to complete a web survey; or (3) a paper form, as well as invitations by text message to complete a two-way SMS text survey by responding to texted yes/no questions. This presentation compares overall response rates, time-to-respond, and completion mode in order to determine respondent preferences for completing short surveys. View the presentation slides here.
Are you getting my text?
For many years, researchers have recommended varying the type of contact to increase response. Recently, multi-mode surveys have begun to use text messaging as a way to reach potential respondents. As part of a multi-mode follow-up survey of teachers, those who had provided a cell phone number and consented to receive text messages were sent a text message with a survey link in place of the mail reminder sent to other treatment groups. Another group was sent the text later in the nonresponse follow-up. Finally, to look at the effectiveness of two-way SMS texting for collecting status information, a third group, as well as all nonrespondents, received a short two-question SMS text survey at the end of data collection. The study design allows us to look at the implications of including text messaging at different stages of a multi-mode data collection strategy and whether this strategy can replace mailed follow-up materials. This methodology brief will look at the impact of the texted link on timeliness of response and response rates, as well as the data collection stage at which texting links is most effective. View the presentation slides here.
Is A Picture Worth a Thousand Words?: Impact of Infographics on Response Rates in a Federal Survey
The decline in federal survey response rates in recent years is well documented and challenges agencies' ability to report reliable estimates on subpopulations of interest. Previous research has indicated that providing potential respondents with personalized data during recruitment efforts can increase response rates, though with mixed effects. However, most experiments only measured high-level response rates and lacked detail on the effect of personalization on the recruitment of respondents in key subgroups. The 2021-22 Teacher Follow-up Survey (TFS), a follow-up study to the 2020-21 National Teacher and Principal Survey (NTPS), conducted an infographic experiment to measure the effect of mailed general and personalized infographics on TFS response rates. Each infographic presents findings from prior collections with plain-language explanations about the importance and utility of the survey data. Response rates among subgroups show mixed effects, increasing for some types of respondents and decreasing for others. This presentation will investigate the effect of each infographic condition by selected teacher or school characteristics to determine whether the infographics were more persuasive for some subgroups than others. These results could have practical implications for survey organizations seeking to use personalized outreach materials as a recruitment strategy. View the presentation slides here.
Web Scraping to Improve Establishment Employee Surveys - the Case of the National Teacher and Principal Survey
As part of the Census Bureau's goal to use "big data" or "administrative records" to augment the reach of traditional data sources, researchers are utilizing data science methods to explore how alternative data sources can be used to supplement survey data collection. These data science methods include web scraping, or the process of systematically collecting information publicly available on the internet, as an alternative to collecting the same information directly from a survey respondent. Researchers at the Center for Optimization and Data Science (CODS) at the Census Bureau are using web scraping to collect teacher and associated school data from the websites of schools sampled for the National Teacher and Principal Survey (NTPS). Historically, selected schools are asked to provide a roster of teachers, which is used to select teachers to receive a teacher questionnaire. In the academic year following when NTPS is conducted, teachers are also contacted to complete the Teacher Follow-up Survey (TFS), and schools are then asked to provide an updated teacher roster. With web scraping, our goal is to use the school's existing internet presence to obtain teacher roster information for both the NTPS and the TFS, thereby minimizing the response burden on school staff and helping the survey team adjust sampling methodology and recruitment efforts in a more timely and cost effective manner. View the presentation slides here.
Is This Information Correct? Assessing The Burden And Data Quality Tradeoffs Of Using Extant Data.
Providing survey respondents with pre-filled data, either from extant data sources or from previous rounds of a longitudinal survey, can reduce respondent burden. However, this may reduce data quality if respondents choose to satisfice or otherwise provide low-quality data. For the National Teacher and Principal Survey (NTPS), schools are asked to provide a roster of eligible teachers that form the sampling frame for a Teacher Questionnaire. Schools in the 2020-21 collection were randomly assigned to receive either a blank roster or a pre-filled list of teachers, built from commercial data sources, and asked to make any appropriate corrections or updates. This presentation quantifies tradeoffs in respondent burden, data quality, and the downstream impacts of this form of dependent interviewing. For example, we compare roster response rates, the numbers of rostered staff and eligible rostered staff, and the overall impact of this tradeoff when surveying sampled teachers where schools are asked to verify pre-filled, extant data. View the presentation slides here.
How Should We Text You? Designing and Testing Text Messages for the 2021–22 Teacher Follow-Up Survey (TFS) and Principal Follow-Up Survey (PFS).
There is interest in adding text messaging as a contact and/or a response mode to many surveys. However, good mobile phone numbers are not always available, and there are constraints around texting people who have not opted into receiving text messages. As a result, there are unanswered questions about how to implement text messaging into a data collection strategy. The National Center for Education Statistics and Census Bureau have an opportunity to add a text messaging mode to the data collection strategy for the 2021–22 Teacher Follow-Up Survey (TFS) and Principal Follow-Up Survey (PFS). These are follow-up surveys administered to teachers and principals who complete the 2020–21 National Teacher and Principal Survey (NTPS). In the NTPS, respondents can consent to receive text messages for follow-up. In the upcoming TFS/PFS a sample of respondents will be assigned to participate in the survey by navigating to a web link embedded in the text messages or by answering the questions via two-way SMS. Prior to the fielding of the 2021–22 TFS/PFS, we conducted remote cognitive and usability testing of the messages, including using Qualtrics text messaging capabilities and mobile screensharing. We will discuss our methodology for testing and review participants' impressions of the text messages. Findings will help inform best practices as more surveys use text messaging in data collection. View the presentation slides here.
Yes, I consent to receive text messages: Conducting follow-up text surveys with principals and teachers.
The U.S. Department of Education’s National Teacher and Principal Survey (NTPS) collects data from schools, principals, and teachers. Select administrations are followed by the Principal Follow-up Survey (PFS) and Teacher Follow-up Survey (TFS) to measure staff attrition, that is, whether a principal or teacher is a stayer (same job at the same school), mover (at a different school), or leaver (no longer in the profession) during the following school year. The TFS also includes a longer survey for both current and former teachers. The NTPS asks responding principals and teachers to provide contact information, including cellphone numbers, and the 2020-21 NTPS asked respondents to check a box indicating “I consent to receive text messages for follow-up purposes only.” For the upcoming PFS and TFS, consenting principals and teachers may be contacted to complete a text message survey, and teachers may receive a link to complete their longer web surveys. This presentation discusses who consents to receive text messages, the experimental design of this texting operation, and evaluation metrics to determine whether employment status can be successfully collected by text message. View the presentation slides here.
Getting Beyond the Front Office is Half the Battle: Multiple Levels of Incentives in a Two-Stage School Establishment Survey.
Decreasing response rates and increasing data collection costs have forced survey organizations to consider new data collection strategies (Groves and Heeringa, 2006). For the 2017-18 data collection cycle, the National Teacher and Principal Survey (NTPS) explored the use of unconditional incentives in an effort to increase teacher response rates and overall sample balance. The incentives experiment was designed to test the effectiveness of incentivizing the school coordinators, who act as gatekeepers in making contact with teachers at their school, in addition to incentivizing the teachers. NTPS samples more than one teacher in a school, and the coordinator is a school staff member who is responsible for distributing the questionnaires to sampled teachers and encouraging them to respond.
This experiment occurred in two phases based on how early schools provided a list of teachers to sample from. In schools that provided a list early in the data collection operation, only a teacher incentive of $5, $10, or $0 was sent, where the teachers receiving $0 served as the control group. In schools that provided the teacher list late, both teachers and the school coordinator were eligible for an incentive of $5, $10, or $0. This experiment investigated the effects on teacher response rates when incentivizing the school-level coordinator, in conjunction with incentivizing the teachers directly. Cost-benefit analysis and R-indicators were also used to evaluate the effects of the various incentives on budget and data quality. View the presentation slides here.
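For context, the R-indicators mentioned above summarize how much estimated response propensities vary across the sample; a minimal sketch of one standard (unweighted) formulation is given below, where the propensities would typically come from a logistic model of response on frame characteristics. This is a generic definition, not necessarily the exact weighted version used in this work.

```latex
% R-indicator from estimated response propensities \hat{\rho}_i for the n sampled cases;
% values near 1 indicate propensities vary little across the sample (balanced response).
R(\hat{\rho}) = 1 - 2\,S(\hat{\rho}),
\qquad
S(\hat{\rho}) = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\hat{\rho}_i - \bar{\rho}\right)^2}
```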
The Path of Least Resistance: Changing Mode to Boost Response Rates in an Establishment Survey.
Sequential, mixed mode survey designs that begin with relatively inexpensive self-administered modes (for example, paper or web questionnaires) before transitioning to relatively more expensive modes that involve interviewers (for example, telephone or in-person interviews) may reduce total survey costs. However, respondents who are reluctant or otherwise difficult to collect data from may not respond without interviewer intervention. This research evaluates the impact of an early field intervention for hard-to-reach establishments. For the 2017-18 National Teacher and Principal Survey (NTPS), sponsored by the U.S. Department of Education's National Center for Education Statistics (NCES), sampled private schools were categorized as either "high priority" or "low priority" based on their response propensity and importance for reporting. High priority schools were randomly assigned to either be contacted by field representatives early in the data collection period ("early-to-field"), or to follow a sequential mixed mode design in which contact was first made by mail and e-mail, then by phone, and finally by in-person visits ("regular field"). This presentation evaluates the impact of the early-to-field treatment on priority schools by comparing the response rates for different NTPS survey components between schools assigned to each data collection path. In addition, the costs of the early-to-field and regular field operations and cost-per-completed case are estimated in order to better understand the data quality and financial trade-offs of early field operations in a mixed-mode survey design. View the presentation slides here.
Using Remote Cognitive Testing to Modify Questions about Education during Coronavirus Pandemic.
COVID-19 has had a significant impact on K-12 education and therefore has presented unique challenges for the measurement of education during a time in which capturing these changes is extremely important. The response of schools to the ongoing coronavirus pandemic has varied widely over time and by region, further complicating measurement. In response to this challenge, we modified some of our typical question development and testing approaches and used remote cognitive testing to develop and deploy questions on several education surveys. In this paper, we provide an overview of these efforts, which include quick remote cognitive testing with participants from geographically diverse areas. As part of a research project planned in fall 2019, we were conducting in-person cognitive testing of new content with school staff on virtual schooling for the National Teacher and Principal Survey when the pandemic began. We quickly pivoted to remote cognitive testing to adjust probes and develop new questions that capture what schools did for distance learning. The questions are currently being fielded for the 2020–21 school year. We further adapted these questions to be asked of parents for a different survey, the Current Population Survey School Enrollment Supplement, and then tested them with the parents of K-12 students in late May/June for fielding in October 2020. In both of these projects, we used iterative, remote cognitive testing to provide maximum flexibility. We also leveraged the ability to recruit geographically diverse participants via remote testing. In this research, we will share our lessons learned about crafting and testing questions to measure the new and dynamic phenomenon of widespread distance learning during the coronavirus pandemic. We will also give an overview of school staff and parental experiences with distance learning in the spring of 2020. View the presentation slides here.
What is the Average Daily Attendance at Your (Virtual) School? Understanding Data From Schools, Principals, and Teachers During a Pandemic.
The U.S. Department of Education's National Teacher and Principal Survey (NTPS) collects data from public and private schools, principals, and teachers about the state of education. The NTPS is a repeated cross-sectional survey, allowing for analysis of changes over time for questions that have been asked in multiple administrations. The collection is self-administered, and respondents are primarily contacted by mail and e-mail with in-person follow up. For the 2020-21 NTPS, the coronavirus pandemic introduced challenges in collecting data from respondents, asking relevant questions, interpreting survey items, and reporting meaningful results. Because many schools had staff working remotely, data collection approaches had to be rethought. Several new items were added to each questionnaire to capture school and teacher experiences during the spring of 2020, when schools began shutting down due to the coronavirus. The questionnaires also include concepts that are typically well-understood by respondents, but which may have different interpretations for schools operating in virtual or hybrid environments; for example, attendance or instructional time may be complicated concepts for schools that have changed their operating procedures during the 2020-21 school year. As a result, one additional item was added to each questionnaire to capture the respondent's status at the time of data collection (fully in-person, hybrid, or fully virtual), and cognitive interviews were conducted during data collection to assess how interpretations of otherwise standard items may have changed. This presentation will focus on the findings of the cognitive interviews and discuss how this information, combined with the new status question, will allow analysts to better contextualize changes over time and to better understand the educational impacts of the coronavirus pandemic. View the presentation slides here.
Better Late Than Never? The Use of An Adaptive Incentive with Nonrespondents.
This research looked at the effect on response rates of a targeted cash incentive offered late in the survey process. The National Teacher and Principal Survey, like many repeated surveys, has experienced declining response rates over time. In designing the 2017-18 NTPS collection, we incorporated two distinct incentive experiments. The first experiment utilized a traditional non-contingent cash incentive ($5 or $10) offered with the initial mailing of information on completing a teacher questionnaire online. A control group in this experiment received no incentive. The second experiment was a 'boost' or additional incentive offered at the third questionnaire mailing to subgroups selected because their response rates going into the third mailing were below reportable levels. The respondents selected for the treatment group were sent either $10 or $20 cash in the third mailing. A control group received no additional incentive. Thus, some respondents received no incentive, while others may have received a total of $30 in incentives across two mailings. This presentation will discuss how respondents were selected for the boost incentive and the effectiveness of the incentive across groups that had received a prior incentive and those who had not received a prior incentive. View the presentation slides here.
Who Works Here? Rostering School Staff with Vendor-Assisted Lists.
For surveys of employees, assistance is often needed from an establishment to provide a roster of eligible respondents. The National Teacher and Principal Survey (NTPS) has previously formed its teacher sampling frame from rosters provided by sampled schools and, when no roster is provided, by sampling from commercial data purchased from a vendor. For the 2017-18 NTPS, this strategy was expanded to include dependent rosters in order to decrease respondent burden and improve the accuracy of the vendor data; some sampled schools were sent rosters pre-populated with vendor data and asked to make any necessary changes (adding eligible teachers, removing ineligible or out of scope teachers, correcting details). For household surveys, previous research shows that listers may default to confirming a prepopulated list of housing units and fail to add missing or delete ineligible addresses (Eckman and Kreuter 2011). When rostering individuals within a household, respondents may use their own judgment rather than strictly adhering to inclusion and exclusion criteria (Tourangeau et al. 1997). However, there is limited research on how coverage error from rostering may affect establishment surveys. This analysis compares the number of teachers listed under each method to the number of teachers reported on the NTPS School Questionnaire and examines ineligibility rates among sampled teachers to evaluate the quality of each listing method. The behavior of schools that received prepopulated rosters is analyzed, to determine whether they made changes to their lists and, if so, the types of changes made (additions or deletions). Finally, response rates from teachers sampled from school-completed rosters, pre-populated rosters, and vendor data are compared to determine the benefits of using vendor lists and pre-populated rosters on overall teacher response rates. View the presentation slides here.
The Effects of Providing Incentives on Multiple Levels on Response and Data Quality.
Decreasing response rates and increasing data collection costs have forced survey organizations to consider new data collection strategies (Groves and Heeringa, 2006). For the 2017-18 data collection cycle, the National Teacher and Principal Survey (NTPS) explored the use of unconditional incentives in an effort to increase teacher response rates and overall sample balance. The teacher incentives experiment was designed to test the effectiveness of incentives sent to teachers and/or school-level coordinators. NTPS samples more than one teacher in a school, and the coordinator is a school staff member who helps get the questionnaires to teachers.
This experiment occurred in two phases based on how early schools provided a list of teachers to sample from. In schools that provided a list early in the data collection operation, only a teacher incentive of $5, $10, or $0 was sent, where the teachers receiving $0 served as the control group. In schools that provided the teacher list late or did not provide one at all, both teachers and a school-level coordinator were eligible for an incentive of $5, $10, or $0. This experiment investigated the effects on response when incentivizing teachers directly, incentivizing the school-level coordinator, and a combination of both. Cost-benefit analysis and R-indicators were also used to evaluate the effects of the various incentives on budget and data quality. View the presentation slides here (Note AAPOR credentials are required to view this PDF).
Using Predictive Models to Assign Treatment Groups for NTPS 2017–18 Teacher Incentives Experiment.
For the 2017–18 data collection cycle, the National Teacher and Principal Survey (NTPS) explored the use of unconditional incentives in an effort to increase teacher response rates and overall sample balance. The teacher incentives experiment was designed to test the effectiveness of incentives sent to teachers and/or school-level coordinators. The experiment was conducted in two phases, based on whether or not a school returned the Teacher Listing Form (TLF) early. NTPS samples more than one teacher in a school, and the coordinator is a school staff member who helps get the questionnaires to teachers. The teacher sampling process for NTPS is a two-stage design and is conducted on a flow basis. First, a school coordinator from a sampled school must return the TLF, which includes the names and subject matters for every teacher within the school. From the TLF, teachers are then sampled weekly for the teacher questionnaire. In an effort to equally disperse the teachers across all treatment groups, logistic modeling was used to create response propensity models to predict the likelihood that a school would return the TLF, which allows for teachers to then be sampled from that school. Results from the prior cycle of NTPS were used to inform the models, which utilized information known prior to data collection to predict which schools were likely to return the TLF. Based on the results of these models, schools were assigned to eight incentive treatment groups. View the presentation slides here.
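Below is a minimal Python sketch of the general idea, fitting a logistic response propensity model for TLF return and then spreading schools across eight treatment groups so each group covers the full propensity range. The predictors, the synthetic data, and the simple sort-and-cycle assignment rule are illustrative assumptions, not the actual NTPS specification.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_schools = 400

# Hypothetical school frame with illustrative predictors known before data collection.
schools = pd.DataFrame({
    "enrollment": rng.integers(100, 2000, size=n_schools),
    "urban": rng.integers(0, 2, size=n_schools),
    "returned_tlf_last_cycle": rng.integers(0, 2, size=n_schools),
})
# Hypothetical prior-cycle outcome (1 = returned the TLF) used to train the model.
returned_tlf = rng.integers(0, 2, size=n_schools)

propensity_model = LogisticRegression(max_iter=1000).fit(schools, returned_tlf)
schools["tlf_propensity"] = propensity_model.predict_proba(schools)[:, 1]

# Sort by predicted propensity and assign cyclically so that all eight incentive
# treatment groups contain schools from across the propensity distribution.
schools = schools.sort_values("tlf_propensity").reset_index(drop=True)
schools["treatment_group"] = schools.index % 8

print(schools.groupby("treatment_group")["tlf_propensity"].mean())
```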
Using Response Propensity Models to Equally Disperse 2nd Stage Sampled Cases across Incentive Treatment Groups.
Decreasing response rates and increasing data collection costs have forced survey organizations to consider new data collection strategies (Groves, 2006). Adaptive design has emerged as a framework for tailoring contact strategies to cases, including the use of incentives (Mercer, 2015). For the 2017-18 data collection cycle, the National Teacher and Principal Survey (NTPS) will be conducting a teacher incentives experiment in an effort to increase teacher response rates and overall sample balance. A combination of teacher incentives and school-level incentives will be assigned at the school level, meaning all teachers within one school will receive the same incentive amount. The experiment will also occur in two phases. In the first phase, only teachers will receive an incentive, and in the second phase, both teachers and a school-level contact will receive an incentive.
The teacher sampling process for NTPS is a two-stage design and is conducted on a flow basis. First, a school coordinator from a sampled school must return the Teacher Listing Form (TLF), which includes the names and subject matters for every teacher within the school. From the TLF, teachers are then sampled weekly for the teacher questionnaire. In an effort to equally disperse the teachers across all treatment groups, logistic modeling was used to calculate response propensities to predict the likelihood that a school coordinator will return the TLF, allowing for teachers to then be sampled from that school. Results from the prior cycle of NTPS will be used to inform the models, which will utilize information known prior to data collection to predict which schools are likely to return the TLF. Based on the results of these models, schools will be assigned to eight incentive treatment groups. View the presentation slides here (Note AAPOR credentials are required to view this PDF).
Evaluation of Vendor School and Teacher Lists for the 2015-16 National Teacher and Principal Survey.
This research analyzes the quality of commercial vendors as a potential sampling frame replacement or supplement for the National Teacher and Principal Survey (NTPS), a system of related school, principal, and teacher questionnaires sponsored by the National Center for Education Statistics (NCES) and collected by the U.S. Census Bureau. Sampled schools are asked to complete a School Questionnaire and Principal Questionnaire, as well as a Teacher Listing Form (TLF) that rosters eligible teachers and forms the sampling frame for the Teacher Questionnaire. Analysis first explores the coverage and eligibility rates from 3 different commercial vendors of school and teacher lists for the predecessor survey to the NTPS, followed by a detailed examination of the coverage, eligibility, match, and completeness rates of commercial teacher lists for the 2014-15 NTPS Pilot Test.
For the 2015–16 NTPS, teachers from schools that did not complete a TLF were sampled from commercial lists when vendor data were available. While teachers sampled from vendor lists were less likely to complete the teacher questionnaire than teachers sampled from school-reported rosters, the inclusion of teachers from vendor lists improved the overall survey response rate and will be continued in future NTPS administrations. View the presentation slides here.
Instructions in Self-administered Survey Questions: Do They Improve Data Quality or Just Make the Questionnaire Longer?
Pre-testing techniques utilized in the development of production self-administered questionnaires, such as cognitive interviewing, often identify items where respondents misinterpret or are unclear about the meaning of terms in a question. Typically, this finding results in a recommendation to add instructions to an item, which has the detrimental effect of lengthening the questionnaire. Previous experimental research has shown that instructions have an effect on the estimates when the instructions counter the way many people naturally tend to think about a concept. For example, an instruction to exclude sneakers from a count of shoes will reduce the estimate of shoes because many respondents tend to think of sneakers as shoes. In addition, previous research has shown that instructions placed before questions are more effective than those placed after. However, few studies have looked empirically at whether or not instructions that are the product of actual production pre-testing techniques are similarly effective or useful, and worth the extra length they create. Nor have many other factors been examined that might influence the effectiveness of instructions. To examine these issues further, we report on an experiment that was administered to a nationally representative sample by web. Production questions and instructions were selected from a national teacher survey. In addition, questions and instructions were intentionally created to counter teachers' natural conceptions of terms. These items were compared to a control group with no instructions. Utilizing a factorial experimental design, we also varied three factors that were predicted to alter the effectiveness of instructions: their location, format, and wording. Although the findings of this experiment are clearly generalizable to the web, arguably, these findings extend to mail surveys too. View the presentation slides here.
Hope Springs Eternal: Will a Probability Sample of Schools and Principals Respond by Web and Provide Email Addresses?
Meta-analyses have shown that response rates for Web surveys tend to be lower on average compared to other modes and that surveys offering paper and Web modes concurrently also result in lower response rates overall. Even a study of mode preferences showed that offering only a Web survey, while appealing to those who prefer that mode, lowered overall response rates when compared to a mail-only survey. Still, the Web is thought of as having advantages over paper that merit its continued consideration. Recent research in the American Community Survey demonstrated that if not given a choice, people respond by Web. Success stories such as this and the potential advantages of the Web continue to stoke the idea that the Web may yet become a viable survey mode. As people become more adept at using computers, they may become more likely to respond by Web. It follows, therefore, that school administrators and principals, who are more likely to have access to and be adept at using computers, may be earlier adopters of the Web. In this paper, we report on the results of a factorial experiment designed to determine if institutional respondents to three surveys (a teacher listing form, school survey, and principal survey) are indeed more likely to respond by Web than by mail. We address whether the advantages attributed to the Web manifest themselves in these surveys and examine respondent compositions by mode as potential indicators of non-response bias. Finally, administrators were asked in the teacher listing form to provide email addresses for their teachers, so that these teachers could later be contacted by email to participate in a Web survey. We report on the effect that asking for these email addresses has on response rates as well. View the presentation slides here.
Adaptive Design for the National Teacher Principal Survey.
Statistical agencies are frequently confronted with the trade-offs between timeliness (relevance) and accuracy. Waiting for the last responses and quality reviews can improve accuracy but delay production of the data sets and analyses, reducing their relevance to users. The National Center for Education Statistics has conducted the quadrennial Schools and Staffing Survey (SASS) since the 1980s. Beginning with the 2015-16 school year, SASS will be replaced with a new biennial National Teacher and Principal Survey (NTPS). As part of the design for the NTPS, we are reviewing response patterns and paradata collected during the 2011-12 SASS to develop an adaptive design for the new study. Adaptations may include when to switch data collection modes, when to stop overall data collection, and revisions to methods for contacting respondents. We will therefore simultaneously examine multiple components of Total Survey Error, including nonresponse bias, mode effects, and relevance (time lag from reference period to publication). This presentation will discuss the different approaches considered in the analysis and provide a framework for other studies considering adaptive design approaches. View the presentation slides here.
Integrating Administrative Data with Survey Data: A Total Survey Error Case Study using Education Data from the U.S. Department of Education.
No abstract available. View the presentation slides here.
Impact of Differential Incentive Amounts on Early and Final Survey Response Rates.
In order to determine an optimal incentive amount for the Beginning Teacher Longitudinal Study (BTLS), sponsored by the National Center for Education Statistics, an experimental study was carried out during the 2009–10 administration of BTLS to test the relationship between differential incentive amounts and response rates. The results showed that a higher incentive ($20 vs. $10) was associated with both higher early (within one month of the beginning of the collection and before the telephone follow-up) and final response rates. Roughly 2,000 teachers in the BTLS cohort were randomly assigned to one of two experimental groups: a $10 group or a $20 group. Teachers received the cash incentives in the mail around the same time they received the email linking to the online BTLS instrument. Comparisons were made between the two incentive groups on the number of interviews before the telephone follow-up date, the number of final interviews, and the number of completed surveys, using chi-square tests for association between incentive amounts and the different outcome variables. The results show that 49 percent of the teachers in the $10 group and 56 percent in the $20 group completed the survey or the required items of the survey by the telephone follow-up date. The chi-square test shows a significant relationship between the number of early study interviews and the incentive amount (chi-square = 10.3463, 1 d.f., p = .0013). By the end of the data collection, 86 percent of the participants in the $10 group and 90 percent in the $20 group were counted as study interviews. The chi-square test shows a significant relationship between the number of final study interviews and the incentive amount (chi-square = 7.6216, 1 d.f., p = .0058). However, the chi-square test shows no significant association between the completeness of the BTLS survey and the incentive amount. View the presentation slides here.
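For readers who want to see the form of the test, here is a minimal Python (scipy) sketch of a chi-square test of association like the ones reported above. The cell counts are illustrative, constructed from the reported percentages under an assumed group size of about 1,000 teachers each, so they will not reproduce the published test statistics exactly.

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 table: rows are incentive groups ($10, $20);
# columns are (completed by the telephone follow-up date, not completed).
# Assumes ~1,000 teachers per group and the reported 49% vs. 56% early response.
observed = [
    [490, 510],  # $10 group
    [560, 440],  # $20 group
]

chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.4f}, df = {dof}, p = {p_value:.4f}")
```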
Revision to Teacher Nonresponse Bias Analysis
To examine different methods of nonresponse bias analysis, the authors tested a revised nonresponse bias analysis using the 2007-08 Schools and Staffing Survey (SASS), sponsored by the National Center for Education Statistics (NCES), and compared the outcomes to the previous method used for the survey. The new analysis gave direct, quantifiable measurements of bias between respondents and the sample population using frame characteristics. Chi-square tests were used to compare the distribution of the characteristics between the respondent and sample populations. The previous method used a set of criteria to determine whether or not differences between the respondents and the sample frame were significant, but did not produce a quantifiable measure of bias. The authors found that the new method yielded similar results for nonresponse bias as the previous method but offered several improvements. The quantified measurements of bias allowed direct comparison between different frame characteristics, as well as the ability to summarize bias levels across the various frame characteristics utilized. The new method also facilitated comparisons of bias before and after nonresponse adjustments and could be completed in less time with greater efficiency. No presentation slides are available.
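To illustrate what a direct, quantifiable bias measurement of this kind can look like, a minimal sketch for one frame characteristic is given below; the notation is generic and may differ from the exact estimator the authors used.

```latex
% Estimated nonresponse bias for category c of a frame characteristic:
% \hat{p}_{r,c} is the proportion of respondents in category c and
% \hat{p}_{s,c} is the corresponding proportion in the full sample (frame).
\widehat{B}_c = \hat{p}_{r,c} - \hat{p}_{s,c},
\qquad
\widehat{RB}_c = \frac{\hat{p}_{r,c} - \hat{p}_{s,c}}{\hat{p}_{s,c}}
```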
Mode Effect Analysis: Paper respondents vs. Web respondents in the 2004–05 Teacher Follow-up Survey.
In order to address concerns that changes in future collection modes could affect the consistency of Teacher Follow-up Survey (TFS) estimates over time, the authors conducted a mode effect analysis on the 2004–05 administration. Although predominantly a paper collection, the 2004–05 sampling design incorporated a small web-collection component. Using the experiment data for secondary analysis, the authors tested the 2004–05 TFS estimates for the possibility of mode effects. The more recent 2008–09 TFS data collection used paper surveys mainly to convert nonrespondents and for a small group of teachers who did not provide an e-mail address and were not sent the web survey. This study aims to explore potential differences in teachers' responses between those who used the web questionnaire and those who opted for the paper questionnaire. The authors tested mode effects using six survey questions with different characteristics and levels of complexity. Selected questions with multiple items and multiple rating categories were transformed to be analyzed as estimated measures of differentiation. A two-stage Heckman-type instrumental variable (IV) regression model was used for these analyses. The first stage models whether teachers with certain characteristics were more prone to choose the web instrument than the paper instrument. The second stage of the model determines whether having used the web or paper instrument affected the quality of the survey responses, using the IV web-choice estimated from stage one. Regression results indicate that using the web-based instrument does not lead to lower quality or different survey responses compared to paper-based responses. Because the findings support the initial hypothesis that no mode effects are observed in the 2004–05 TFS, the authors conclude that changes to data collection methodologies in the recently collected web-based 2008–09 TFS are unlikely to create long-term inconsistency issues. No presentation slides are available.
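The two-stage structure described can be sketched generically as follows; the covariates, instrument, link function, and quality measure are placeholders, since the abstract does not give the exact specification.

```latex
% Stage 1: model the probability that teacher i chooses the web instrument,
% using covariates x_i and an instrument z_i excluded from stage 2.
\Pr(W_i = 1 \mid \mathbf{x}_i, z_i) = \Phi\!\left(\mathbf{x}_i'\boldsymbol{\gamma} + \delta z_i\right)

% Stage 2: regress a response-quality measure y_i (e.g., a differentiation score)
% on covariates and the stage-1 prediction \widehat{W}_i; \theta captures the mode effect.
y_i = \mathbf{x}_i'\boldsymbol{\beta} + \theta\,\widehat{W}_i + \varepsilon_i
```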
(Inter) Net Gain? Experiments to Increase Response.
This paper focuses on using monetary incentives to increase overall response and Internet response when both mail and Internet choices are offered. Previous research has indicated that offering an Internet option does not increase total response rates for mail out questionnaires, but there are methods that can increase Internet response over mail response. Given the advantages of an Internet administration, Census and NCES wanted to encourage Internet response for the 2004–05 Teacher Follow-up Survey (TFS). At the same time, incentives were offered to increase response. An experiment designed to assess the impact of incentives on overall response and on Internet response was embedded into the administration of the TFS. The experiment looked at three different Internet treatments: 1) initially providing only the Internet option, 2) providing the Internet option initially and informing respondents that a paper questionnaire is forthcoming, and 3) no Internet option. Half of each of these groups was provided a $10 gift card incentive at the time of first contact. This paper will compare the relative impact of each method on the response rates, and make recommendations for other surveys interested in encouraging Internet response and/or using pre-paid incentives. View the presentation slides here.
Too Much of a Good Thing? Working Through Establishment Gatekeepers.
In the Schools and Staffing Survey (SASS), school districts (Local Education Agencies, LEAs) function as gatekeepers for the schools within them. An experiment was conducted during the 2003–2004 SASS in three Census Bureau Regional Offices (ROs) to determine the best way to handle district contacts. Approximately half of the school districts in each office were contacted by phone before the survey was conducted to find out what information was needed prior to approving the survey. If information or formal application was required, it was prepared and sent to the district shortly after the call. A standard pre-notice letter was sent to the other half of districts at the start of data collection. This paper reports on the impact that pre-contacting districts has on school response and makes recommendations for handling establishment gatekeepers. View the presentation slides here.