NTPS Methodological Research
Below is a collection of papers and presentations that summarize methodological research using NTPS data. Bolded authors indicate NCES staff.
- Spiegelman, M., Zotti, A., Cox, S. (2022). How Do You Describe Yourself in the Workplace? Asking Teachers about their Sexual Orientation and Gender Identity in a School Survey. Proceedings of the Federal Committee on Statistical Methodology 2022 Research and Policy Conference, Washington, DC.
- Merlin, J., Zotti, A. (2022). Is a Picture Worth a Thousand Words? Impact of Infographics on Response Rates in a Federal Survey. Proceedings of the Federal Committee on Statistical Methodology 2022 Research and Policy Conference, Washington, DC.
- Avenilla, L. (2022). Web Scraping to Improve Establishment Employee Surveys - the Case of the National Teacher and Principal Survey. Proceedings of the Federal Committee on Statistical Methodology 2022 Research and Policy Conference, Washington, DC.
- Spiegelman, M., Zotti, A. (2022). Is This Information Correct? Assessing the Burden and Data Quality Tradeoffs of Using Extant Data. Proceedings of the 2022 Federal Computer Assisted Survey Information Collection Workshops, Virtual.
- Katz, J., Kephart, K., Luck, J., Holzberg, J. (2021). How Should We Text You? Designing and Testing Text Messages for the 2021–22 Teacher Follow-Up Survey (TFS) and Principal Follow-Up Survey (PFS). Proceedings of the Federal Committee on Statistical Methodology 2021 Research and Policy Conference, Virtual.
- Spiegelman, M., Zotti, A. (2021). Yes, I consent to receive text messages: Conducting follow-up text surveys with principals and teachers. Proceedings of the Federal Committee on Statistical Methodology 2021 Research and Policy Conference, Virtual.
- Spiegelman, M., Varela, K., Zotti, A. (2021). The Path of Least Resistance: Changing Mode to Boost Response Rates in an Establishment Survey. Proceedings of the Sixth Annual International Conference on Establishments Statistics, Virtual.
- Kephart, K., Katz, J., Luck, J., Holzberg, J. (2021). Using Remote Cognitive Testing to Modify Questions about Education during Coronavirus Pandemic. Proceedings of the American Association for Public Opinion Research Annual Conference, Virtual.
- Spiegelman, M., Kephart, K., Katz, J. (2021). What is the Average Daily Attendance at Your (Virtual) School? Understanding Data From Schools, Principals, and Teachers During a Pandemic. Proceedings of the American Association for Public Opinion Research Annual Conference, Virtual.
- Zukerberg, A., Zotti, A., & Cox, S. (2019). Better Late Than Never? The Use of An Adaptive Incentive with Nonrespondents. Proceedings of the American Association for Public Opinion Research Annual Conference, Toronto, Canada.
- Spiegelman, M., Okon, A., Thomas, T., Escoto, S. (2019). Who Works Here? Rostering School Staff with Vendor-Assisted Lists. Proceedings of the American Association for Public Opinion Research Annual Conference, Toronto, Canada.
- Varela, K., Zotti, A. (2019). The Effects of Providing Incentives on Multiple Levels on Response and Data Quality. Proceedings of the American Association for Public Opinion Research Annual Conference, Toronto, Canada.
- Zotti, A. (2019). Using Predictive Models to Assign Treatment Groups for NTPS 2017-18 Teacher Incentives Experiment. Proceedings of the Annual Federal Computer Assisted Survey Information Collection Workshops, Washington, DC.
- Varela, K., Zotti, A. (2018). Using Response Propensity Models to Equally Disperse 2nd Stage Sampled Cases Across Incentive Treatment Groups. Proceedings of the American Association for Public Opinion Research Annual Conference, Denver, CO.
- Spiegelman, M., Sheppard, D., Brummet, Q. (2018). Evaluation of Vendor School and Teacher Lists for the 2015-16 National Teacher and Principal Survey. Proceedings of the Federal Committee on Statistical Methodology 2018 Research and Policy Conference, Washington, DC.
- Redline, C., Zukerberg, A., Owens, C., Ho, A. (2016). Instructions in Self-administered Survey Questions: Do They Improve Data Quality or Just Make the Questionnaire Longer? Proceedings of the American Association for Public Opinion Research Annual Conference, Austin, TX.
- Redline, C., Zukerberg, A., Rizzo, L., Riddles, M. (2015). Hope Springs Eternal: Will a Probability Sample of Schools and Principals Respond by Web and Provide Email Addresses? Proceedings of the American Association for Public Opinion Research Annual Conference, Hollywood, FL.
- Riddles, M., Marker, D., Rizzo, L., Wiley, E., Zukerberg, A. (2015). Adaptive Design for the National Teacher Principal Survey. Proceedings of the American Association for Public Opinion Research Annual Conference, Hollywood, FL.
- Grady, S., Hansen, R. (2014). Integrating Administrative Data with Survey Data: A Total Survey Error Case Study using Education Data from the U.S. Department of Education. Proceedings of the 2014 International Total Survey Error Workshop, Washington, DC.
- Wang, Y. and Hill, J. (2011). Impact of Differential Incentive Amounts on Early and Final Survey Response Rates. Proceedings of the American Association for Public Opinion Research Annual Conference, Phoenix, AZ.
- Battle, D. and Coopersmith, J. (2010). Revision to Teacher Nonresponse Bias Analysis. Proceedings of the American Association for Public Opinion Research Annual Conference, Chicago, IL.
- Aritomi, P. and Hill, J. (2010). Mode Effect Analysis: Paper respondents vs. Web respondents in the 2004–05 Teacher Follow-up Survey. Proceedings of the American Association for Public Opinion Research Annual Conference, Chicago, IL.
- Tourkin, S., Parmer, R., Cox, S., and Zukerberg, A. (2005). (Inter) Net Gain? Experiments to Increase Response. In JSM Proceedings, Survey Research Methods Section. Alexandria, VA: American Statistical Association. 3995–4001.
- Zukerberg, A., Soderborg, A., Warner, T., Parmer, R., and Tourkin, S. (2005). Too Much of a Good Thing? Working Through Establishment Gatekeepers. In JSM Proceedings, Survey Research Methods Section. Alexandria, VA: American Statistical Association. 4028–4030.
How Do You Describe Yourself in the Workplace? Asking Teachers about their Sexual Orientation and Gender Identity in a School Survey
Many federal surveys, including the Census Household Pulse Survey, National Crime Victimization Survey, National Health Interview Survey, and High School Longitudinal Study, ask respondents about their sexual orientation and/or their gender identity (SOGI). These questions allow respondents to accurately describe themselves, statistical agencies to fully describe their populations, and researchers to explore differential outcomes for these groups. Past research has found that respondents are comfortable with these questions when contacted at their home, but there is limited research to address whether respondents will answer these questions when contacted at their workplace. In 2022, the U.S. Department of Education's National Teacher and Principal Survey (NTPS) conducted a split-ballot test to determine whether public school teachers would be willing to answer SOGI questions when contacted at their schools. This presentation discusses the results of cognitive interviews conducted in 2018 and 2022 that asked school staff about question wording, their comfort answering these questions, and their colleagues' perceived comfort. For the split ballot test, we compare response rates and breakoff rates when SOGI questions are included or omitted. When SOGI questions are included, we examine item response rates against other demographic items, the performance of a web survey verification question when teachers report different answers for their gender and sex assigned at birth, and write-in responses and other respondent feedback. View the presentation slides here.
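The split-ballot comparison described above comes down to comparing response (or breakoff) proportions between two randomly assigned questionnaire versions. As a minimal sketch, a pooled two-proportion z-test with made-up counts (not NTPS results) might look like this:

```python
from math import erf, sqrt

def two_proportion_z(resp_a, n_a, resp_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p, normal approx.)."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    pooled = (resp_a + resp_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical counts: completed questionnaires in the arm that included
# SOGI items vs. the arm that omitted them.
z, p = two_proportion_z(820, 1000, 845, 1000)
print(f"z = {z:.3f}, p = {p:.3f}")
```

The same comparison would be repeated for breakoff rates and item response rates; a production analysis would also account for the survey's complex sample design rather than assuming simple random sampling.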
Is a Picture Worth a Thousand Words? Impact of Infographics on Response Rates in a Federal Survey
The decline in federal survey response rates in recent years is well documented and challenges agencies' ability to report reliable estimates on subpopulations of interest. Previous research has indicated that providing potential respondents with personalized data during recruitment efforts can increase response rates, though with mixed effects. However, most experiments only measured high-level response rates and lacked detail on the effect of personalization on the recruitment of respondents in key subgroups. The 2021-22 Teacher Follow-up Survey (TFS), a follow-up study to the 2020-21 National Teacher and Principal Survey (NTPS), conducted an infographic experiment to measure the effect of mailed general and personalized infographics on TFS response rates. Each infographic presents findings from prior collections with plain-language explanations of the importance and utility of the survey data. Response rates among some subgroups show mixed effects, with both increased and decreased response rates for different types of respondents. This presentation will investigate the effect of each infographic condition by selected teacher or school characteristics to determine whether the infographics were more persuasive for some subgroups than others. These results could have practical implications for survey organizations seeking to use personalized outreach materials as a recruitment strategy. View the presentation slides here.
Web Scraping to Improve Establishment Employee Surveys - the Case of the National Teacher and Principal Survey
As part of the Census Bureau's goal to use "big data" or "administrative records" to augment the reach of traditional data sources, researchers are utilizing data science methods to explore how alternative data sources can be used to supplement survey data collection. These data science methods include web scraping, or the process of systematically collecting information publicly available on the internet, as an alternative to collecting the same information directly from a survey respondent. Researchers at the Center for Optimization and Data Science (CODS) at the Census Bureau are using web scraping to collect teacher and associated school data from the websites of schools sampled for the National Teacher and Principal Survey (NTPS). Historically, selected schools are asked to provide a roster of teachers, which is used to select teachers to receive a teacher questionnaire. In the academic year after NTPS is conducted, teachers are also contacted to complete the Teacher Follow-up Survey (TFS), and schools are then asked to provide an updated teacher roster. With web scraping, our goal is to use the school's existing internet presence to obtain teacher roster information for both the NTPS and the TFS, thereby minimizing the response burden on school staff and helping the survey team adjust sampling methodology and recruitment efforts in a more timely and cost-effective manner. View the presentation slides here.
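The core extraction step of such a scraper can be sketched with the standard library alone. This toy parser pulls staff names out of a staff-directory page; the `class="staff-name"` markup is an invented stand-in, since real school sites vary widely and a production scraper needs per-site (or learned) extraction rules plus polite fetching that honors robots.txt:

```python
from html.parser import HTMLParser

class StaffDirectoryParser(HTMLParser):
    """Collect text inside elements tagged class="staff-name" (hypothetical markup)."""

    def __init__(self):
        super().__init__()
        self._in_name = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (attribute, value) pairs
        if ("class", "staff-name") in attrs:
            self._in_name = True

    def handle_endtag(self, tag):
        self._in_name = False

    def handle_data(self, data):
        if self._in_name and data.strip():
            self.names.append(data.strip())

# A toy staff-directory page; in practice the HTML would be fetched
# from a sampled school's website.
html = """
<ul>
  <li><span class="staff-name">A. Rivera</span> - Grade 4</li>
  <li><span class="staff-name">B. Chen</span> - Science</li>
</ul>
"""
parser = StaffDirectoryParser()
parser.feed(html)
print(parser.names)
```

The harvested names would then be deduplicated and matched against vendor and prior-roster data before any sampling use.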
Is This Information Correct? Assessing the Burden and Data Quality Tradeoffs of Using Extant Data
Providing survey respondents with pre-filled data, either from extant data sources or from previous rounds of a longitudinal survey, can reduce respondent burden. However, this may reduce data quality if respondents choose to satisfice or otherwise provide low-quality data. For the National Teacher and Principal Survey (NTPS), schools are asked to provide a roster of eligible teachers that form the sampling frame for a Teacher Questionnaire. Schools in the 2020-21 collection were randomly assigned to receive either a blank roster or a pre-filled list of teachers, built from commercial data sources, and asked to make any appropriate corrections or updates. This presentation quantifies tradeoffs in respondent burden, data quality, and the downstream impacts of this form of dependent interviewing. For example, we compare roster response rates, the numbers of rostered staff and eligible rostered staff, and the overall impact of this tradeoff when surveying sampled teachers where schools are asked to verify pre-filled, extant data. View the presentation slides here.
How Should We Text You? Designing and Testing Text Messages for the 2021–22 Teacher Follow-Up Survey (TFS) and Principal Follow-Up Survey (PFS).
There is interest in adding text messaging as a contact and/or a response mode to many surveys. However, good mobile phone numbers are not always available, and there are constraints around texting people who have not opted into receiving text messages. As a result, there are unanswered questions about how to implement text messaging into a data collection strategy. The National Center for Education Statistics and Census Bureau have an opportunity to add a text messaging mode to the data collection strategy for the 2021–22 Teacher Follow-Up Survey (TFS) and Principal Follow-Up Survey (PFS). These are follow-up surveys administered to teachers and principals who complete the 2020–21 National Teacher and Principal Survey (NTPS). In the NTPS, respondents can consent to receive text messages for follow-up. In the upcoming TFS/PFS a sample of respondents will be assigned to participate in the survey by navigating to a web link embedded in the text messages or by answering the questions via two-way SMS. Prior to the fielding of the 2021–22 TFS/PFS, we conducted remote cognitive and usability testing of the messages, including using Qualtrics text messaging capabilities and mobile screensharing. We will discuss our methodology for testing and review participants' impressions of the text messages. Findings will help inform best practices as more surveys use text messaging in data collection. View the presentation slides here.
Yes, I consent to receive text messages: Conducting follow-up text surveys with principals and teachers.
The U.S. Department of Education’s National Teacher and Principal Survey (NTPS) collects data from schools, principals, and teachers. Select administrations are followed by the Principal Follow-up Survey (PFS) and Teacher Follow-up Survey (TFS) to measure staff attrition, that is, whether a principal or teacher is a stayer (same job at the same school), mover (at a different school), or leaver (no longer in the profession) during the following school year. The TFS also includes a longer survey for both current and former teachers. The NTPS asks responding principals and teachers to provide contact information, including cellphone numbers, and the 2020-21 NTPS asked respondents to check a box indicating “I consent to receive text messages for follow-up purposes only.” For the upcoming PFS and TFS, consenting principals and teachers may be contacted to complete a text message survey, and teachers may receive a link to complete their longer web surveys. This presentation discusses who consents to receive text messages, the experimental design of this texting operation, and evaluation metrics to determine whether employment status can be successfully collected by text message. View the presentation slides here.
The Path of Least Resistance: Changing Mode to Boost Response Rates in an Establishment Survey
Sequential, mixed mode survey designs that begin with relatively inexpensive self-administered modes (for example, paper or web questionnaires) before transitioning to relatively more expensive modes that involve interviewers (for example, telephone or in-person interviews) may reduce total survey costs. However, respondents who are reluctant or otherwise difficult to collect data from may not respond without interviewer intervention. This research evaluates the impact of an early field intervention for hard-to-reach establishments. For the 2017-18 National Teacher and Principal Survey (NTPS), sponsored by the U.S. Department of Education's National Center for Education Statistics (NCES), sampled private schools were categorized as either "high priority" or "low priority" based on their response propensity and importance for reporting. High priority schools were randomly assigned to either be contacted by field representatives early in the data collection period ("early-to-field"), or to follow a sequential mixed mode design in which contact was first made by mail and e-mail, then by phone, and finally by in-person visits ("regular field"). This presentation evaluates the impact of the early-to-field treatment on priority schools by comparing the response rates for different NTPS survey components between schools assigned to each data collection path. In addition, the costs of the early-to-field and regular field operations and cost-per-completed case are estimated in order to better understand the data quality and financial trade-offs of early field operations in a mixed-mode survey design. View the presentation slides here.
Using Remote Cognitive Testing to Modify Questions about Education during the Coronavirus Pandemic
COVID-19 has had a significant impact on K-12 education and has therefore presented unique challenges for measurement at a time when capturing these changes is extremely important. The response of schools to the ongoing coronavirus pandemic has varied widely over time and by region, further complicating measurement. In response to this challenge, we modified some of our typical question development and testing approaches, remotely cognitively testing and then deploying questions on several education surveys. In this paper, we provide an overview of these efforts, which include quick remote cognitive testing with participants from geographically diverse areas. As part of a research project planned in fall 2019, we were conducting in-person cognitive testing of new content on virtual schooling with school staff for the National Teacher and Principal Survey when the pandemic began. We quickly pivoted to remote cognitive testing to adjust probes and develop new questions that capture what schools did for distance learning. The questions are currently being fielded for the 2020–21 school year. We further adapted these questions to be asked of parents for a different survey, the Current Population Survey School Enrollment Supplement, and then tested them with the parents of K-12 students in late May/June for fielding in October 2020. In both of these projects, we used iterative, remote cognitive testing to provide maximum flexibility. We also leveraged the ability to recruit geographically diverse participants via remote testing. In this research, we will share our lessons learned about crafting and testing questions to measure the new and dynamic phenomenon of widespread distance learning during the coronavirus pandemic. We will also give an overview of school staff and parental experiences with distance learning in the spring of 2020. View the presentation slides here.
What is the Average Daily Attendance at Your (Virtual) School? Understanding Data From Schools, Principals, and Teachers During a Pandemic.
The U.S. Department of Education's National Teacher and Principal Survey (NTPS) collects data from public and private schools, principals, and teachers about the state of education. The NTPS is a repeated cross-sectional survey, allowing for analysis of changes over time for questions that have been asked in multiple administrations. The collection is self-administered, and respondents are primarily contacted by mail and e-mail with in-person follow up. For the 2020-21 NTPS, the coronavirus pandemic introduced challenges in collecting data from respondents, asking relevant questions, interpreting survey items, and reporting meaningful results. Because many schools had staff working remotely, data collection approaches had to be rethought. Several new items were added to each questionnaire to capture school and teacher experiences during the spring of 2020, when schools began shutting down due to the coronavirus. The questionnaires also include concepts that are typically well understood by respondents but that may have different interpretations for schools operating in virtual or hybrid environments; for example, attendance and instructional time may be complicated concepts for schools that have changed their operating procedures during the 2020-21 school year. As a result, one additional item was added to each questionnaire to capture the respondent's status at the time of data collection (fully in-person, hybrid, or fully virtual), and cognitive interviews were conducted during data collection to assess how interpretations of otherwise standard items may have changed. This presentation will focus on the findings of the cognitive interviews and discuss how this information, combined with the new status question, will allow analysts to better contextualize changes over time and to better understand the educational impacts of the coronavirus pandemic. View the presentation slides here.
Better Late Than Never? The Use of an Adaptive Incentive with Nonrespondents
This research examined the effect on response rates of a targeted cash incentive offered late in the survey process. The National Teacher and Principal Survey, like many repeated surveys, has experienced declining response rates over time. In designing the 2017-18 NTPS collection, we incorporated two distinct incentive experiments. The first experiment utilized a traditional noncontingent cash incentive ($5 or $10) offered with the initial mailing of information on completing a teacher questionnaire online. A control group in this experiment received no incentive. The second experiment was a 'boost,' or additional incentive, offered at the third questionnaire mailing to subgroups selected because their response rates going into the third mailing were below reportable levels. The respondents selected for the treatment group were sent either $10 or $20 cash in the third mailing. A control group received no additional incentive. Thus, some respondents received no incentive, while others may have received a total of $30 in incentives across two mailings. This presentation will discuss how respondents were selected for the boost incentive and the effectiveness of the incentive across groups that had received a prior incentive and those that had not. View the presentation slides here.
Who Works Here? Rostering School Staff with Vendor-Assisted Lists
For surveys of employees, assistance is often needed from an establishment to provide a roster of eligible respondents. The National Teacher and Principal Survey (NTPS) has previously formed its teacher sampling frame from rosters provided by sampled schools and, when no roster is provided, by sampling from commercial data purchased from a vendor. For the 2017-18 NTPS, this strategy was expanded to include dependent rosters in order to decrease respondent burden and improve the accuracy of the vendor data; some sampled schools were sent rosters pre-populated with vendor data and asked to make any necessary changes (adding eligible teachers, removing ineligible or out-of-scope teachers, correcting details). For household surveys, previous research shows that listers may default to confirming a pre-populated list of housing units and fail to add missing or delete ineligible addresses (Eckman and Kreuter 2011). When rostering individuals within a household, respondents may use their own judgment rather than strictly adhering to inclusion and exclusion criteria (Tourangeau et al. 1997). However, there is limited research on how coverage error from rostering may affect establishment surveys. This analysis compares the number of teachers listed under each method to the number of teachers reported on the NTPS School Questionnaire and examines ineligibility rates among sampled teachers to evaluate the quality of each listing method. The behavior of schools that received pre-populated rosters is analyzed to determine whether they made changes to their lists and, if so, the types of changes made (additions or deletions). Finally, response rates from teachers sampled from school-completed rosters, pre-populated rosters, and vendor data are compared to determine the benefits of using vendor lists and pre-populated rosters on overall teacher response rates. View the presentation slides here.
The Effects of Providing Incentives on Multiple Levels on Response and Data Quality
Decreasing response rates and increasing data collection costs have forced survey organizations to consider new data collection strategies (Groves and Heeringa, 2006). For the 2017-18 data collection cycle, the National Teacher and Principal Survey (NTPS) explored the use of unconditional incentives in an effort to increase teacher response rates and overall sample balance. The teacher incentives experiment was designed to test the effectiveness of incentives sent to teachers and/or school-level coordinators. NTPS samples more than one teacher in a school, and the coordinator is a school staff member who helps get the questionnaires to teachers.
This experiment occurred in two phases based on how early schools provided a list of teachers to sample from. In schools that provided a list early in the data collection operation, only a teacher incentive of $5, $10, or $0 was sent, where the teachers receiving $0 served as the control group. In schools that provided the teacher list late or did not provide one at all, both teachers and a school-level coordinator were eligible for an incentive of $5, $10, or $0. This experiment investigated the effects on response when incentivizing teachers directly, incentivizing the school-level coordinator, and a combination of both. Cost-benefit analysis and R-indicators were also used to evaluate the effects of the various incentives on budget and data quality. View the presentation slides here (Note AAPOR credentials are required to view this PDF).
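The R-indicator used in this evaluation summarizes how evenly estimated response propensities are spread: R(rho) = 1 - 2*S(rho), where S(rho) is the standard deviation of the propensities, so R = 1 means perfectly balanced response and lower values mean a less representative responding sample. A minimal unweighted sketch with toy propensities (real applications use design-weighted variants and model-estimated propensities) is:

```python
from math import sqrt

def r_indicator(propensities):
    """R = 1 - 2*S(rho): population standard deviation of response propensities.

    Unweighted illustration; production versions weight by design weights.
    """
    n = len(propensities)
    mean = sum(propensities) / n
    variance = sum((p - mean) ** 2 for p in propensities) / n
    return 1 - 2 * sqrt(variance)

# Toy estimated propensities under two hypothetical incentive conditions.
uniform = [0.62, 0.60, 0.61, 0.63, 0.59]  # tightly clustered propensities
skewed = [0.90, 0.85, 0.40, 0.35, 0.50]   # widely spread propensities

print(r_indicator(uniform))  # close to 1: balanced response
print(r_indicator(skewed))   # noticeably lower: unbalanced response
```

Comparing R-indicators across incentive conditions shows whether an incentive improved sample balance rather than merely raising the overall response rate.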
Using Predictive Models to Assign Treatment Groups for the NTPS 2017-18 Teacher Incentives Experiment
For the 2017–18 data collection cycle, the National Teacher and Principal Survey (NTPS) explored the use of unconditional incentives in an effort to increase teacher response rates and overall sample balance. The teacher incentives experiment was designed to test the effectiveness of incentives sent to teachers and/or school-level coordinators. The experiment was conducted in two phases, based on whether or not a school returned the Teacher Listing Form (TLF) early. NTPS samples more than one teacher in a school, and the coordinator is a school staff member who helps get the questionnaires to teachers. The teacher sampling process for NTPS is a two-stage design and is conducted on a flow basis. First, a school coordinator from a sampled school must return the TLF, which includes the names and subject matters of every teacher within the school. From the TLF, teachers are then sampled weekly for the teacher questionnaire. In an effort to disperse teachers equally across all treatment groups, logistic response propensity models were used to predict the likelihood that a school would return the TLF, allowing teachers to then be sampled from that school. Results from the prior cycle of NTPS were used to inform the models, which utilized information known prior to data collection to predict which schools were likely to return the TLF. Based on the results of these models, schools were assigned to eight incentive treatment groups. View the presentation slides here.
Using Response Propensity Models to Equally Disperse 2nd Stage Sampled Cases across Incentive Treatment Groups.
Decreasing response rates and increasing data collection costs have forced survey organizations to consider new data collection strategies (Groves, 2006). Adaptive design has emerged as a framework for tailoring contact strategies to cases, including the use of incentives (Mercer, 2015). For the 2017-18 data collection cycle, the National Teacher and Principal Survey (NTPS) will be conducting a teacher incentives experiment in an effort to increase teacher response rates and overall sample balance. A combination of teacher incentives and school-level incentives will be assigned at the school level, meaning all teachers within one school will receive the same incentive amount. The experiment will also occur in two phases. In the first phase, only teachers will receive an incentive, and in the second phase, both teachers and a school-level contact will receive an incentive.
The teacher sampling process for NTPS is a two-stage design and is conducted on a flow basis. First, a school coordinator from a sampled school must return the Teacher Listing Form (TLF), which includes the names and subject matters for every teacher within the school. From the TLF, teachers are then sampled weekly for the teacher questionnaire. In an effort to equally disperse the teachers across all treatment groups, logistic modeling was used to calculate response propensities to predict the likelihood that a school coordinator will return the TLF, allowing for teachers to then be sampled from that school. Results from the prior cycle of NTPS will be used to inform the models, which will utilize information known prior to data collection to predict which schools are likely to return the TLF. Based on the results of these models, schools will be assigned to eight incentive treatment groups. View the presentation slides here (Note AAPOR credentials are required to view this PDF).
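Once each school has a modeled propensity of returning the TLF, the dispersion step amounts to spreading the propensity distribution evenly across treatment groups. The abstract does not spell out the assignment rule, so the sketch below uses one simple option: sort schools by estimated propensity and deal them out in a serpentine pattern, with toy data standing in for real model output:

```python
def assign_balanced_groups(schools, propensities, n_groups=8):
    """Spread estimated response propensities evenly across treatment groups.

    Sorts schools by propensity, then assigns them in a serpentine
    (boustrophedon) pattern so each group receives low, middle, and high
    propensity cases. One simple balancing rule; not necessarily the one
    used in production NTPS processing.
    """
    order = sorted(range(len(schools)), key=lambda i: propensities[i])
    groups = [[] for _ in range(n_groups)]
    for rank, i in enumerate(order):
        pass_num, pos = divmod(rank, n_groups)
        g = pos if pass_num % 2 == 0 else n_groups - 1 - pos  # reverse odd passes
        groups[g].append(schools[i])
    return groups

# Toy example: 12 schools with modeled probabilities of returning the TLF.
schools = [f"school_{i:02d}" for i in range(12)]
props = [0.15, 0.80, 0.55, 0.30, 0.95, 0.42, 0.67, 0.22, 0.71, 0.10, 0.88, 0.49]
groups = assign_balanced_groups(schools, props, n_groups=4)
for g in groups:
    print(g)
```

The serpentine pass keeps the mean propensity of each group close, so differences in observed response rates across incentive groups can be attributed to the treatments rather than to unequal baseline response propensity.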
Evaluation of Vendor School and Teacher Lists for the 2015-16 National Teacher and Principal Survey.
This research analyzes the quality of commercial vendors as a potential sampling frame replacement or supplement for the National Teacher and Principal Survey (NTPS), a system of related school, principal, and teacher questionnaires sponsored by the National Center for Education Statistics (NCES) and collected by the U.S. Census Bureau. Sampled schools are asked to complete a School Questionnaire and Principal Questionnaire, as well as a Teacher Listing Form (TLF) that rosters eligible teachers and forms the sampling frame for the Teacher Questionnaire. Analysis first explores the coverage and eligibility rates from three different commercial vendors of school and teacher lists for the predecessor survey to the NTPS, followed by a detailed examination of the coverage, eligibility, match, and completeness rates of commercial teacher lists for the 2014-15 NTPS Pilot Test.
For the 2015–16 NTPS, teachers from schools that did not complete a TLF were sampled from commercial lists when vendor data were available. While teachers sampled from vendor lists were less likely to complete the teacher questionnaire than teachers sampled from school-reported rosters, the inclusion of teachers from vendor lists improved the overall survey response rate and will be continued in future NTPS administrations. View the presentation slides here.
Instructions in Self-administered Survey Questions: Do They Improve Data Quality or Just Make the Questionnaire Longer?
Pre-testing techniques utilized in the development of production self-administered questionnaires, such as cognitive interviewing, often identify items where respondents misinterpret or are unclear about the meaning of terms in a question. Typically, this finding results in a recommendation to add instructions to an item, which has the detrimental effect of lengthening the questionnaire. Previous experimental research has shown that instructions have an effect on the estimates when the instructions counter the way many people naturally tend to think about a concept. For example, an instruction to exclude sneakers from a count of shoes will reduce the estimate of shoes because many respondents tend to think of sneakers as shoes. In addition, previous research has shown that instructions placed before questions are more effective than those placed after. However, few studies have looked empirically at whether or not instructions that are the product of actual production pre-testing techniques are similarly effective or useful, and worth the extra length they create. Nor have many other factors been examined that might influence the effectiveness of instructions. To examine these issues further, we report on an experiment that was administered to a nationally representative sample by web. Production questions and instructions were selected from a national teacher survey. In addition, questions and instructions were intentionally created to counter teachers' natural conceptions of terms. These items were compared to a control group with no instructions. Utilizing a factorial experimental design, we also varied three factors that were predicted to alter the effectiveness of instructions: their location, format, and wording. Although the findings of this experiment are clearly generalizable to the web, arguably, these findings extend to mail surveys too. View the presentation slides here.
Hope Springs Eternal: Will a Probability Sample of Schools and Principals Respond by Web and Provide Email Addresses?
Meta-analyses have shown that response rates for Web surveys tend to be lower on average compared to other modes and that surveys that offer paper and Web modes concurrently also result in lower response rates overall. Even a study of mode preferences showed that offering only a Web survey, while appealing to those who prefer that mode, lowered overall response rates when compared to a mail-only survey. Still, the Web is thought of as having advantages over paper that merit its continued consideration. Recent research in the American Community Survey demonstrated that if not given a choice, people respond by Web. Success stories such as this and the potential advantages of the Web continue to stoke the idea that the Web may yet become a viable survey mode. As people become more adept at using computers, they may become more likely to respond by Web. It follows, therefore, that school administrators and principals, who are more likely to have access to and be adept at using computers, may be earlier adopters of the Web. In this paper, we report on the results of a factorial experiment designed to determine if institutional respondents to three surveys (a teacher listing form, a school survey, and a principal survey) are indeed more likely to respond by Web than by mail. We address whether the advantages attributed to the Web manifest themselves in these surveys and examine respondent compositions by mode as potential indicators of nonresponse bias. Finally, administrators were asked in the teacher listing form to provide email addresses for their teachers, so that these teachers could later be contacted by email to participate in a Web survey. We report on the effect that asking for these email addresses has on response rates as well. View the presentation slides here.
Statistical agencies are frequently confronted with trade-offs between timeliness (relevance) and accuracy. Waiting for the last responses and quality reviews can improve accuracy but delays production of the data sets and analyses, reducing their relevance to users. The National Center for Education Statistics has conducted the quadrennial Schools and Staffing Survey (SASS) since the 1980s. Beginning with the 2015–16 school year, SASS will be replaced with a new biennial National Teacher and Principal Survey (NTPS). As part of the design for the NTPS, we are reviewing response patterns and paradata collected during the 2011–12 SASS to develop an adaptive design for the new study. Adaptations may include when to switch data collection modes, when to stop overall data collection, and revisions to methods for contacting respondents. We will therefore simultaneously examine multiple components of Total Survey Error, including nonresponse bias, mode effects, and relevance (time lag from reference period to publication). This presentation will discuss the different approaches considered in the analysis and provide a framework for other studies considering adaptive design approaches. View the presentation slides here.
Integrating Administrative Data with Survey Data: A Total Survey Error Case Study using Education Data from the U.S. Department of Education.
No abstract available. View the presentation slides here.
In order to determine an optimal incentive amount for the Beginning Teacher Longitudinal Study (BTLS), sponsored by the National Center for Education Statistics, an experimental study was carried out during the 2009–10 administration of BTLS to test the relationship between differential incentive amounts and response rates. The results showed that a higher incentive ($20 vs. $10) was associated with both higher early (one month after the beginning of the collection and before the telephone follow-up) and final response rates. Roughly 2,000 teachers in the BTLS cohort were randomly assigned to one of two experimental groups: a $10 group or a $20 group. Teachers received the cash incentives by mail around the same time they received the email linking to the online BTLS instrument. Comparisons were made between the two incentive groups on the number of interviews before the telephone follow-up date, the number of final interviews, and the number of completed surveys, using chi-square tests for association between incentive amounts and the different outcome variables. The results show that 49 percent of the teachers in the $10 group and 56 percent in the $20 group completed the survey or the required items of the survey by the telephone follow-up date. The chi-square test shows a significant relationship between the number of early study interviews and the incentive amount (chi-square = 10.3463, 1 d.f., p = .0013). By the end of the data collection, 86 percent of the participants in the $10 group and 90 percent in the $20 group were counted as study interviews. The chi-square test shows a significant relationship between the number of final study interviews and the incentive amount (chi-square = 7.6216, 1 d.f., p = .0058). However, the chi-square test shows no significant association between the completeness of the BTLS survey and the incentive amount. View the presentation slides here.
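A comparison of this kind can be run as a standard chi-square test of independence on a 2×2 table. The sketch below uses illustrative counts (assuming roughly 1,000 teachers per group, consistent with the percentages quoted above), not the actual BTLS tallies, so the statistic will differ slightly from the published value.

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 table: rows are incentive groups, columns are
# (responded by follow-up date, did not respond). Counts are assumed,
# based on ~1,000 teachers per group at 49% and 56% early response.
table = [
    [490, 510],  # $10 group
    [560, 440],  # $20 group
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.4f}, {dof} d.f., p = {p:.4f}")
```

With these assumed counts the association is significant at conventional levels, mirroring the direction of the published result.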
To examine different methods of nonresponse bias analysis, the authors tested a revised nonresponse bias analysis using the 2007–08 Schools and Staffing Survey (SASS), sponsored by the National Center for Education Statistics (NCES), and compared the outcomes to the previous method used for the survey. The new analysis gave direct, quantifiable measurements of bias between respondents and the sample population using frame characteristics. Chi-square tests were used to compare the distribution of the characteristics between the respondent and sample populations. The previous method used a set of criteria to determine whether differences between the respondents and the sample frame were significant, but did not produce a quantifiable measure of bias. The authors found that the new method yielded similar results for nonresponse bias as the previous method but offered several improvements. The quantified measurements of bias allowed direct comparison between different frame characteristics, as well as the ability to summarize bias levels across the various frame characteristics utilized. The new method also facilitated comparisons of bias before and after nonresponse adjustments and could be completed in less time with greater efficiency. No presentation slides are available.
Mode Effect Analysis: Paper respondents vs. Web respondents in the 2004–05 Teacher Follow-up Survey.
In order to address concerns that changes in future collection modes could impact the consistency of Teacher Follow-up Survey (TFS) estimates over time, the authors conducted a mode effect analysis on the 2004–05 administration. Although predominantly a paper collection, the 2004–05 sampling design incorporated a small web-collection component. Using the experimental data for secondary analysis, the authors tested the 2004–05 TFS estimates for the possibility of mode effects. The more recent 2008–09 TFS data collection used paper surveys mainly to convert nonrespondents and for a small group of teachers who did not provide an e-mail address and were not sent the web survey. This study aims to explore potential differences in teachers' responses between those who used the web questionnaire and those who opted for the paper questionnaire. The authors tested mode effects using six survey questions with different characteristics and levels of complexity. Selected questions with multiple items and multiple rating categories were transformed to be analyzed as estimated measures of differentiation. A two-stage Heckman-type instrumental variable (IV) regression model was used for these analyses. The first stage models whether teachers with certain characteristics were more prone to choose the web instrument than the paper instrument. The second stage of the model determines whether having used the web or paper instrument affected the quality of the survey responses, using the IV web-choice estimated from stage one. Regression results indicate that using the web-based instrument does not lead to lower quality or different survey responses compared to paper-based responses. Because the findings support the initial hypothesis that no mode effects are observed in the 2004–05 TFS, the authors conclude that changes to data collection methodologies in the recently collected web-based 2008–09 TFS are unlikely to create long-term inconsistency issues.
No presentation slides are available.
This paper focuses on using monetary incentives to increase overall response and Internet response when both mail and Internet choices are offered. Previous research has indicated that offering an Internet option does not increase total response rates for mail-out questionnaires, but there are methods that can increase Internet response over mail response. Given the advantages of Internet administration, the Census Bureau and NCES wanted to encourage Internet response for the 2004–05 Teacher Follow-up Survey (TFS). At the same time, incentives were offered to increase response. An experiment designed to assess the impact of incentives on overall response and on Internet response was embedded into the administration of the TFS. The experiment looked at three different Internet treatments: 1) initially providing only the Internet option, 2) providing the Internet option initially and informing respondents that a paper questionnaire is forthcoming, and 3) no Internet option. Half of each of these groups was provided a $10 gift card incentive at the time of first contact. This paper compares the relative impact of each method on response rates and makes recommendations for other surveys interested in encouraging Internet response and/or using pre-paid incentives. View the presentation slides here.
In the Schools and Staffing Survey (SASS), school districts (Local Education Agencies, or LEAs) function as gatekeepers for the schools within them. An experiment was conducted during the 2003–04 SASS in three Census Bureau Regional Offices (ROs) to determine the best way to handle district contacts. Approximately half of the school districts in each office were contacted by phone before the survey was conducted to find out what information was needed prior to approving the survey. If information or a formal application was required, it was prepared and sent to the district shortly after the call. A standard pre-notice letter was sent to the other half of the districts at the start of data collection. This paper reports on the impact that pre-contacting districts has on school response and makes recommendations for handling establishment gatekeepers. View the presentation slides here.