Statistical Standards Program
Table of Contents

Introduction
1. Development of Concepts and Methods
   1-1 Initial Planning of Surveys
   1-2 Publication and Production Planning
   1-3 Computation of Response Rates
   1-4 Codes and Abbreviations
   1-5 Defining Race and Ethnicity Data
   1-6 Discretionary Grant Descriptions
2. Planning and Design of Surveys
3. Collection of Data
4. Processing and Editing of Data
5. Analysis of Data / Production of Estimates or Projections
6. Establishment of Review Procedures
7. Dissemination of Data
Glossary
Appendix A
Appendix B
Appendix C
Appendix D
Publication information
DEVELOPMENT OF CONCEPTS AND METHODS
SUBJECT: COMPUTATION AND REPORTING OF RESPONSE RATES

NCES STANDARD: 1-3

PURPOSE: To ensure that response rates used to evaluate survey estimates are computed consistently across all NCES surveys. To calculate and report response rates that measure the proportion of the sample frame that is represented by the responding units in each study.

KEY TERMS: cross-sectional, base weight, estimation, frame, item nonresponse, longitudinal, overall unit nonresponse, probability of selection, required response items, response rates, stage of data collection, strata, substitution, total nonresponse, unit nonresponse, and wave.
GUIDELINE 1-3-1A: Unweighted response rates may be used for monitoring field operations (see Standard 3-3).

STANDARD 1-3-2: Unit response rates (RRU) are calculated as the ratio of the weighted number of completed interviews (I) to the weighted number of in-scope sample cases (AAPOR, 2000). There are a number of different categories of cases that comprise the total number of in-scope cases:

I = weighted number of completed interviews;
R = weighted number of refused cases;
O = weighted number of eligible sample units not responding for reasons other than refusal;
NC = weighted number of noncontacted sample units known to be eligible; and
U = weighted number of sample units of unknown eligibility, of which an estimated proportion (e) are eligible.

The unit response rate represents a composite of these components:

RRU = I / (I + R + O + NC + e(U))

EXAMPLE: In a school-based survey, the numerator of the unit response rate is the number of responding schools. The denominator includes the number of responding schools plus the summation of the number of schools that refused to participate, the number of eligible schools that were nonrespondents for reasons other than refusal, and an estimate of the number of eligible schools from those with unknown eligibility. Note that in this school-based survey example, there are no cases reported in the category for the number of eligible schools that were not successfully contacted. In this case, eligibility can only be determined by contacting a respondent for the sampled school.
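The unit response rate computation can be sketched as follows. This is a minimal illustration, not NCES code; the weighted category totals below are hypothetical, using the categories named in the school-based example above: completed interviews (I), refusals (R), other eligible nonrespondents (O), eligible noncontacts (NC), and unknown-eligibility cases (U) with an estimated eligible proportion e.

```python
# Sketch of the weighted unit response rate. All counts are hypothetical
# weighted totals, not drawn from any actual NCES survey.

def unit_response_rate(I, R, O, NC, U, e):
    """RRU = I / (I + R + O + NC + e*U), using weighted case counts."""
    return I / (I + R + O + NC + e * U)

# 640 weighted respondents, 120 weighted refusals, 40 other eligible
# nonrespondents, no noncontacts, and 100 weighted cases of unknown
# eligibility, half of which are estimated to be eligible.
rru = unit_response_rate(I=640, R=120, O=40, NC=0, U=100, e=0.5)
print(round(100 * rru, 1))  # response rate as a percentage
```

Note that the estimated proportion e scales only the unknown-eligibility cases; all other denominator categories are already known to be eligible.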
The overall unit response rate for a cross-sectional analysis (RROC) is the product of the unit response rates at each stage of data collection:

RROC = RRU1 x RRU2 x ... x RRUK

Where K = the number of stages and C denotes cross-sectional. There may be instances where fully accurate, current year frame data are available for all cases at each stage of a survey; in that case, the estimation of overall response rates could be improved. However, in the absence of current year frame data (as is usually the case), such improvements are not possible and the above formula should be used.
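Assuming the overall cross-sectional rate is the product of the per-stage unit response rates across the K stages (consistent with the product formulation in Guideline 1-3-4A), the computation reduces to a single multiplication. The stage rates below are illustrative values only.

```python
# Hedged sketch: overall cross-sectional response rate as the product
# of per-stage unit response rates. Stage rates are illustrative.
from math import prod

stage_rates = [0.90, 0.85, 0.80]  # e.g., district, school, teacher stages
rro_c = prod(stage_rates)
print(round(100 * rro_c, 1))  # 61.2
```

Even with fairly high rates at each individual stage, the overall rate erodes quickly as stages multiply, which is why multi-stage designs report the overall rate rather than any single stage's rate.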
The overall unit response rate for a longitudinal analysis (RROL) is computed as:

RROL = PRRUi x RRUKJ x [IL / (IL + W)]

Where:

K = the last stage of data collection used in the analysis;
J = the last wave in the analysis;
IL = the weighted number of responding cases common to all waves in the analysis;
W = the weighted number of respondents to the last wave in the analysis who were nonrespondents in at least one of the preceding waves in the analysis;
RRUKJ = the unit response rate for the last stage of data collection (K) at the last wave (J); and
PRRUi = the product of the unit response rates for all but the last stage of data collection.
GUIDELINE 1-3-4A: The product of the unit response rates across all stages and waves used in an analysis is approximately equal to RROL.
In longitudinal analyses, the numerator of an item response rate includes cases that have data available for all waves included in the analysis, and the denominator includes the number of respondents eligible to respond in all waves included in the analysis. In the case of constructed variables, the numerator includes cases that have available data for the full set of items required to construct the variable, and the denominator includes all respondents eligible to respond to all items in the constructed variable.

EXAMPLE: In a survey of postsecondary faculty, while all respondents are asked to report the number of hours spent teaching classes per week, only those who report actually teaching classes are asked about the number of hours spent teaching remedial classes (Ix). In this case, the denominator of the item response rate excludes faculty who do not teach classes (I - Vx). In the case of a longitudinal analysis, when all faculty are followed up in the next year to monitor time spent teaching remedial classes, the numerator of the item response rate for this variable is the number of faculty who responded to this variable in both years. The denominator includes all who were asked in both years. Faculty job satisfaction is measured using a constructed variable that is the average of three separate items: satisfaction with professional development, satisfaction with administration, and satisfaction with teaching assignment. Only full-time faculty members are eligible to answer the satisfaction items. The numerator includes all full-time faculty who answered all three satisfaction items, and the denominator includes all full-time faculty who completed a faculty questionnaire.
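The faculty example's item response rate logic can be sketched as a ratio of respondents who answered an item (Ix) to respondents eligible to answer it (Vx). The counts and function name below are hypothetical illustrations, not values from any actual faculty survey.

```python
# Illustrative sketch of an item response rate: respondents who answered
# item x (Ix) over respondents eligible to answer item x (Vx).
# All counts are hypothetical.

def item_response_rate(Ix, Vx):
    """Item response rate for a single item, using weighted counts."""
    return Ix / Vx

# Of 500 responding faculty, 400 teach classes and are therefore eligible
# for the remedial-teaching item; 360 of those eligible answer it.
# The 100 non-teaching faculty (I - Vx) are excluded from the denominator.
rri = item_response_rate(Ix=360, Vx=400)
print(round(100 * rri, 1))  # 90.0
```

The key design point is the denominator: ineligible respondents are removed before dividing, so item nonresponse is measured only among those actually asked the item.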
EXAMPLE: In the event a random supplemental sample is fielded, all cases, both original and supplemental, are included in the response rate. Assume that six schools were sampled from a stratum, each with a base weight of 10. Four are respondents and two are nonrespondents. In addition, a supplemental sample of two schools was sampled from the stratum and fielded in an attempt to compensate for the low initial rate of response. Both of the cases from the supplemental sample are respondents. Taking the combined sample into account, each of the eight fielded schools has a base weight of 7.5 (the stratum total of 60 divided by the 8 fielded schools). The response rate then is:

((6 x 7.5) / (8 x 7.5)) x 100 = (45 / 60) x 100 = 75%.
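The supplemental-sample arithmetic above can be reproduced directly. This sketch uses the example's numbers: base weights are reallocated so the eight fielded schools carry the stratum's original total weight of 60 (six schools times 10), i.e., 7.5 each.

```python
# Reproduces the supplemental-sample example: both original and
# supplemental cases enter the response rate with reallocated weights.

respondent_weights = [7.5] * 6   # 4 original + 2 supplemental respondents
fielded_weights = [7.5] * 8      # all fielded schools, original + supplemental
rate = 100 * sum(respondent_weights) / sum(fielded_weights)
print(rate)  # 75.0
```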
EXAMPLE: As an example of the case where substitutes are used but not included in the response rate, assume that two schools were sampled from a stratum. One has a base weight of 20 and the other has a base weight of 10. The first school is a respondent, while the school with a base weight of 10 does not respond. However, a matched pair methodology was used to select two substitutes for each case in the original sample. After fielding the substitutes for the nonrespondent, the first substitute also did not respond, but the second substitute responded. Since the substitutes must be ignored, the response rate is:

(20 / (20 + 10)) x 100 = 66.67%.

In multiple stage sample designs where substitution occurs only at the first stage, the first stage response rate must be computed ignoring the substitutions. Response rates for other sampling stages are then computed as though no substitution occurred (i.e., in subsequent stages, cases from the substituted units are included in the computations). If multiple stage sample designs use substitution at more than one stage, then the substitutions must be ignored in the computation of the response rate at each stage where substitution is used.
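The substitution example can be reproduced the same way. The sketch below keeps only the original sample in both the numerator and denominator, as the standard requires; the responding substitute does not enter the calculation.

```python
# Reproduces the substitution example: substitutes are excluded from the
# response rate, so only the original sample is used.

original = [(20, True), (10, False)]  # (base weight, responded?)
rate = 100 * sum(w for w, responded in original if responded) \
           / sum(w for w, _ in original)
print(round(rate, 2))  # 66.67
```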
The American Association for Public Opinion Research. (2000). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. Ann Arbor, MI: AAPOR.