Statistical Standards
Standard 1-3: Computation of Response Rates



PURPOSE: To ensure that response rates used to evaluate survey estimates are computed consistently across all NCES surveys. To calculate and report response rates that measure the proportion of the sample frame that is represented by the responding units in each study.

KEY TERMS: cross-sectional, base weight, estimation, frame, item nonresponse, longitudinal, overall unit nonresponse, probability of selection, required response items, response rates, stage of data collection, strata, substitution, total nonresponse, unit nonresponse, and wave.

STANDARD 1-3-1: All response rates must be calculated using the sample base weights (i.e., the inverse of the probability of selection) when weighting is employed. Report the weighted unit response rates for each stage of data collection (e.g., schools, students, teachers, administrators), as well as the overall unit response rates. Report the range of total response rates for items included in each publication. Also, report specific item and total response rates when item response rates fall below 70 percent (see Standards 2-1 and 2-2 for response rates and survey design issues, Standard 3-2 for methods of achieving acceptable response rates, and Standard 7-2 for response rate reporting requirements).

    GUIDELINE 1-3-1A: Unweighted response rates may be used for monitoring field operations (see Standard 3-3).

STANDARD 1-3-2: Unit response rates (RRU) are calculated as the ratio of the weighted number of completed interviews (I) to the weighted number of in-scope sample cases (AAPOR, 2000). There are a number of different categories of cases that comprise the total number of in-scope cases:

I = weighted number of completed interviews;
R = weighted number of refused interview cases;
O = weighted number of eligible sample units not responding for reasons other than refusal;
NC = weighted number of noncontacted sample units known to be eligible;
U = weighted number of sample units of unknown eligibility, with no interview; and
e = estimated proportion of sample units of unknown eligibility that are eligible.

The unit response rate represents a composite of these components:

RRU = I / (I + R + O + NC + (e × U))

    EXAMPLE: In a school-based survey, the numerator of the unit response rate is the number of responding schools. The denominator includes the number of responding schools plus the summation of the number of schools that refused to participate, the number of eligible schools that were nonrespondents for reasons other than refusal, and an estimate of the number of eligible schools from those with unknown eligibility. Note that in this school-based survey example, there are no cases reported in the category for the number of eligible schools that were not successfully contacted. In this case, eligibility can only be determined by contacting a respondent for the sampled school.
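The unit response rate computation above can be sketched in Python; the function name and the illustrative counts are hypothetical, not part of the standard.

```python
def unit_response_rate(I, R, O, NC, U, e):
    """Weighted unit response rate: RRU = I / (I + R + O + NC + e*U).

    All counts are weighted by the base weights (the inverse of the
    probability of selection); e is the estimated proportion of the
    unknown-eligibility cases that are eligible.
    """
    return I / (I + R + O + NC + e * U)

# Hypothetical school survey: 850 weighted completes, 60 refusals,
# 40 other eligible nonrespondents, 0 known-eligible noncontacts, and
# 100 cases of unknown eligibility of which an estimated 50% are eligible.
rru = unit_response_rate(I=850, R=60, O=40, NC=0, U=100, e=0.5)
print(round(rru, 4))  # 850 / 1000 = 0.85
```

Note that the unknown-eligibility cases contribute only their estimated eligible share (e × U) to the denominator.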

STANDARD 1-3-3: Overall unit response rates for cross-sectional analysis (RROC) are calculated as the product of two or more unit level response rates when a survey has multiple stages.

RRO^C = product from i = 1 to K of RRUi

Where K = the number of stages and C denotes cross-sectional.

There may be instances where fully accurate, current year frame data are available for all cases at each stage of a survey; in that case, the estimation of overall response rates could be improved. However, in the absence of current year frame data (as is usually the case), such improvements are not possible and the above formula should be used.
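As a minimal sketch of the multi-stage product (the two-stage rates below are hypothetical):

```python
import math

def overall_cross_sectional_rate(stage_rates):
    """RRO^C: the product of the K stage-level unit response rates."""
    return math.prod(stage_rates)

# Hypothetical two-stage survey: 90% of sampled schools respond, and
# 80% of sampled teachers within responding schools respond.
rro_c = overall_cross_sectional_rate([0.90, 0.80])
print(round(rro_c, 4))  # 0.72
```

Because each stage's nonresponse compounds, the overall rate is always at or below the lowest stage-level rate.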

STANDARD 1-3-4: Special procedures are needed for longitudinal surveys where previous nonrespondents are eligible for inclusion in subsequent waves. The overall unit response rate used in longitudinal analysis (RROL) reflects the proportion of all eligible respondents in the sample who participated in all waves in the analysis, multiplied by the product of the response rates for all but the last stage of data collection used in the analysis. In some longitudinal surveys, some of the stages surveyed for the first wave are not resurveyed in subsequent waves, but the unit response rates for the earlier stages are components of the overall unit response rates for subsequent waves.

RRO^L = [IL / (IL + W + R + O + NC + (e × U))] × PRRUi

where R, O, NC, U, and e are the nonresponse components for the last wave in the analysis (as defined in Standard 1-3-2), and:
    K = the last stage of data collection used in the analysis;
    J = the last wave in the analysis;
    IL = the weighted number of responding cases common to all waves in the analysis;
    W = respondents to the last wave in the analysis who were nonrespondents in at least one of the preceding waves in the analysis; and
    PRRUi = the product of the unit response rates for all but the last stage of data collection.
    EXAMPLE: For an example in which the respondent in one stage is not resurveyed in subsequent waves, consider a teacher survey where states must be contacted to get a list of schools. This results in a first stage unit response rate for the school listing activity (RRU1). The schools must then be contacted to obtain a list of teachers. This results in a second stage unit response rate for the teacher listing activity (RRU2). Then, once a teacher sample is drawn from the lists, the teacher component of the survey has a third stage unit response rate for the responding teachers (RRU3). The product of the first, second, and third stage unit response rates is the overall response rate for teachers in the first wave of the data collection. To examine changes in job status, teachers are followed up in the second wave in the next school year (RRU4) and in the third wave the following year (RRU5). In an analysis that looks only at the results from the first and third waves, the response rate for teachers is the product of the response rate for the school listing function (RRU1), the response rate for the teacher listing function (RRU2), and the response rate for teachers eligible in both waves of the survey (i.e., the intersection of RRU3 and RRU5).

    GUIDELINE 1-3-4A: The product of the unit response rate across all stages and waves used in an analysis is approximately equal to the equation for RROL.
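A sketch of the longitudinal computation, assuming the last-wave eligible denominator is composed of the same nonresponse components defined in Standard 1-3-2; the function name and all counts are hypothetical.

```python
import math

def overall_longitudinal_rate(I_L, W, R, O, NC, U, e, earlier_stage_rates):
    """RRO^L = [I_L / (I_L + W + R + O + NC + e*U)] * PRRUi.

    I_L: weighted respondents common to all waves in the analysis;
    W: last-wave respondents who missed at least one earlier wave;
    R, O, NC, U, e: last-wave nonresponse components (Standard 1-3-2);
    earlier_stage_rates: unit response rates for all but the last stage.
    """
    all_wave_share = I_L / (I_L + W + R + O + NC + e * U)
    return all_wave_share * math.prod(earlier_stage_rates)

# Hypothetical teacher survey: 700 weighted cases responded in every
# wave, 100 last-wave respondents missed an earlier wave, last-wave
# nonresponse of R=100, O=50, NC=0, U=100 with e=0.5, and earlier-stage
# unit response rates of 0.90 (school listing) and 0.80 (teacher listing).
rro_l = overall_longitudinal_rate(700, 100, 100, 50, 0, 100, 0.5, [0.90, 0.80])
print(round(rro_l, 3))  # (700 / 1000) * 0.72 = 0.504
```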

STANDARD 1-3-5: Item response rates (RRI) are calculated as the ratio of the number of respondents for whom an in-scope response was obtained (Ix for item x) to the number of respondents who are asked to answer that item. The number asked to answer an item is the number of unit level respondents (I) minus the number of respondents with a valid skip for item x (Vx). When an abbreviated questionnaire is used to convert refusals, the eliminated questions are treated as item nonresponse.

RRIx = Ix / (I − Vx)

In longitudinal analyses, the numerator of an item response rate includes cases that have data available for all waves included in the analysis and the denominator includes the number of respondents eligible to respond in all waves included in the analysis.

In the case of constructed variables, the numerator includes cases that have available data for the full set of items required to construct the variable, and the denominator includes all respondents eligible to respond to all items in the constructed variable.

    EXAMPLE: In a survey of postsecondary faculty, while all respondents are asked to report the number of hours spent teaching classes per week, only those who report actually teaching classes are asked about the number of hours spent teaching remedial classes (Ix). In this case, the denominator of the item response rate excludes faculty who do not teach classes (I - Vx).

    In the case of a longitudinal analysis, when all faculty are followed up in the next year to monitor time spent on teaching remedial classes, the numerator of the item response rate for this variable is the number of faculty who responded to this variable in both years. The denominator includes all who were asked in both years.

    Faculty job satisfaction is measured using a constructed variable that is the average of 3 separate items -- satisfaction with professional development, satisfaction with administration, and satisfaction with teaching assignment. Only full-time faculty members are eligible to answer the satisfaction items. The numerator includes all full-time faculty who answered all 3 satisfaction items and the denominator includes all full-time faculty who completed a faculty questionnaire.
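The valid-skip adjustment in the item response rate can be sketched as follows; the counts are hypothetical and chosen to match the remedial-teaching example above.

```python
def item_response_rate(I_x, I, V_x):
    """RRI_x = I_x / (I - V_x): responses to item x over respondents asked it.

    I: unit-level respondents; V_x: respondents with a valid skip for
    item x (they were never asked, so they are excluded from the base).
    """
    return I_x / (I - V_x)

# Hypothetical faculty survey: 1,000 unit respondents, 400 of whom have a
# valid skip (they teach no classes), and 540 answering the remedial-hours
# item. The base is the 600 faculty actually asked the item.
rri_x = item_response_rate(I_x=540, I=1000, V_x=400)
print(round(rri_x, 3))  # 540 / 600 = 0.9
```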

STANDARD 1-3-6: Total response rates (RRTx) for specific items are calculated as the product of the overall unit response rate (RRO) and the item response rate for item x (RRIx).

RRTx = RRO × RRIx

    EXAMPLE: The product of the overall response rate from a faculty survey (RRO) and the item response rate for income (RRIx) is the item-specific total response rate for faculty income.
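A one-line sketch of the total response rate, using hypothetical rates:

```python
# Hypothetical faculty survey: overall unit response rate of 0.72 and an
# item response rate of 0.90 for the income item.
rro, rri_x = 0.72, 0.90
rrt_x = rro * rri_x  # RRT_x = RRO * RRI_x
print(round(rrt_x, 3))  # 0.648
```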

STANDARD 1-3-7: To supplement a sample when too few cases are obtained, one or more independent random samples of the population or sampling strata can be drawn and released. When this is done, the released samples must be used in their entirety. In this case, reported response rates must be based on both the original and the added sample cases.

    EXAMPLE: In the event a random supplemental sample is fielded, all cases are included in the response rate, both the original and the supplemental cases. Assume that six schools were sampled from a stratum, each with a base weight of 10. Four are respondents and two are nonrespondents. In addition, a supplemental sample of two schools was sampled from the stratum and was fielded in an attempt to compensate for the low initial rate of response. Both of the cases from the supplemental sample are respondents. Taking the combined sample into account, each fielded school has a base weight of 7.5. The response rate then is:

    ((7.5+7.5+7.5+7.5+7.5+7.5)/(7.5+7.5+7.5+7.5+7.5+7.5+7.5+7.5)) x 100 = 75%.
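The reweighting in this example can be sketched as follows; the function name is hypothetical, and the stratum's total base weight (6 × 10 = 60) is assumed to be spread evenly over all fielded cases.

```python
def combined_sample_response_rate(frame_weight, respondents, fielded):
    """Response rate after a supplemental release, per Standard 1-3-7.

    The stratum's total base weight is spread evenly over all fielded
    cases (original + supplemental), and all fielded cases are counted.
    """
    new_weight = frame_weight / fielded  # revised base weight per case
    return (respondents * new_weight) / (fielded * new_weight) * 100

# Six original cases with base weight 10 (total weight 60), plus two
# supplemental cases fielded; 4 + 2 = 6 respond out of 8 fielded.
rate = combined_sample_response_rate(frame_weight=60, respondents=6, fielded=8)
print(rate)  # 75.0
```

Because every fielded case carries the same revised weight, the weighted rate here equals the unweighted one; that would not hold if the strata had differing weights.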

STANDARD 1-3-8: Substitutions may only be done using matched pairs that are selected as part of the initial sample selection. If substitutions are used to supplement a sample, unit response rates must be calculated without the substituted cases included (i.e., only the original cases are used).

    EXAMPLE: As an example of the case where substitutes are used, but not included in the response rate, assume that two schools were sampled from a stratum. One has a base weight of 20 and the other has a base weight of 10. The first school is a respondent, while the school with a base weight of 10 does not respond. However, a matched pair methodology was used to select two substitutes for each case in the original sample. After fielding the substitutes for the nonrespondent, the first substitute also did not respond, but the second substitute responded. Since we must ignore the substitutes, the response rate is:

    ((20)/(20+10)) × 100 = 66.67%.

In multiple stage sample designs, where substitution occurs only at the first stage, the first stage response rate must be computed ignoring the substitutions. Response rates for other sampling stages are then computed as though no substitution occurred (i.e., in subsequent stages, cases from the substituted units are included in the computations). If multiple stage sample designs use substitution at more than one stage, then the substitutions must be ignored in the computation of the response rate at each stage where substitution is used.
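The substitution rule can be sketched as follows; the function name is hypothetical, and the point is simply that fielded substitutes never enter the computation.

```python
def response_rate_ignoring_substitutes(original_cases):
    """Unit response rate per Standard 1-3-8: substitutes are excluded,
    so only the originally sampled cases enter the computation.

    original_cases: list of (base_weight, responded) pairs for the
    ORIGINAL sample only; fielded substitutes are simply not passed in.
    """
    responding = sum(w for w, ok in original_cases if ok)
    total = sum(w for w, ok in original_cases)
    return responding / total * 100

# The worked example above: original cases with base weights 20
# (respondent) and 10 (nonrespondent); the responding substitute for the
# nonrespondent is ignored entirely.
rate = response_rate_ignoring_substitutes([(20, True), (10, False)])
print(round(rate, 2))  # 66.67
```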


The American Association for Public Opinion Research. (2000). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. Ann Arbor, MI: AAPOR.

National Center for Education Statistics -
U.S. Department of Education