PROCESSING AND EDITING OF DATA

SUBJECT: EVALUATION OF SURVEYS

NCES STANDARD: 4-3

PURPOSE: To provide users of the survey data with the information needed to understand the quality and limitations of the data, and to inform the planning of future surveys or replications of the same survey. The evaluation should also include a systematic assessment of all sources of error for the key statistics that will be studied or reported in NCES publications.

KEY TERMS: coverage error, edit, estimation, field test, frame, imputation, item nonresponse, key variables, longitudinal, nonsampling error, overcoverage, pretest, response rate, sampling error, stage of data collection, survey system, undercoverage, unit nonresponse, and variance.


STANDARD 4-3-1: All proposed and ongoing surveys conducted by NCES must include an evaluation component in the survey design plan. The survey evaluation must include the following:

  1. Identification of the range of potential sources of error;
     
  2. Measurement of the magnitude of sampling error and of the various sources of nonsampling error expected to be problematic (one illustrative calculation is sketched after this list);
     
  3. Studies that identify factors associated with differential levels of error and assess procedures for reducing the magnitude of these errors;
     
  4. Assessment of the quality of the final estimates, including comparisons to external sources and, where possible, comparisons to prior estimates from the same data collection; and
     
  5. A technical report, or series of technical reports, summarizing the results of evaluation studies (for example, a quality profile or a total survey error model).
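
    One way to address component 2 for a single key statistic is sketched below in Python. It computes a design-based standard error from replicate weights using a JK1-style jackknife, along with unweighted unit and item response rates as simple indicators of potential nonresponse error. All names (weighted_mean, rep_wts, and so on) and the choice of a JK1 estimator are illustrative assumptions, not requirements of this standard; NCES surveys may use other replication schemes (for example, JK2 or balanced repeated replication).

        import numpy as np

        def weighted_mean(y, w):
            # Weighted estimate of a mean computed over responding units.
            return np.sum(w * y) / np.sum(w)

        def jackknife_se(y, full_wt, rep_wts):
            # JK1-style jackknife standard error from replicate weights.
            # rep_wts has shape (R, n): one replicate weight per sampled unit.
            theta_full = weighted_mean(y, full_wt)
            theta_reps = np.array([weighted_mean(y, w_r) for w_r in rep_wts])
            R = len(rep_wts)
            variance = (R - 1) / R * np.sum((theta_reps - theta_full) ** 2)
            return np.sqrt(variance)

        def unit_response_rate(n_responding, n_eligible):
            # Unweighted unit response rate: responding units / eligible sampled units.
            return n_responding / n_eligible

        def item_response_rate(n_item_reported, n_responding):
            # Item response rate for one variable among responding units.
            return n_item_reported / n_responding

        # Illustrative use with fabricated values:
        # se = jackknife_se(y, full_wt, rep_wts)
        # print(weighted_mean(y, full_wt), se, unit_response_rate(1450, 1800))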

    GUIDELINE 4-3-1A: Review past surveys similar to the one being planned to determine what statistical evaluation data have been collected in prior surveys and what potential problems have been identified. Based on this review, prepare a written summary of what is known about the sources and magnitude of error.

    GUIDELINE 4-3-1B: Indicate how each issue will be addressed, including the data required from within and outside the study, the comparisons that could be made, any experiments that may be built into the survey, and the evaluation methods to be used.
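
    A comparison of the kind this guideline anticipates might, for example, check a key survey estimate against an external benchmark. The sketch below computes the absolute and relative difference and an approximate z-statistic, treating the two estimates as independent; the function name and the example values are hypothetical and are not drawn from this standard.

        import math

        def benchmark_comparison(survey_est, survey_se, benchmark_est, benchmark_se=0.0):
            # Compare a survey estimate with an external benchmark estimate.
            # benchmark_se defaults to 0.0, i.e., the benchmark is treated as known.
            diff = survey_est - benchmark_est
            rel_diff = diff / benchmark_est if benchmark_est else float("nan")
            combined_se = math.sqrt(survey_se ** 2 + benchmark_se ** 2)
            z = diff / combined_se if combined_se else float("nan")
            return {"difference": diff, "relative_difference": rel_diff, "z_statistic": z}

        # Example with fabricated values: a survey proportion of 0.62 (SE 0.01)
        # compared with an administrative benchmark of 0.60.
        print(benchmark_comparison(0.62, 0.01, 0.60))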

    GUIDELINE 4-3-1C: Watch for additional problem areas that arise during the course of the survey and, where possible, collect and analyze appropriate data to assess the magnitude of these problems.

    GUIDELINE 4-3-1D: Analyze data from the survey evaluation prior to, or concurrently with, the analysis of the survey data so that the results of the evaluation can be taken into account when processing, analyzing, and interpreting the study data.

    GUIDELINE 4-3-1E: List 4-3-A may be used to help guide the development of evaluation plans during the survey planning stage and to develop a monitoring system for possible problems that may emerge during data collection and processing. The list identifies five categories of errors and enumerates potential sources of error within each category, methods to measure or evaluate them, and possible modifications for correcting them.
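
    The structure that Guideline 4-3-1E attributes to List 4-3-A (error categories, sources of error, measurement or evaluation methods, and possible modifications) maps naturally onto a simple record type for an evaluation-monitoring log. The sketch below is one hypothetical way to organize such entries; the category names are drawn loosely from common survey error taxonomies and the key terms of this standard, and are not a reproduction of List 4-3-A.

        from dataclasses import dataclass, field
        from typing import List

        # Illustrative categories only; List 4-3-A is the authoritative enumeration.
        ERROR_CATEGORIES = ("sampling", "coverage", "nonresponse", "measurement", "processing")

        @dataclass
        class ErrorMonitoringEntry:
            # One row of a hypothetical evaluation-monitoring log.
            category: str               # one of ERROR_CATEGORIES
            source: str                 # e.g., "undercoverage of newly opened schools"
            measurement_method: str     # e.g., "frame comparison against an external directory"
            possible_modification: str  # e.g., "supplemental frame-building procedures"
            findings: List[str] = field(default_factory=list)  # notes added as results arrive

            def __post_init__(self):
                if self.category not in ERROR_CATEGORIES:
                    raise ValueError(f"unknown error category: {self.category}")

            def add_finding(self, note: str) -> None:
                self.findings.append(note)

        # Example entry with fabricated content:
        entry = ErrorMonitoringEntry(
            category="coverage",
            source="undercoverage of newly opened schools",
            measurement_method="frame comparison against an external directory",
            possible_modification="supplemental frame-building procedures",
        )
        entry.add_finding("estimated 2 percent undercoverage in the pilot comparison")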


LIST 4-3-A: MEASURING AND EVALUATING ERROR


REFERENCES

Biemer, P.P., Groves, R.M., Lyberg, L.E., Mathiowetz, N.A., and Sudman, S. (eds.). (1991). Measurement Errors in Surveys. New York: John Wiley & Sons.

Bradburn, N.M., Sudman, S., and Associates. (1979). Improving Interviewing Methods and Questionnaire Design: Response Effects to Threatening Questions in Survey Research. San Francisco: Jossey-Bass.

Brick, J.M. and Kalton, G. (1996). "Handling Missing Data in Survey Research." Statistical Methods in Medical Research, 5: 215-238.

Brick, J.M., Cahalan, M., Gray, L., and Severynse, J. (1994). A Study of Selected Nonsampling Errors in the 1991 Survey of Recent College Graduates. Washington, DC: U.S. Department of Education, National Center for Education Statistics (Technical Report NCES 95-640).

Federal Committee on Statistical Methodology. (1978). An Error Profile: Employment as Measured by the Current Population Survey. Washington, DC: U.S. Office of Management and Budget (Statistical Policy Working Paper 3).

Federal Committee on Statistical Methodology. (1988). Quality in Establishment Surveys. Washington, DC: U.S. Office of Management and Budget (Statistical Policy Working Paper 15).

Federal Committee on Statistical Methodology. (1990). Survey Coverage. Washington, DC: U.S. Office of Management and Budget (Statistical Policy Working Paper 17).

Forsman, G. and Schreiner, I. (1991). "The Design and Analysis of Reinterview: An Overview." In P. Biemer, R. Groves, L. Lyberg, N. Mathiowetz, and S. Sudman (eds.), Measurement Errors in Surveys. New York: John Wiley & Sons, 279-302.

Gonzalez, M.E., Ogus, J.L., Shapiro, G., and Tepping, B.J. (1975). "Standards for Discussion and Presentation of Errors in Survey and Census Data." Journal of the American Statistical Association, 70: 351, Part II.

Groves, R.M. and Couper, M.P. (1998). Nonresponse in Household Interview Surveys. New York: John Wiley & Sons.

Groves, R.M. (1989). Survey Errors and Survey Costs. New York: John Wiley & Sons.

Groves, R.M., Biemer, P.P., Lyberg, L.E., Massey, J.T., Nicholls, W.L. II, and Waksberg, J. (eds.). (1988). Telephone Survey Methodology. New York: John Wiley & Sons.

Jabine, T.B., King, K.E., and Petroni, R.J. (1990). Survey of Income and Program Participation Quality Profile. Washington, DC: U.S. Department of Commerce, Bureau of the Census.

Kalton, G. (1998). Survey of Income and Program Participation (SIPP) Quality Profile. 3rd Edition. Working Paper #230. Washington, DC: U.S. Bureau of the Census.

Kulka, R. (1995). "The Use of Incentives to Survey 'Hard-to-Reach' Respondents: A Brief Review of Empirical Research and Current Research Practices." Seminar on New Directions in Statistical Methodology. Washington, DC: U.S. Office of Management and Budget (Statistical Policy Working Paper 23, 256-299).

Lessler, J.T. and Kalsbeek, W.D. (1992). Nonsampling Error in Surveys. New York: John Wiley & Sons.

Lyberg, L. and Dean, P. (1992). "Methods for Reducing Nonresponse Rates: A Review." Paper prepared for presentation at the 1992 meeting of the American Association for Public Opinion Research.

Lyberg, L., Biemer, P., Collins, M., de Leeuw, E., Dippo, C., Schwarz, N., and Trewin, D. (eds.). (1998). Survey Measurement and Process Quality. New York: John Wiley & Sons.

Paxson, M.C., Dillman, D., and Tarnai, J. (1995). "Improving Response to Business Mail Surveys." In Cox et al. (eds.), Business Survey Methods. New York: John Wiley & Sons.

Salvucci, S., Walter, E., Conley, V., Fink, S., and Saba, M. (1997). Measurement Error Studies at the National Center for Education Statistics. Washington, DC: U.S. Department of Education, National Center for Education Statistics, NCES 97-464.

Sudman, S. and Bradburn, N. (1974). Response Effects in Surveys: A Review and Synthesis. Chicago, IL: Aldine.

Sudman, S., Bradburn, N., and Schwarz, N. (1996). Thinking about Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass.

U.S. Bureau of the Census and U.S. Bureau of Labor Statistics. (2000). The Current Population Survey: Design and Methodology. Washington, DC (Technical Paper 63).

United Nations. (1982). National Household Survey Capability Programme, Non-Sampling Errors in Household Surveys: Sources, Assessment and Control. New York: United Nations Department of Technical Cooperation for Development and Statistical Office (preliminary version).