FAST RESPONSE SURVEY SYSTEM (FRSS)
5. DATA QUALITY AND COMPARABILITY

Sampling Error


FRSS estimates are based on the selected samples and, consequently, are subject to sampling variability. The standard error is a measure of the variability of an estimate due to sampling. FRSS standard errors are estimated using jackknife replication.
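Conceptually, jackknife replication re-estimates the statistic many times, each time setting aside part of the sample, and uses the spread of the replicate estimates to approximate the standard error. The sketch below shows a minimal delete-one jackknife for a weighted mean; it is an illustration only, since the actual FRSS computations use replicate weights derived from the stratified sample design, and the function and variable names here are ours, not FRSS's.

```python
import numpy as np

def jackknife_se(y, w):
    """Delete-one jackknife standard error for a weighted mean.

    Simplified sketch of jackknife replication; production survey
    estimates use design-based replicate weights instead.
    """
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    n = len(y)
    # Replicate estimates: drop one sampled unit at a time and re-estimate.
    reps = np.array([
        np.average(np.delete(y, i), weights=np.delete(w, i))
        for i in range(n)
    ])
    # Jackknife variance: (n - 1)/n times the squared deviations of the
    # replicate estimates around their mean.
    var = (n - 1) / n * np.sum((reps - reps.mean()) ** 2)
    return np.sqrt(var)

# Example: standard error of a weighted proportion from a small sample.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=50)    # e.g., each school reports yes (1) or no (0)
w = rng.uniform(50, 150, size=50)  # hypothetical sampling weights
print(jackknife_se(y, w))
```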

Nonsampling Error

Nonsampling error describes variation in the estimates that may be caused by population coverage limitations and by data collection, processing, and reporting procedures. Typical sources of nonsampling error include unit and item nonresponse, differences in respondents' interpretations of the meaning of questions, response differences related to the particular time the survey was conducted, and mistakes made during data preparation. It is difficult to identify and estimate either the amount of nonsampling error or the bias it causes.

To minimize the potential for nonsampling error, FRSS surveys use a variety of procedures, including a pretest of the questionnaire with members of the population to be surveyed. The pretest provides an opportunity to check for consistent interpretation of questions and definitions and to eliminate ambiguous items. The questionnaire and instructions are also extensively reviewed by NCES and the data requestor. In addition, questionnaire responses are extensively edited for accuracy and consistency, and respondents with missing, inconsistent, or out-of-range items are recontacted by telephone to resolve the problems. Data entered for all surveys received by mail, fax, e-mail, or telephone are verified to ensure accuracy.
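To make the editing step concrete, the sketch below shows the kind of missing-item, range, and consistency checks that flag a record for telephone follow-up. The field names and thresholds are hypothetical illustrations, not actual FRSS edit rules.

```python
def edit_checks(record):
    """Return a list of problems that would trigger telephone follow-up.

    Hypothetical edit rules for a school-level questionnaire record,
    represented as a dict of item name -> reported value (or None).
    """
    problems = []
    enrollment = record.get("enrollment")
    in_program = record.get("students_in_program")
    # Missing-item checks: required items left blank.
    if enrollment is None:
        problems.append("missing item: enrollment")
    if in_program is None:
        problems.append("missing item: students_in_program")
    # Range check: enrollment must be plausible for a single school.
    if enrollment is not None and not (0 <= enrollment <= 10_000):
        problems.append("enrollment out of range")
    # Consistency check: a subgroup count cannot exceed the total.
    if enrollment is not None and in_program is not None and in_program > enrollment:
        problems.append("students_in_program exceeds enrollment")
    return problems

# Example: this record is internally inconsistent and would be recontacted.
print(edit_checks({"enrollment": 400, "students_in_program": 525}))
```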


Coverage error. FRSS surveys are subject to any coverage error present in the major NCES data files that serve as their sampling frames. Many FRSS surveys use surveys from the Common Core of Data (CCD) as the sampling frame.

There is a potential for undercoverage bias from the absence of population units (e.g., schools) that open between the time the sampling frame is constructed and the time the FRSS survey is administered. Teacher coverage depends on the teacher lists supplied by the sampled schools and is assumed to be good.

Nonresponse error. Unit response rates for most FRSS surveys are 85 percent or higher (see table FRSS-1), and item nonresponse for most items is less than 1 percent. The survey weights are adjusted for unit nonresponse, and values are imputed for any item with a response rate below 100 percent.
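A common way to adjust weights for unit nonresponse is a ratio adjustment within weighting classes: respondents' base weights are inflated so that they also carry the weight of the nonrespondents in the same class. The sketch below illustrates that general technique; the column names and class structure are hypothetical, not the actual FRSS procedure.

```python
import pandas as pd

def adjust_for_unit_nonresponse(df):
    """Ratio-adjust base weights within weighting classes.

    Assumes hypothetical columns: 'weight_class' (adjustment cell),
    'base_weight' (design weight), and boolean 'responded'.
    """
    # Total base weight and respondent-only base weight, per class.
    totals = df.groupby("weight_class")["base_weight"].sum()
    resp_totals = df[df["responded"]].groupby("weight_class")["base_weight"].sum()
    factors = totals / resp_totals  # nonresponse adjustment factor per class
    out = df.copy()
    out["adj_weight"] = out["base_weight"] * out["weight_class"].map(factors)
    out.loc[~out["responded"], "adj_weight"] = 0.0  # nonrespondents drop out
    return out
```

Within each class, the adjusted weights of the respondents sum to the class's total base weight, so the nonrespondents' share of the population is redistributed to similar responding units. Item nonresponse is handled separately, by imputing the missing values.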

Measurement error. Errors may result from problems such as misrecorded responses; incorrect editing, coding, or data entry; differing interpretations of definitions and the meaning of questions; memory effects; the timing of the survey; and a respondent's inability to report certain data because of the recordkeeping system used. Nonsampling errors are not easy to measure; quantifying them usually requires either an experiment conducted as part of the data collection procedures or data external to the study. The FRSS does not generally conduct such experiments.

Comparability

Some FRSS surveys have been repeated so that results can be compared over time. Examples of these surveys are listed below.

  • The FRSS survey on the condition of public school facilities was conducted in 1999 and 2013, and many of the same data items were collected in both administrations.
  • The FRSS conducted surveys of telecommunications and Internet access in public schools annually from 1994 through 2003 and again in 2005. The telecommunications survey was also conducted in private schools in 1995 and 1998–99.
  • The survey on dual credit and exam-based courses in public high schools was conducted during the 2002–03 school year and repeated in the 2010–11 school year.
  • Sets of surveys on arts education were conducted at the public elementary and secondary school levels in 1994, 1999, and 2009–10. The FRSS also conducted sets of surveys on arts education at the public school teacher level in 2000 and 2010.
  • A district-level survey on technology-based distance education courses for public school students was administered in 2002–03 and 2004–05.

Two types of comparisons are possible with these FRSS data. The first involves comparing the cross-sectional estimates for two or more time periods. The second is a longitudinal analysis of change between 2002–03 and 2004–05.

Occasionally, an FRSS survey is fielded to provide data that can be compared with data from another NCES survey. For example, the FRSS survey School Safety and Discipline: 2013–14 was designed to provide comparable data for a subset of items in the 2009–10 School Survey on Crime and Safety (SSOCS). In another example, the 1996 Survey on Family and School Partnerships in Public Schools, K-8 was designed to provide data that could be compared with parent data from the 1996 National Household Education Survey, as well as with data from the Prospects Study, a congressionally mandated study of educational growth and opportunity conducted from 1991 to 1994. A third example is the 2001 Survey on High School Guidance Counseling, which was designed to provide data that could be compared with data from the 1984 Administrator and Teacher Survey supplement to the High School and Beyond Longitudinal Study.

 

Table FRSS-1. Sample sizes and weighted response rates for recent FRSS surveys: Selected years, 2010–2017

Survey                                                                                    Sample size   Weighted response rate (percent)
FRSS 108: Career and Technical Education Programs in Public School Districts, 2016–17     1,800         86
FRSS 107: Programs and Services for High School English Learners, 2015–16                 1,700         89
FRSS 106: School Safety and Discipline, 2013–14                                           1,600         85
FRSS 105: Condition of Public School Facilities, 2012–13                                  1,800         90
FRSS 104: Dual Credit and Exam-Based Courses, 2010–11                                     1,500         91

SOURCE: U.S. Department of Education, ED Data Inventory. Available at https://datainventory.ed.gov.
