Estimates produced using data from the NHES are subject to two types of errors: sampling errors and nonsampling errors. Nonsampling errors are errors made in the collection and processing of data. Sampling errors occur because the data are collected from a sample, rather than a census, of the population.
Nonsampling error is the term used to describe variations in the estimates that may be caused by population coverage limitations and data collection, processing, and reporting procedures. Typical sources of nonsampling error include unit and item nonresponse, differences in respondents' interpretations of the meaning of survey questions, response differences related to the particular month or time of year when the survey was conducted, the tendency of respondents to give socially desirable responses, and mistakes in data preparation.
In general, it is difficult to identify and estimate either the amount of nonsampling error or the bias caused by this error. For each NHES survey, efforts were made to prevent such errors from occurring and to compensate for them, where possible. For instance, during the survey design phase, cognitive interviews are conducted to assess respondents' knowledge of the survey topics, their comprehension of questions and terms, and the sensitivity of items.
The sample of households based on addresses selected for the NHES:2012 is just one of many possible samples that could have been selected from all households based on addresses. Therefore, estimates produced from this survey may differ from estimates that would have been produced from other samples. This type of variability is called sampling error because it arises from using a sample of households rather than all households.
The standard error is a measure of the variability due to sampling when estimating a statistic; standard errors for estimates presented in this report were computed using a jackknife replication method. Standard errors can be used as a measure of the precision expected from a particular sample. The probability that a complete census count would differ from the sample estimate by less than 1 standard error is about 68 percent. The chance that the difference would be less than 1.65 standard errors is about 90 percent and that the difference would be less than 1.96 standard errors is about 95 percent.
Standard errors for all of the estimates are presented in appendix C and can be used to produce confidence intervals. For example, an estimated 74 percent of students in kindergarten through grade 12 had a parent who reported attending a school or class event (table 2). This figure has an estimated standard error of 0.5. Therefore, the estimated 95 percent confidence interval for this statistic is approximately 73 to 75 percent [74 percent ± (1.96 × 0.5)]. If repeated samples were drawn from the same population and confidence intervals were constructed for the percentage of students in kindergarten through grade 12 who had a parent who reported attending a school or class event, these intervals would contain the true population parameter 95 percent of the time.
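The confidence interval arithmetic above can be worked through directly; this short sketch uses the reported estimate (74 percent) and standard error (0.5) for parent attendance at a school or class event:

```python
# Worked version of the interval above: a 95 percent confidence interval
# for the reported 74 percent estimate with a standard error of 0.5.
estimate = 74.0      # estimated percentage (table 2)
std_error = 0.5      # estimated standard error
z = 1.96             # critical value for a 95 percent confidence level

margin = z * std_error
lower, upper = estimate - margin, estimate + margin
print(f"95% CI: {lower:.2f} to {upper:.2f} percent")  # 73.02 to 74.98
```

Rounding the endpoints to whole percentages gives the approximate 73 to 75 percent interval quoted in the text.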
In order to produce unbiased and consistent estimates of national totals, all of the responses in this report were weighted using the probabilities of selection of the respondents and other adjustments to account for nonresponse and coverage bias. The weight used in this First Look report is FPWT, which is the weight variable available in the PFI data file that is used to estimate the characteristics of the school-age children. In addition to weighting the responses properly, special procedures for estimating the standard errors of the estimates were employed because the NHES:2012 data were collected using a complex sample design. Complex sample designs result in data that violate some of the assumptions that are normally made when assessing the statistical significance of results from a simple random sample. For example, the standard errors of the estimates from these surveys may vary from those that would be expected if the sample were a simple random sample and the observations were independent and identically distributed random variables. The estimates and standard errors presented in this report were produced using SAS 9.2 software and the jackknife 1 (JK1) option as a replication procedure. Eighty replicate weights, FPWT1 to FPWT80, were used to compute sampling errors of estimates. These replicate weights are also available in the PFI data file.
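The replicate-weight computation described above can be sketched as follows. The toy data and randomly perturbed replicate weights are illustrative stand-ins only; in the actual PFI data file the analogous columns are FPWT and FPWT1 to FPWT80, and the replicate weights are constructed from the sample design rather than drawn at random:

```python
# A minimal sketch of JK1 standard-error estimation with replicate weights.
import random

random.seed(1)
R = 80                                         # number of replicate weights
n = 500
y = [random.choice([0, 1]) for _ in range(n)]  # hypothetical yes/no item
full_wt = [1.0] * n                            # stand-in for the full-sample weight (FPWT)
rep_wts = [[w * random.uniform(0.9, 1.1) for w in full_wt] for _ in range(R)]

def weighted_pct(values, weights):
    """Weighted percentage of 1s."""
    return 100.0 * sum(v * w for v, w in zip(values, weights)) / sum(weights)

theta_full = weighted_pct(y, full_wt)
theta_reps = [weighted_pct(y, w) for w in rep_wts]

# JK1 variance: ((R - 1) / R) times the sum of squared deviations of the
# replicate estimates from the full-sample estimate.
variance = (R - 1) / R * sum((t - theta_full) ** 2 for t in theta_reps)
std_error = variance ** 0.5
```

The key point is that the variability among the 80 replicate estimates, not a simple-random-sample formula, drives the standard error, which is how the complex sample design is reflected in the reported precision.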
In the NHES:2012 collection, an initial screener questionnaire was sent to all sampled households to determine whether any eligible children resided in the household. Screener questionnaires were completed by 99,590 households, for a weighted screener unit response rate of 73.8 percent. PFI questionnaires were completed for 17,563 (397 homeschooled and 17,166 enrolled) children, for a weighted unit response rate of 78.4 percent and an overall estimated unit response rate (the product of the screener unit response rate and the PFI unit response rate) of 57.8 percent.
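The overall unit response rate is the product of the two stage rates quoted above. Multiplying the rounded published rates gives a value slightly above the reported 57.8 percent, which was presumably computed from unrounded rates:

```python
screener_rate = 0.738   # weighted screener unit response rate
pfi_rate = 0.784        # weighted PFI unit response rate

overall = screener_rate * pfi_rate
print(f"overall response rate: {overall:.1%}")  # about 57.9%
```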
The NHES:2012 included a bias analysis to evaluate whether nonresponse at the unit and item levels impacted the estimates. The term “bias” has a specific technical definition in this context: It is the expected difference between the estimate from the survey and the actual population value. For example, if all households were included in the survey (i.e., if a census was conducted rather than a sample survey), the difference between the estimate from the survey and the actual population value (which includes persons who did not respond to the survey) would be the bias due to unit nonresponse. Since NHES is based on a sample, the bias is defined as the expected or average value of this difference over all possible samples.
Unit nonresponse bias, or the bias due to the failure of some persons or households in the sample to respond to the survey, can be substantial when two conditions hold. First, the differences between the characteristics of respondents and nonrespondents must be relatively large. For example, consider estimating the percentage of students who have repeated a grade. If the percentage is nearly identical for both respondents and nonrespondents, then the unit nonresponse bias of the estimate will be negligible.
Second, the unit nonresponse rate must be relatively high. If the nonresponse rate is very low relative to the magnitude of the estimates, then the unit nonresponse bias in the estimates will be small, even if the differences in the characteristics between respondents and nonrespondents are relatively large. For example, if the unit nonresponse rate is only 2 percent, then estimates of totals that compose 20 or 30 percent of the population will not be greatly affected by nonresponse, even if the differences in these characteristics between respondents and nonrespondents are relatively large. If the estimate is for a small domain or subgroup (of about 5 or 10 percent of the population), then even a relatively low overall rate of nonresponse can result in important biases if the differences between respondents and nonrespondents are large.
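The interplay of the two conditions above can be illustrated with a common first-order approximation (an assumption of this sketch, not a formula from the report): for an unadjusted respondent mean, the unit nonresponse bias is approximately the nonresponse rate times the difference between respondent and nonrespondent means.

```python
# bias ~ nonresponse_rate * (respondent_mean - nonrespondent_mean),
# a first-order approximation used here for illustration only.
def nonresponse_bias(nr_rate, resp_mean, nonresp_mean):
    return nr_rate * (resp_mean - nonresp_mean)

# A 2 percent nonresponse rate keeps bias small even with a 10-point gap...
low = nonresponse_bias(0.02, 25.0, 15.0)
# ...while a 30 percent nonresponse rate with the same gap does not.
high = nonresponse_bias(0.30, 25.0, 15.0)
print(low, high)  # 0.2 3.0
```

With the same 10 percentage point respondent-nonrespondent difference, the bias grows fifteenfold as the nonresponse rate rises from 2 to 30 percent, which is why both conditions must hold for the bias to be substantial.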
Comparisons between the full sample population and respondent populations were made before and after the nonresponse weighting adjustments were applied to evaluate the extent to which the adjustments reduced any observed nonresponse bias. Chapter 10 of the NHES:2012 Data File User's Manual contains a detailed description of the nonresponse bias analysis. The NHES sampling frame variables were used for the unit nonresponse bias analysis for the screener and topical surveys. The analysis of unit nonresponse bias showed evidence of bias based on the distributions of the sample characteristics for the survey respondents compared to the full eligible sample. However, this bias was greatly reduced by the nonresponse weighting adjustments. In the post-adjusted screener estimates, the number of estimates showing measurable and practical differences was reduced by half. The percentage of estimates with measurable survey and sample differences greater than 1 percentage point was reduced from 7 to 3 percent for the PFI survey by the nonresponse weighting adjustments.
When key survey estimates generated with unadjusted and nonresponse adjusted weights were compared, only a small number of measurable differences were observed. This suggests that none of these variables were powerful predictors of unit response. Therefore, the unit nonresponse adjustment had limited effect on the potential bias, but it is also possible that there was little bias to be removed.
It is also possible that nonresponse bias may still be present in other variables that were not studied. For this reason, it is important to consider other methods of examining unit nonresponse bias. One such method is comparing NHES estimates to other sources. NHES estimates were compared with estimates from the American Community Survey, Current Population Survey, and prior NHES collections. Comparisons were made on common variables of interest—such as child's race/ethnicity and sex; key questionnaire items; and parents' education and household income—to discover any indication of potential bias that may exist in the NHES:2012 data. The results from these comparisons indicate that NHES survey estimates are comparable to other data sources.