Early Childhood Longitudinal Study, Kindergarten Class of 2010–11 (ECLS–K:2011)



5. DATA QUALITY AND COMPARABILITY

Sampling Error


The estimators of sampling variances for the ECLS statistics take the ECLS complex sample design into account. Both replication and Taylor series methods can be used to accurately analyze data from the studies. The paired jackknife replication method using replicate weights can be used to compute approximately unbiased estimates of the standard errors of the estimates. When using the Taylor series method, a different set of stratum and first-stage unit (i.e., PSU) identifiers should be used for each set of weights. Both replicate weights and Taylor series identifiers are provided as part of the ECLS-K:2011 data files.
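As an illustration of how paired-jackknife replicate weights are used, the sketch below computes the variance of a weighted mean by re-estimating the statistic under each replicate weight and summing the squared deviations from the full-sample estimate. The variable names and the toy inputs are hypothetical, not taken from the ECLS-K:2011 files; consult the study's user's manuals for the actual weight names and number of replicates.

```python
import numpy as np

def jk2_variance(y, full_weight, replicate_weights):
    """Paired-jackknife (JK2) variance of a weighted mean.

    y                 : item values, shape (n,)
    full_weight       : full-sample analysis weight, shape (n,)
    replicate_weights : one column per replicate, shape (n, R)
    """
    theta_full = np.average(y, weights=full_weight)
    theta_reps = np.array([
        np.average(y, weights=replicate_weights[:, r])
        for r in range(replicate_weights.shape[1])
    ])
    # JK2: sum of squared deviations of the replicate estimates from
    # the full-sample estimate (no additional multiplier).
    return np.sum((theta_reps - theta_full) ** 2)
```

The standard error of the estimate is the square root of this variance; the same pattern applies to any statistic that can be recomputed under each replicate weight, not just means.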

Design effects. An important analytic procedure is to compare the statistical efficiency of survey estimates from a complex sample survey such as the ECLS-K:2011 with estimates that would have been obtained from a simple random sample (SRS) of the same size. In a stratified clustered design, stratification generally leads to a gain in efficiency over simple random sampling, but clustering has the opposite effect because of the positive intracluster correlation of the units in the cluster. The basic measure of the relative efficiency of the sample is the design effect, defined as the ratio, for a given statistic, of the variance estimate under the actual sample design to the variance estimate that would be obtained with an SRS of the same sample size. In the ECLS-K:2011, a large number of data items were collected from children, parents, teachers, school administrators, and before- and after-school care providers. Each item has its own design effect that can be estimated from the survey data. For example, the median child-level design effect is 3.2 for fall kindergarten and 4.0 for spring kindergarten.
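The design effect defined above can be sketched as follows for the mean of an item, given a complex-design variance estimate (e.g., one produced by the jackknife). The function names are ours, for illustration only.

```python
import numpy as np

def design_effect(y, v_complex):
    """Design effect for the mean of y: ratio of the complex-design
    variance estimate to the SRS variance of the mean, s^2 / n."""
    v_srs = np.var(y, ddof=1) / len(y)
    return v_complex / v_srs

def effective_sample_size(n, deff):
    """Size of an SRS that would yield the same precision
    as the complex sample of size n."""
    return n / deff
```

For example, a median child-level design effect of 3.2 implies that standard errors are roughly sqrt(3.2), or about 1.8 times, larger than SRS-based formulas would suggest.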

Nonsampling Error

Nonsampling error is the term used to describe variations in the estimates that may be caused by population coverage limitations, as well as data collection, processing, and reporting procedures. The sources of nonsampling errors are typically nonresponse, differences in respondents’ interpretations of the meaning of the questions, response differences related to the particular time the survey was conducted, and mistakes in data preparation. Steps are taken to reduce nonsampling error.

In order to reduce nonsampling error associated with respondents misunderstanding what was being asked of them, the survey design phase included cognitive laboratory interviews for the purposes of assessing respondent knowledge of different topics covered in the instruments, comprehension of questions and terms, and item sensitivity. The design phase also included testing of the CAPI/CATI instruments in order to reduce the potential for error to be introduced as a result of errors in administration.

Another potential source of nonsampling error is respondent bias that occurs when respondents systematically misreport (intentionally or unintentionally) information in a study. One potential source of respondent bias in the ECLS surveys is social desirability bias. If there are no systematic differences among specific groups under study in their tendency to give socially desirable responses, then comparisons of the different groups will accurately reflect differences among the groups. An associated error occurs when respondents give unduly positive assessments about those close to them. For example, parents may give more positive assessments of their children’s experiences than might be obtained from institutional records or from the teachers.

Response bias may also be present in the responses teachers provide about each individual student. For example, teachers filled out a survey for each of the sampled children they taught in which they answered questions on the child’s socioemotional development. Since data were collected in the fall of the base-year, first-grade, and second-grade school years, it is possible that the teachers did not have adequate time to observe the children since the start of the school year, and thus some of their responses (especially at these rounds) may be influenced by their expectations based on the children’s outward characteristics (e.g., sex, race, ELL status, disability status). In order to minimize bias, the ECLS-K:2011 used items that were previously used in the ECLS-K. Teachers were involved in the design of the cognitive assessment battery and questionnaires for the ECLS-K. NCES also followed the criteria recommended in a working paper on the accuracy of teachers’ judgments of students’ academic performances.

As in any survey, response bias may be present in the data for the ECLS-K:2011. It is not possible to state precisely how such bias may affect the results. The ECLS-K:2011 has tried to minimize some of these biases by conducting one-on-one, untimed assessments, and by asking some of the same questions about the sampled child of both teachers and parents.

Coverage error. Undercoverage occurs when the sampling frame from which a sample is selected does not fully reflect the target population of inference. The potential for coverage error in the ECLS-K:2011 was reduced by using a school-level frame derived from universe surveys of all schools in the United States and master lists of all kindergartners enrolled in sampled schools.

By designing the child assessments to be both individually administered and untimed, both coverage error and bias were reduced. Untimed, individually administered assessments allowed the study to include most children with special needs and/or who needed some type of accommodation, such as children with a learning disability, with hearing aids, etc. The only children who were excluded from the direct child assessments were those who were blind, those who were deaf, and those whose IEP stated that they were not to be tested. Exclusion from the direct child assessment did not exclude children from other parts of the study (e.g., teacher questionnaire, parent interview).

Nonresponse error. A total of approximately 780 of the 1,320 originally sampled schools participated during the base year of the study. This translates into a weighted unit response rate (weighted by the base weight) of 63 percent for the base year. Due to the lower-than-expected cooperation rate for public schools in the fall of the base year, 85 additional public schools were included in the sample as substitutes for schools that did not participate. These schools were included in order to meet the target sample sizes for students. Substitute schools are not included in the school response rate calculations.

For the base year, the weighted student unit response rates were 87 percent for the fall data collection and 85 percent for the spring data collection. The weighted student unit response rate for participation in the fall or spring data collection was 89 percent (i.e., a child assessment was completed at least once during kindergarten), and the rate for participation in both the fall and spring data collections was 76 percent (i.e., a child assessment was completed in both the fall and spring of kindergarten). The weighted parent unit response rates were 74 percent for the fall data collection and 67 percent for the spring data collection. The weighted parent unit response rate for participation in the fall or spring data collection was 80 percent (i.e., a parent interview was completed at least once during kindergarten), and the rate for participation in both the fall and spring data collections was 55 percent (i.e., a parent interview was completed in both the fall and spring of kindergarten). The overall base-year response rate for students (with a complete assessment in either fall or spring) was 56 percent (63 percent of schools x 89 percent of sampled children). The overall response rates for the kindergarten parent interviews, which take into account school-level response, were 47 percent for the fall kindergarten data collection and 42 percent for the spring kindergarten data collection. The overall base-year response rate for the parent interview (i.e., a complete parent interview in either fall or spring) was 50 percent (63 percent of schools x 80 percent of parents of sampled children).
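The overall-rate arithmetic used here and in the paragraphs that follow is simply the product of the base-year school-level rate and the stage-specific unit-level rate; a minimal illustration (the function name is ours):

```python
def overall_rate(school_rate, unit_rate):
    """Compound response rate: school-level rate times unit-level rate,
    both expressed as proportions."""
    return school_rate * unit_rate

# Base-year figures from the text:
#   child assessment: 0.63 * 0.89 -> about 0.56
#   parent interview: 0.63 * 0.80 -> about 0.50
```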

For the first-grade rounds, the weighted child assessment unit response rates were 89 percent for the fall and 88 percent for the spring. The weighted parent unit response rates were 87 percent for the fall first-grade data collection, and 76 percent for the spring. Overall response rates for the child assessment, which take into account the base-year school-level response rate (63 percent), were 56 percent for the fall and 55 percent for the spring. Overall parent interview response rates, which also take into account school-level response, were 54 percent for the fall first-grade data collection and 48 percent for the spring first-grade data collection.

For the second-grade rounds, the weighted child assessment unit response rates were 84 percent for the fall and 83 percent for the spring. The weighted parent unit response rates were 81 percent for the fall and 74 percent for the spring. The overall response rates for the child assessment were 53 percent for the fall collection and 52 percent for the spring. Overall parent interview response rates, which take into account school-level response, were 51 percent for the fall second-grade data collection and 47 percent for the spring second-grade data collection.

For the third-grade round, the weighted child assessment unit response rate was 80 percent in the spring, the only period of data collection. The weighted parent interview unit response rate was 70 percent. The overall response rate for the child assessment was 50 percent for the third-grade collection, and the overall response rate for the parent interview was 44 percent.

For the fourth-grade round, the weighted child assessment unit response rate was 77 percent. The weighted parent interview unit response rate was 70 percent. The overall response rates, which take into account school-level response in the base year, were 49 percent for the child assessment and 44 percent for the parent interview.

For the fifth-grade round, the weighted child assessment unit response rate was 72 percent. The weighted parent interview response rate was 68 percent. The overall response rate was 45 percent for the child assessment and 42 percent for the parent interview.

A nonresponse bias analysis was conducted to determine if substantial bias was introduced as a result of nonresponse. To examine the effect of school nonresponse, estimates from the ECLS-K:2011 schools were compared to those produced using frame data (i.e., data from the Common Core of Data and the Private School Universe Survey).

The differences in the two sets of estimates are very small, suggesting there is not significant nonresponse bias present in the data. To examine the effect of nonresponse for data collected through instruments that have a response rate lower than 85 percent, estimates produced using weights that include adjustments for nonresponse were compared to estimates produced using weights without nonresponse adjustments. Additionally, for the parent interview data, estimates from the ECLS-K:2011 were compared to those from other data sources (for example, the National Household Education Surveys Program). The results of these nonresponse bias analyses also suggest that there is not a substantial bias due to nonresponse after adjusting for that nonresponse.


Table ECLS-K:2011-1. Weighted unit response rates, by instrument: School years 2010–11 through 2015–16
Instrument                   Kindergarten   First Grade   Second Grade   Third Grade   Fourth Grade   Fifth Grade
                             (2010–11)      (2011–12)     (2012–13)      (2013–14)     (2014–15)      (2015–16)
School                       63             †             †              †             †              †
Child Assessment
 Fall                        87             89            84             †             †              †
 Spring                      85             88            83             80            77             72
Overall Child Assessment1
 Fall                        55             56            53             †             †              †
 Spring                      53             55            52             50            49             45
Parent Interview
 Fall                        74             87            81             †             †              †
 Spring                      67             76            74             70            70             68
Overall Parent Interview1
 Fall                        47             54            51             †             †              †
 Spring                      42             48            47             44            44             42

† Not applicable.
1 The overall response rates take into account the base-year school-level response rate (63 percent).
NOTE: The weighted unit response rates for the child assessment and parent interview were calculated using the student base weight, which is the product of the school base weight and the within-school student weight.
SOURCE: ECLS-K:2011 publications NCES 2012-049, NCES 2015-077, NCES 2015-109, NCES 2018-094, NCES 2019-051, and NCES 2019-130; available at https://nces.ed.gov/pubsearch/getpubcats.asp?sid=024.
