The baseline for the ECLS-K:2011 cohort is children who were enrolled in kindergarten, or who were of kindergarten age in ungraded settings, in the 2010–11 school year. Children who attended early learning centers or institutions that offered education only through kindergarten were included in the study sample, and thus represented in the cohort, if those institutions appeared in the universe collections of NCES's Common Core of Data (CCD) or Private School Universe Survey (PSS).
The ECLS-K:2011 followed a nationally representative cohort of children from kindergarten through the spring of 2016, when most of the children were in fifth grade.
Base-year (i.e., kindergarten) collections. Approximately 20,250 children in 1,320 schools (1,035 public and 285 private) were sampled and eligible for the base-year data collections of the ECLS-K:2011. The sample included children from different racial/ethnic and socioeconomic backgrounds. Asian/Pacific Islander (API) students were oversampled to ensure that the sample included enough students of this race/ethnicity to support accurate estimates for these students as a group.
The ECLS-K:2011 cohort was sampled using a multistage sampling design. The first-stage sampling frame for the ECLS-K:2011 was a list of the 3,141 counties in the United States. The county-level frame was used to form a list of primary sampling units (PSUs) from which a subset of PSUs was sampled. Ten PSUs with a large measure of size (defined as the number of 5-year-old children in the PSU) were included in the ECLS-K:2011 sample with certainty. The remaining PSUs were sampled using a stratified sampling procedure. They were grouped into 40 strata defined by Metropolitan Statistical Area (MSA) status, census geographic region, size class (defined using the measure of size), per capita income, and the race/ethnicity of 5-year-old children residing in the PSU (specifically the percent of 5-year-old APIs, the percent of 5-year-old Blacks, and the percent of 5-year-old Hispanics). Two PSUs were selected without replacement in each stratum, with probability proportional to size and with known joint probability of inclusion of the pair.
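For illustration, the sketch below implements the two-per-stratum, probability-proportional-to-size (PPS) selection using systematic PPS, a common textbook method. It is a sketch only: the PSU names and measures of size are hypothetical, and the study's actual selection algorithm (which produced known joint inclusion probabilities for each pair) is not reproduced here.

```python
import random

def select_two_pps(stratum, size_key="mos"):
    """Select two units from a stratum with probability proportional to
    size, without replacement, via systematic PPS. Assumes certainty PSUs
    (measure of size larger than the selection interval) were already
    removed from the frame, as described in the text."""
    total = sum(unit[size_key] for unit in stratum)
    interval = total / 2                     # two selections per stratum
    start = random.uniform(0, interval)
    targets = [start, start + interval]      # two systematic selection points
    chosen, cumulative, i = [], 0.0, 0
    for unit in stratum:
        cumulative += unit[size_key]
        while i < len(targets) and targets[i] < cumulative:
            chosen.append(unit["psu"])
            i += 1
    return chosen

# Hypothetical stratum: measure of size = number of 5-year-olds in the PSU.
stratum = [{"psu": "A", "mos": 12000}, {"psu": "B", "mos": 8000},
           {"psu": "C", "mos": 5000}, {"psu": "D", "mos": 3000}]
print(select_two_pps(stratum))   # e.g., ['A', 'C']
```

Because the two selection points are exactly half the total measure of size apart, larger PSUs are proportionally more likely to be hit, and no PSU smaller than the interval can be hit twice.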
The second stage of sampling involved selecting samples of public and private schools that had kindergarten programs, or that educated children of kindergarten age in an ungraded setting, from within the sampled PSUs. The target for the number of schools participating in the base year of the study was 180 private and 720 public schools, for a total of 900 schools. To achieve this target, approximately 280 private schools and 1,030 public schools were initially sampled from frames of public and private schools constructed for the 2010 National Assessment of Educational Progress (NAEP). The NAEP frame had not yet been updated and, therefore, was not final at the time it was obtained for use in the ECLS-K:2011. For this reason, a supplemental frame of newly opened schools and kindergarten programs was developed in the spring of 2010, and a supplemental sample of schools selected from that frame was added to the main sample of study schools. Schools were selected with probability proportional to size. The measure of size for schools was kindergarten enrollment, adjusted to take into account the desired oversampling of APIs.
In the third stage of sampling, approximately 23 kindergartners were selected in each sampled school from a list of all enrolled kindergartners or students of kindergarten age being educated in an ungraded classroom. As noted above, API students were oversampled to ensure that the sample included enough students of this race/ethnicity to support accurate estimates for these students as a group.
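A minimal sketch of such within-school selection appears below. The oversampling factor and roster fields are invented; operationally, the within-school rates worked together with the school's adjusted measure of size so that children of each group had roughly equal overall selection probabilities.

```python
import random

def sample_children(roster, n_target=23, api_factor=2.5):
    """Draw a within-school child sample, sampling API children at a
    higher rate than non-API children (the factor is illustrative)."""
    api = [c for c in roster if c["api"]]
    non_api = [c for c in roster if not c["api"]]
    # Base rate r chosen so the expected sample size is n_target:
    # r * len(non_api) + r * api_factor * len(api) = n_target
    r = n_target / (len(non_api) + api_factor * len(api))
    n_api = min(len(api), round(r * api_factor * len(api)))
    n_non = min(len(non_api), round(r * len(non_api)))
    return random.sample(api, n_api) + random.sample(non_api, n_non)

# Hypothetical roster of 60 kindergartners, 10 of them API.
roster = [{"id": i, "api": i < 10} for i in range(60)]
sample = sample_children(roster)
print(len(sample), sum(c["api"] for c in sample))   # 23 children, 8 of them API
```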
A nationally representative sample of approximately 18,170 children from about 1,310 schools participated in the base-year administration of the ECLS-K:2011 in the 2010–11 school year.
First-grade collections. Two data collections were conducted in the 2011–12 school year, when the majority of the children were in first grade: one in the fall and one in the spring. The fall first-grade data collection was conducted with a subsample of 30 PSUs (out of the 90 PSUs selected for the base year of the study). This data collection included base-year respondents (those students who had a completed assessment or parent interview in at least one of the two rounds of kindergarten data collection) who attended the sampled schools in those 30 PSUs during their kindergarten year. The spring first-grade data collection included base-year respondents in all 90 sampled PSUs. Due to the increased data collection costs associated with following students who transferred from their original sample school, in each round of data collection only a subsample of students who changed schools was followed into their new schools. About 5,230 children from about 690 schools participated in the fall first-grade data collection, and about 15,130 children from about 1,970 schools participated in the spring first-grade data collection.
Second-grade collections. The fall second-grade data collection included base-year respondents (those students who had a completed assessment or a completed parent interview in at least one of the two rounds of the kindergarten data collection) who attended schools within a subsample of 30 PSUs during their kindergarten year. This is the same subgroup of students who were included in the fall first-grade data collection. Also in the fall second-grade round, hearing evaluations were conducted with a subsample of over 3,500 children in approximately 300 schools; this subsample consisted of students who participated in the fall second-grade data collection and whose parents consented to the hearing evaluation. The spring second-grade data collection included base-year respondents who attended schools within all 90 sampled PSUs. Due to the increased data collection costs associated with following students who transferred from their original sample school, in each round of data collection only a subsample of these students was followed into their new schools. About 4,740 children from about 860 schools participated in the fall second-grade data collection, and about 13,850 children from about 2,280 schools participated in the spring second-grade data collection.
Third-grade collection. The spring third-grade data collection included base-year respondents who attended schools within all 90 sampled PSUs. Due to the increased data collection costs associated with following students who transferred from their original sample school, in each round of data collection only a subsample of these students was followed into their new schools. About 13,600 children from about 2,520 schools participated in the spring third-grade data collection. The hearing evaluation in third grade was conducted with a subsample of about 6,110 students from about 1,180 schools; the evaluations were conducted with the same subsample of children who were eligible for the fall second-grade hearing evaluations.
Fourth-grade collection. The spring fourth-grade data collection included base-year respondents who attended schools within all 90 sampled PSUs. Due to the increased data collection costs associated with following students who transferred from their original sample school, in each round of data collection only a subsample of these students was followed into their new schools. About 12,100 children from about 2,650 schools participated in the spring fourth-grade data collection.
Fifth-grade collection. The spring fifth-grade data collection included base-year respondents who attended schools within all 90 sampled PSUs. Due to the increased data collection costs associated with following students who transferred from their original sample school, in each round of data collection only a subsample of these students was followed into their new schools. About 11,450 children attending about 2,970 schools participated in the spring fifth-grade data collection.
A critical component of the ECLS-K:2011 was the assessment of children on a number of dimensions, including cognitive, physical, and socioemotional development. These domains were chosen because of their importance to success in school.
Cognitive development. The ECLS-K:2011 direct cognitive assessment battery measured children's knowledge and skills in reading, mathematics, and science, as well as executive function. Because the ECLS-K:2011 is a longitudinal study, the assessments also were designed to support the measurement of growth in these domains, that is, of change in the knowledge and skills children demonstrated from kindergarten through the spring of fifth grade.
The ECLS-K:2011 reading, math, and science specifications were based on the frameworks developed for the National Assessment of Educational Progress (NAEP). Although the NAEP assessments are administered starting in fourth grade, the specifications were extrapolated down to kindergarten based on then-current curriculum standards from several states and, for math, the National Council of Teachers of Mathematics Principles and Standards for School Mathematics. The frameworks necessarily covered content strands spanning a range of grade levels, for example, from number sense (i.e., basic knowledge of numbers) to algebra in mathematics. Content appropriate for most students in the targeted grade level was included in the assessments used in that grade. For example, in the kindergarten math assessment, the “algebra” content strand was assessed through children’s recognition of patterns. While the assessments were designed to contain mostly items that assessed knowledge and skills at the targeted grade level, easier and more difficult items were included to measure the abilities of students performing below or above grade level, respectively.
The cognitive assessments were individually administered by trained assessors using computer-assisted technology and small easel test books containing the assessment items. A reading passages booklet also was used for the reading assessment. The reading and mathematics assessments were administered in both the fall and spring data collections using two-stage adaptive tests. For each assessment, the first stage was a routing section that included items covering a broad range of difficulty. A child’s performance on the routing section determined which one of three second-stage tests (low, middle, or high difficulty) the child was administered. The second-stage tests varied by level of difficulty so that a child would be administered questions appropriate to his or her demonstrated level of ability in each of these cognitive domains. The purpose of this adaptive design was to maximize accuracy of measurement while minimizing administration time.
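The routing logic can be illustrated with a short sketch. The cut scores below are hypothetical; the operational cut scores differed by domain and round.

```python
def second_stage_form(routing_score, cuts=(8, 15)):
    """Route a child to the low, middle, or high second-stage form based
    on the number of routing items answered correctly (cuts are made up)."""
    low_cut, high_cut = cuts
    if routing_score < low_cut:
        return "low"
    if routing_score < high_cut:
        return "middle"
    return "high"

for score in (5, 12, 18):
    print(score, "->", second_stage_form(score))
```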
Kindergarten science knowledge and skills were measured using a 20-item assessment that was administered only in the spring data collection. All students were administered the entire assessment. A two-stage design was not needed for science because the test was relatively short with respect to both time (approximately 10 minutes) and the number of items. In all later rounds of data collection, science was administered using a two-stage assessment, as described for reading and mathematics above.
Executive function. Measures of executive function were included in the direct child assessment batteries to assess children’s cognitive flexibility, working memory, and inhibition.
The Dimensional Change Card Sort (DCCS; Zelazo 2006) was used to collect information on children’s cognitive flexibility. In the version of this task used in the kindergarten and first-grade collections, children were asked to sort a series of 22 picture cards according to different rules. Each card had a picture of either a red rabbit or a blue boat. The children were asked to sort each card into one of two trays depending on the sorting rule they had been told. Beginning in the fall second-grade collection, the DCCS was no longer administered using the picture cards. Instead, a computerized version of the DCCS that also captures children’s reaction time was employed.
The Numbers Reversed subtest of the Woodcock-Johnson III Tests of Cognitive Abilities (Mather and Woodcock 2001) assessed the child’s working memory. This task is a backward digit span task that required the child to repeat an orally presented sequence of numbers in the reverse order in which the numbers were presented. Children were given sequences of increasing length (up to a maximum of eight numbers) until the child got three consecutive number sequences incorrect or completed all number sequences.
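The discontinue rule lends itself to a small sketch. The response vector below is hypothetical, and the real subtest presents sequences in blocks of increasing span length, which this simplification ignores.

```python
def administer_numbers_reversed(responses):
    """Apply the discontinue rule described above: stop after three
    consecutive incorrect sequences, or when all sequences are done.
    `responses` is an ordered list of True/False item outcomes."""
    administered, consecutive_wrong = [], 0
    for correct in responses:
        administered.append(correct)
        consecutive_wrong = 0 if correct else consecutive_wrong + 1
        if consecutive_wrong == 3:
            break
    return sum(administered), len(administered)   # (raw score, items given)

# Hypothetical child: misses three in a row on the sixth through eighth items.
print(administer_numbers_reversed(
    [True, True, True, False, True, False, False, False, True]))
# -> (4, 8): the ninth sequence is never administered
```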
The National Institutes of Health (NIH) Toolbox Flanker Inhibitory Control and Attention Task (also known as the Flanker task) was used to measure both inhibitory control and attention (Zelazo et al. 2013). For the Flanker task, children were asked to focus attention on a central stimulus displayed on a computer screen while ignoring or inhibiting attention to stimuli presented on either side of the central stimulus. The stimuli used were a series of five arrows, pointing either left or right. The arrows that “flanked” the central stimulus (the third of the five arrows) either pointed in the same direction as the central stimulus (congruent) or in the opposite direction (incongruent). Children were presented with 20 trials and were asked to press a button on the computer to indicate the direction the central stimulus was pointing. Both children’s accuracy and response times were captured.
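Since both accuracy and response time were captured, a trial log might be summarized as sketched below. The field names and values are illustrative, not those of the NIH Toolbox software.

```python
def score_flanker(trials):
    """Summarize accuracy and mean correct-trial reaction time (ms),
    separately for congruent and incongruent trials."""
    summary = {}
    for cond in ("congruent", "incongruent"):
        subset = [t for t in trials if t["condition"] == cond]
        n_correct = sum(t["correct"] for t in subset)
        summary[cond] = {
            "accuracy": n_correct / len(subset),
            "mean_rt_ms": (sum(t["rt_ms"] for t in subset if t["correct"])
                           / n_correct) if n_correct else None,
        }
    return summary

trials = [
    {"condition": "congruent",   "correct": True,  "rt_ms": 540},
    {"condition": "congruent",   "correct": True,  "rt_ms": 610},
    {"condition": "incongruent", "correct": True,  "rt_ms": 680},
    {"condition": "incongruent", "correct": False, "rt_ms": 720},
]
print(score_flanker(trials))
```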
Hearing. Hearing evaluations were conducted on a subsample of students during the second-, third-, and fifth-grade data collections. Specially trained health technicians conducted the hearing evaluations, which included a brief visual examination of the ear, a test of middle ear function, and a basic measure of auditory sensitivity.
The hearing evaluations in second and third grade followed similar protocols. First, the health technician asked the child a few questions about his or her hearing and recent experiences that could affect the results of the evaluation, including whether the child had an earache or recent cold or had recently heard any loud noises. Next, the child’s ears were visually examined to see if there was any blockage that could affect the evaluation. The child’s responses to the questions and the results of the visual examination were entered into a laptop computer. Then the health technician used a tympanometer to measure middle-ear functioning. Finally, the child listened to short tones of various pitches and decibel levels that were presented through headphones connected to an audiometer in order to determine hearing thresholds (the softest sounds the child could hear) for each ear. The data collected from the tympanometer and audiometer were automatically transferred from the hearing equipment and saved to the health technician’s laptop.
All components of the hearing evaluations were conducted in English. Custom CAPI (computer assisted personal interviewing) software on the laptops guided health technicians through the different steps of the evaluation and was used to record information throughout the evaluation.
Physical development. Children’s height and weight were measured and body mass index (BMI) calculated at each data collection point in the ECLS-K:2011.
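BMI itself is a simple calculation (weight in kilograms divided by the square of height in meters); the sketch below shows both the metric and U.S.-customary forms. How the study combined its repeated height and weight measurements before computing BMI is not shown here.

```python
def bmi(weight_kg, height_cm):
    """Body mass index: kg / m^2."""
    height_m = height_cm / 100
    return weight_kg / height_m ** 2

def bmi_us(weight_lb, height_in):
    """BMI from pounds and inches, using the standard 703 factor."""
    return 703 * weight_lb / height_in ** 2

# A hypothetical kindergartner: 22.0 kg (48.5 lb), 115.0 cm (45.3 in).
print(round(bmi(22.0, 115.0), 1))     # 16.6
print(round(bmi_us(48.5, 45.3), 1))   # 16.6
```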
Socioemotional development. The ECLS-K:2011 indirect assessments of socioemotional development focused on the skills and behaviors that contribute to social competence. Aspects of social competence include social skills (e.g., cooperation, assertion, responsibility, self-control) and problem behaviors (e.g., impulsive reactions, verbal and physical aggression). Parents and teachers were the primary sources of information on children’s social competence and skills.
Data Collection and Processing
The ECLS-K:2011 data files include data from five primary sources: the students, their parents/guardians, their teachers, their schools, and their before- and after-school care providers. Data collection began in fall 2010 and continued through spring 2016. Hard-copy self-administered questionnaires, one-on-one assessments, and telephone or in-person interviews were used to collect the data. Westat was the data collection contractor for all rounds of data collection.
Reference dates. For the ECLS-K:2011, baseline data were collected from September through December 2010 for the fall and from late March through June 2011 for the spring. Data collection for the first-grade follow-up was conducted from August through December 2011 for the fall and from late March through June 2012 for the spring. Data collection for the second-grade follow-up was conducted from August through December 2012 for the fall and from March through June 2013 for the spring. Data collection for the third-grade follow-up was conducted from March 2014 through June 2014. Data collection for the fourth-grade follow-up was conducted from March 2015 through June 2015. Data collection for the fifth-grade follow-up was conducted from March 2016 through June 2016.
Data collection. Fall and spring data collections included direct child assessments, parent interviews, and teacher questionnaires. In addition, all spring rounds included school questionnaires and special education teacher questionnaires. The spring kindergarten round also included the before- and after-school care provider questionnaires. The fall second-grade, spring third-grade, and spring fifth-grade rounds included the hearing evaluation component. Beginning in the spring of third grade, the child questionnaire was also included. The ECLS-K:2011 survey instruments built upon those from the earlier ECLS studies and carried forward much of the same content and approaches. Development of the before- and after-school care (BASC) questionnaire was based on the wrap-around early care and education provider (WECEP) interview from the ECLS-B. Development of the other survey instruments (i.e., direct child assessment, parent interview, and school staff questionnaires) was based on the instruments from the ECLS-K. Exceptions were the hearing evaluation and executive function components, which were new to the ECLS-K:2011.
In the fall of 2009, two field tests were conducted for the kindergarten through second-grade measures. These field tests served as the primary vehicle for (1) estimating the psychometric parameters of all items in the assessment battery item pool, (2) producing psychometrically sound and valid direct and indirect cognitive assessment instruments, (3) assessing the feasibility of screening children’s vision and hearing during the national collections, and (4) obtaining valid assessments both of an English reading score for Spanish-speaking children not being assessed fully in English and of these children’s early reading skills (e.g., letter recognition and sounds) in Spanish. Development of the survey instruments was also guided by advice from the ECLS-K:2011 Technical Review Panel (TRP), the ECLS-K:2011 Content Review Panels (CRPs), and other experts and consultants. Another field test, conducted in the spring of 2013, evaluated items for inclusion in the third-, fourth-, and fifth-grade assessments as well as the child questionnaires.
The fall and spring rounds of the ECLS-K:2011 data collection included a direct child assessment with cognitive and physical measurement components. The components administered to children who spoke a language other than English at home depended on the children’s performance on a language screener used in the fall and spring base-year and first-grade data collections. The screener consisted of two tasks from the Preschool Language Assessment Scale (preLAS 2000). All children also received the first 18 items of the reading assessment in English, regardless of their home language or performance on the preLAS tasks. These items, plus two items from the preLAS tasks (a total of 20 items), made up the section of the reading assessment referred to as the English basic reading skills (EBRS) section because they measure such skills. Once the EBRS items were administered, the cognitive assessments in English ended for children whose home language was not English and who did not achieve at least a minimum score on the language screener. Spanish-speaking children who did not achieve at least the minimum score on the screener were then administered a short reading assessment in Spanish that measured Spanish early reading skills (SERS), as well as the mathematics and executive function assessments that had been translated into Spanish. Children whose home language was one other than English or Spanish and who did not achieve at least the minimum score on the screener were not administered any of the remaining cognitive assessments beyond the EBRS. All children had their height and weight measured.
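The routing just described reduces to a small decision rule, sketched below. The screener cut score and the exact domain lists by round are not reproduced here; `prelas_passed` simply stands for meeting the minimum screener score.

```python
def assessments_after_ebrs(home_language, prelas_passed):
    """Which cognitive assessments follow the EBRS items, per the routing
    described above (domain lists are simplified)."""
    if home_language == "English" or prelas_passed:
        return ["remaining reading (English)", "mathematics (English)",
                "other cognitive domains (English)"]
    if home_language == "Spanish":
        return ["Spanish early reading skills (SERS)",
                "mathematics (Spanish)", "executive function (Spanish)"]
    return []   # other home languages: nothing beyond the EBRS

print(assessments_after_ebrs("Spanish", prelas_passed=False))
```

Note that height and weight were measured for all children regardless of this routing.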
Unlike in the kindergarten and first-grade data collections, no language screener was used in the second- through fifth-grade collections for children whose home language was not English. By the spring of first grade, nearly all children (99.9 percent) were routed through the assessment in English; therefore, the language screener was not administered beyond the spring of first grade. All children were assessed in English in the spring of fifth grade.
Parent interviews were conducted mostly by telephone, though the interview was conducted in-person for parents who did not have telephones or who preferred an in-person interview. The respondent to the parent interview was usually a parent or guardian in the household who identified himself or herself as the person who knew the most about the child’s care, education, and health. During the later data collection rounds, interviewers attempted to complete the parent interview with the same respondent who answered the parent interview in the previous round, though another parent or guardian in the household who knew about the child’s care, education, and health was selected if the prior-round respondent was not available.
The parent interviews were fully translated into Spanish before data collection began and could be administered by bilingual interviewers if parent respondents preferred to speak in Spanish. The parent interviews were not translated into other languages because doing so was cost prohibitive. However, interviews could be completed with parents who spoke other languages by using an interpreter who translated from English during the interview.
All kindergarten teachers with sampled children were asked to fill out self-administered questionnaires providing information on themselves and their teaching practices. For each of the sampled children they taught, the teachers also completed a child-specific questionnaire. In the spring, school administrators were asked to complete a self-administered questionnaire that included questions on the school’s characteristics and environment, as well as the administrator’s own background. Also in the spring, the special education teachers or related service providers of children in special education were asked to complete a self-administered questionnaire about the children’s experiences in special education and about their own background. Before- and after-school caregivers identified in the fall kindergarten parent interview were asked to complete self-administered hard-copy questionnaires for the BASC component during the spring kindergarten round. The BASC instruments asked about the characteristics of the child’s care arrangement, as well as the provider’s background and professional development activities. The provider with whom the child spent the most time on a weekly basis was the respondent for the care provider questionnaire, as well as for a child-level questionnaire with questions specifically about the study child. There were two versions of the care provider questionnaire, one for providers in center-based arrangements and one for providers in home-based arrangements.
The administration of the different survey instruments in later grades was similar to the administration of those instruments in kindergarten, though the BASC questionnaires were not fielded again. The exception was the administration of the general classroom teacher questionnaires, which underwent a major change starting in fourth grade. In general, as children move into the upper elementary grades, more than one teacher is often involved in a child’s instruction. Because it could not be assumed that each child had only one regular classroom teacher who could respond to questions about the instruction of all subjects and the child’s performance in all subjects, beginning in the fourth-grade data collection, all sampled children had their reading teacher identified and that teacher was asked to complete questionnaires. Information was also collected from the children’s mathematics and science teachers. To reduce the response burden on teachers, half of the sampled children were randomly assigned to have their mathematics teacher complete questionnaires, while the other half of the sampled children had their science teacher complete questionnaires. All identified teachers also received a teacher-level questionnaire that was used to collect information about the teacher.
A continuous quality assurance process was applied to all data collection activities at all rounds. Data collection quality control efforts began with the development and testing of the computer-assisted telephone interviewing (CATI) and computer-assisted personal interviewing (CAPI) applications and the data collection contractor’s Field Management System. As these applications were programmed, extensive testing of the system was conducted. Quality control processes continued with the development of field procedures that maximized cooperation and thereby reduced the potential for nonresponse bias. Quality control activities also were practiced during training and data collection. After data collection began, field supervisors observed each assessor conducting child assessments and made telephone calls to a subset of parents to validate the interview. Field managers also made telephone calls to a subset of the schools to collect information on the school activities for validation purposes.
Editing. Within the CATI/CAPI instruments, the ECLS-K:2011 respondent answers were subjected to both “hard” and “soft” range edits during the interviewing process. Responses outside the soft range of reasonably expected values were confirmed with the respondent and entered a second time. For items with hard ranges, out-of-range values (i.e., those that were not considered possible) were usually not accepted. If the respondent insisted that a response outside the hard range was correct, the interviewer could enter the information as a comment. Data preparation and project staff reviewed these comments. Out-of-range values were accepted if the comments supported the response.
Consistency checks were also built into the CATI/CAPI data collection. When a logical error occurred during an interview, the assessor saw a message requesting verification of the last response and a resolution of the discrepancy, if possible. In some instances, if the verified response still resulted in a logical error, the assessor recorded the problem either in a comment box within the CATI/CAPI program or in a problem report submitted to home office staff.
The overall data editing process consisted of running range edits for soft and hard ranges, running consistency edits, and reviewing frequencies of the results. Where applicable, these steps also were implemented for hard-copy questionnaire instruments.
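A sketch of the soft/hard range logic described above follows; the variable and its ranges are hypothetical.

```python
def check_range(value, soft=(0, 12), hard=(0, 30)):
    """Classify a numeric response against hypothetical soft and hard
    ranges, mirroring the edit rules described above."""
    if not (hard[0] <= value <= hard[1]):
        return "reject (record as a comment if the respondent confirms)"
    if not (soft[0] <= value <= soft[1]):
        return "confirm with respondent and re-enter"
    return "accept"

# e.g., reported hours per week in a before-school care arrangement
for hours in (4, 20, 45):
    print(hours, "->", check_range(hours))
```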
Estimation Methods
Weighting. Weights are used to adjust for disproportionate sampling at each sampling stage, survey nonresponse, and noncoverage of the target population when analyzing complex survey data. The weights are designed to eliminate or reduce bias that would otherwise occur with analyses of unweighted data. The ECLS-K:2011 data are weighted to compensate for unequal probabilities of selection at each sampling stage and to adjust for the effects of school, teacher, before- and after-school care provider, child, and parent nonresponse. The sample weights to be used in the ECLS-K:2011 analyses were developed in several stages. The first stage of the weighting process assigned weights to the sampled primary sampling units that are equal to the inverse of the PSU probability of selection. The second stage of the weighting process assigned weights to the schools sampled within selected PSUs. The base weight for each sampled school is the PSU weight multiplied by the inverse of the probability of selecting the school from the PSU. The base weights of responding schools were adjusted to compensate for nonresponse among the set of eligible schools. These adjustments were made separately for public and private schools.
To compute the base weight for each student in the sample, the school nonresponse-adjusted weight for the school the student attended was multiplied by the within-school student weight. The within-school student weight was calculated separately for API students and non-API students to account for the oversampling of API students. For API students, the within-school student weight is the total number of API kindergarten students in the school divided by the number of API kindergarten students sampled in the school. For non-API students, the within-school student weight is the total number of non-API kindergarten students in the school divided by the number of non-API kindergarten students sampled in the school. The student-level base weight was adjusted for nonresponse to produce each of the final student-level weights created for each round of the ECLS-K:2011 data collection. For each weight, a response status was defined based on the presence of data for particular components. The response status was used to adjust the base weight for nonresponse to arrive at the final full sample weight. Nonresponse classes were formed separately for each school type (public/Catholic/non-Catholic private). Within school type, analysis of child response propensity was conducted using child characteristics such as date of birth and race/ethnicity to form nonresponse classes. The child-level nonresponse adjustment was computed as the sum of the weights for all the eligible (responding and nonresponding) children in a nonresponse class divided by the sum of the weights of the eligible responding children in that nonresponse class.
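The child-level nonresponse adjustment described in the last sentence can be written compactly. The sketch below uses made-up weights and a single nonresponse class; operationally, the classes came from the response-propensity analysis described above.

```python
from collections import defaultdict

def nonresponse_adjust(cases):
    """Within each nonresponse class, multiply respondents' base weights by
    (sum of base weights of all eligible cases in the class) /
    (sum of base weights of responding cases in the class)."""
    total_wt, resp_wt = defaultdict(float), defaultdict(float)
    for c in cases:
        total_wt[c["cls"]] += c["base_wt"]
        if c["responded"]:
            resp_wt[c["cls"]] += c["base_wt"]
    for c in cases:
        factor = total_wt[c["cls"]] / resp_wt[c["cls"]]
        c["final_wt"] = c["base_wt"] * factor if c["responded"] else 0.0
    return cases

demo = [{"cls": 1, "base_wt": 100.0, "responded": True},
        {"cls": 1, "base_wt": 100.0, "responded": False},
        {"cls": 1, "base_wt": 100.0, "responded": True}]
print([c["final_wt"] for c in nonresponse_adjust(demo)])   # [150.0, 0.0, 150.0]
```

The two respondents' adjusted weights sum to 300, preserving the class total that all three eligible children represented before nonresponse.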
A sample weight could be produced for use with data from every component of the study (e.g., data from the fall child assessment, from the fall parent interview, from the spring child assessment, from the spring parent interview, etc.) and for every combination of components for the study (e.g., data from the fall child assessment with data from the fall parent interview or data from the spring child assessment with data from the school administrator questionnaire). However, creating all possible weights for a study with as many components as the ECLS-K:2011 would be impractical. In order to determine which weights would be most useful for researchers analyzing data, completion rates for each component at each round (e.g., response to the child assessment or the parent interview in fall kindergarten) were reviewed, and consideration was given to how analysts are likely to use the data (i.e., which weights will have greatest analytic utility).
Scaling. The majority of the direct cognitive assessment scores computed for the study are based on item response theory (IRT), which maximizes the information on which each estimate of ability is based. IRT uses patterns of correct and incorrect answers to compute estimates on a scale that can be compared across different assessment forms within a given domain. IRT was employed in the ECLS-K:2011 to calculate ability estimates and then derive assessment scores from those ability estimates that can be compared both within a round and across rounds.
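As a sketch of the underlying idea, the two-parameter logistic model below maps an ability value to the probability of a correct answer, and a grid search recovers the maximum-likelihood ability from a response pattern. The item parameters are invented, and the study's operational models and estimation methods were more elaborate than this.

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic IRT model: probability of a correct answer
    given ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_theta(items, responses, lo=-4.0, hi=4.0, steps=801):
    """Maximum-likelihood ability estimate via a simple grid search."""
    best_theta, best_ll = lo, float("-inf")
    for i in range(steps):
        theta = lo + i * (hi - lo) / (steps - 1)
        ll = 0.0
        for (a, b), x in zip(items, responses):
            p = p_correct(theta, a, b)
            ll += math.log(p if x else 1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

items = [(1.2, -1.0), (0.9, 0.0), (1.5, 0.8)]   # invented (a, b) pairs
print(round(estimate_theta(items, [1, 1, 0]), 2))
```

Because the estimate depends on which items were answered correctly, not just how many, children who take different second-stage forms can still be placed on the same scale.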
Imputation. Not all parent respondents provided complete education, occupation, and household income information. Therefore, it was necessary to impute missing values for these components of the socioeconomic status (SES) composite variable before computing the composite. The percentages of missing data for the education and occupation variables were small (for example, 2 to 3 percent in the base year). However, the household income variable generally had a higher rate of missing data (for example, 15.3 percent in the base year). Imputation was done separately for each component using the hot-deck method. In this method, similar respondents and nonrespondents are grouped, or assigned to “imputation cells,” and a respondent’s value is randomly “donated” to a nonrespondent within the same cell. Cells were defined using the demographic characteristics that best predicted the component. Characteristics such as census region, school type (public/Catholic/non-Catholic religious private/other private), school locale (city/suburb/town/rural), household type (female single parent/male single parent/two parents present), parents’ race/ethnicity, and parents’ age were used to form the cells. Chi-square automatic interaction detector (CHAID) analyses were used to determine the predictors. Imputed as well as reported values were used to create imputation cells, but imputed values were not donated. No donor was used more than once.
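A minimal hot-deck sketch under those rules appears below, with invented cell keys and values. Removing each donated value from the donor pool mirrors the rule that no donor is used more than once; the sketch assumes every cell contains at least as many donors as nonrespondents.

```python
import random
from collections import defaultdict

def hot_deck(cases, cell_keys=("region", "school_type"), var="income"):
    """Fill missing values of `var` from a randomly chosen respondent
    (donor) in the same imputation cell; each donor is used at most once,
    and only reported (never imputed) values are donated."""
    donors = defaultdict(list)
    for c in cases:
        if c[var] is not None:
            donors[tuple(c[k] for k in cell_keys)].append(c[var])
    for c in cases:
        if c[var] is None:
            cell = tuple(c[k] for k in cell_keys)
            c[var] = donors[cell].pop(random.randrange(len(donors[cell])))
    return cases

demo = [{"region": "NE", "school_type": "public", "income": 52000},
        {"region": "NE", "school_type": "public", "income": None},
        {"region": "NE", "school_type": "public", "income": 61000}]
print([c["income"] for c in hot_deck(demo)])
```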
For households with both parents present, each parent’s variables were imputed separately. The order of imputation was parent 1’s education, parent 2’s education, parent 1’s labor force status, parent 1’s occupation, parent 2’s labor force status, parent 2’s occupation, and then household income.
Composites indicating the percentage of students in the school who were approved for free school meals and the percentage of students in the school who were approved for reduced-price school meals were derived from information collected from the school administrator during the spring data collection. Some school administrators did not complete the school administrator questionnaire, and among those who did, not all responded to all three questions needed to compute these composites. When school administrator data were missing for public schools, data were taken from the CCD. No external source data were available for private schools.
Hot-deck imputation was then conducted for cases from public schools for which data were not available in the CCD. Imputation cells were created using a measure of district poverty and whether the school received Title I funding. Within each imputation cell, the schools were sorted by longitude and latitude. Hand imputation was used for a small number of private schools.
The ECLS-K:2011 followed students through the spring of 2016, when most of them were in fifth grade. The final longitudinal kindergarten through fifth grade public- and restricted-use data files have been released for the study, although additional supplemental data file releases are planned. These file releases will be announced at https://nces.ed.gov/ecls/.