Preschool: First Findings From the Third Follow-up of the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B)
NCES 2008-025
October 2007

Survey Methodology


The Early Childhood Longitudinal Study, Birth Cohort (ECLS-B), sponsored by the U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics (NCES), is a multisource, multimethod study that focuses on the early home and educational experiences of children from infancy to kindergarten entry. The central goal of the ECLS-B is to provide a comprehensive and reliable set of data that may be used to describe and to better understand children’s early development; their health care, nutrition, and physical well-being; their preparation for school; key transitions during the early childhood years; their experiences in early care and education programs; and how their early experiences relate to their later development, learning, and experiences in school. To achieve this goal, the study is following a nationally representative cohort of children born in the United States in 2001 from birth to kindergarten entry. The parents of approximately 10,700 children born in 2001 participated in the first wave of the study, when the children were approximately 9 months old. Direct assessments were conducted with about 10,200 of these children. The second wave was conducted in 2003, when the children were approximately 2 years old; the parents of approximately 9,850 children participated in this wave, and direct assessments were conducted with about 8,950 of these children. The third wave, the preschool wave, was conducted in 2005–06, when the children were approximately 4 years old; the parents of approximately 8,950 children participated in this wave, and direct assessments were conducted with about 8,750 of these children. This report presents data collected in the third wave, in 2005–06.¹ The Research Triangle Institute (RTI), a social science research firm, conducted the third wave of the study.

The sample comprises children from different racial/ethnic and socioeconomic backgrounds, including oversamples of Chinese and other Asian and Pacific Islander children and American Indian/Alaska Native children.² It also includes oversamples of twins and children with moderately low and very low birth weight. The sample of children born in the year 2001 was selected using a clustered, list frame sampling design. The list frame was made up of registered births in the National Center for Health Statistics (NCHS) vital statistics system. Births were sampled from 96 core primary sampling units (PSUs) representing all infants born in the United States in the year 2001.³ The PSUs were counties and county groups. To support the American Indian/Alaska Native oversample, 18 additional PSUs were selected from a supplemental frame consisting of areas where the population had a higher proportion of American Indian/Alaska Native births. Sampling was based on the occurrence of the birth as listed on the birth certificate. Sampled children subsequently identified by state registrars as having died or having been adopted after the issuance of the birth certificate were excluded from the sample before the 9-month wave was conducted. Also, infants whose birth mothers were younger than 15 years old at the time of the child’s birth were excluded in response to state confidentiality and sensitivity concerns.⁴

For more on sampling, see chapter 4 of the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B), Preschool Data File User’s Manual (2005–06) (Snow et al. 2007).



Data Collection Procedures

The ECLS-B collects information with an in-person computer-assisted parent interview,⁵ an in-person direct child assessment, a self-administered paper-and-pencil father questionnaire, a computer-assisted early care and education provider telephone interview, and an observation of the early care and education setting. This First Look report presents information from the ECLS-B preschool parent interviews, direct child cognitive assessments, and direct child fine motor assessments.

Preschool Parent Interview

The preschool parent data were collected using a computer-assisted personal interview (CAPI) and a Parent Self-Administered Questionnaire.⁶ Parents or guardians were asked to provide information about the sampled child, themselves, the home environment, their parenting attitudes, and family characteristics. Questions regarding family structure, child care use, household income, and community and social support were also included in the parent instrument. The interview was conducted as part of a home visit with the parent and child. The study design called for the child’s biological mother to be the respondent for the parent instrument whenever possible; however, the respondent could be a father, stepparent, adoptive parent, foster parent, grandparent, another relative, or nonrelative guardian. The respondent had to be knowledgeable about the child’s care and education, 15 years of age or older at the time of the child’s birth, and living in the household with the child. About 95 percent of parent interviews were conducted with the child’s biological mother. The parent interviews were conducted primarily in English, but provisions were made to interview parents who spoke other languages. Bilingual interviewers were trained to conduct the parent interview in either English or Spanish. A Spanish CAPI instrument was used when needed, as the instrument was programmed in both English and Spanish. An interpreter (recruited from a professional translating agency or from the community) or a household member was used for interviews with families who spoke languages other than English or Spanish.

Preschool Assessment of Children’s Language, Literacy, Mathematics, and Color Knowledge

The direct child assessment provides information on children’s language, literacy, mathematics, and color knowledge. The language assessment examines children’s receptive and expressive language skills. The literacy assessment examines children’s letter recognition, letter-sound knowledge, knowledge of the conventions of print, and word recognition. The mathematics assessment examines number sense, counting, operations, geometric shapes, pattern understanding, and estimation. The color assessment examines children’s knowledge of basic colors. For more information on how these measures were scored, please see the "Glossary: Constructs and Variables Used in the Analyses" section of this appendix.

The child assessments⁷ were administered during the home visit along with the parent interview. Information on children’s language, literacy, mathematics, color knowledge, and fine motor skills is sensitive to the age at which the children were assessed. Table A-1 presents the percentage distribution of children’s age at the time of assessment, by children’s sex, race/ethnicity, and socioeconomic status. Compared with children assessed within the target age range (48 through 57 months), a higher percentage of children assessed when they were older (more than 57 months) were Hispanic (39.8 percent versus 24.9 percent) and were from the lowest 20 percent of the SES distribution (25.5 percent versus 19.6 percent).

Table A-1. Percentage distribution of children's age at time of assessment, by child and family characteristics: 2005–06
Characteristic                                      Less than 48 months     48 through 57 months     More than 57 months
                                                    (less than 4 years      (4 years old to          (older than 4 years,
                                                    old)                    4 years, 9 months)       9 months)

Total                                                     100.0                   100.0                   100.0
Child's sex
  Male                                                     50.3                    51.6                    50.3
  Female                                                   49.7                    48.4                    49.7
Child's race/ethnicity¹
  White, non-Hispanic                                      59.1                    54.7                    37.2
  Black, non-Hispanic                                      16.1                    13.3                    14.1
  Hispanic                                                 17.8                    24.9                    39.8
  Asian, non-Hispanic                                       2.0                     2.5                     4.0
  American Indian and Alaska Native, non-Hispanic             #                     0.5                     0.5
  Other, non-Hispanic                                       4.7                     4.1                     4.4
Socioeconomic status, preschool round
  Lowest 20 percent                                        18.7                    19.6                    25.5
  Middle 60 percent                                        59.1                    60.1                    60.7
  Highest 20 percent                                       22.2                    20.3                    13.8

# Rounds to zero.
¹ Black, non-Hispanic includes African American. Hispanic includes Latino. Other, non-Hispanic includes Native Hawaiian/other Pacific Islanders and children of more than one race.
NOTE: Standard errors estimated with replicate weights W3R1 through W3R90.
SOURCE: U.S. Department of Education, National Center for Education Statistics, Early Childhood Longitudinal Study, Birth Cohort (ECLS-B), Longitudinal 9-Month-Preschool Restricted-Use Data File.

To reduce respondent burden, the direct cognitive assessment was adaptive. That is, not every child received each item. During administration, if certain sets of items proved too difficult (the child did not answer or incorrectly answered a series of questions), the child was routed out of the next more difficult set of items and routed into another area or domain. Item Response Theory (IRT) modeling was employed to estimate children’s performance on all of the items in each domain, regardless of whether a child was administered a given item. IRT uses patterns of correct and incorrect answers to obtain estimates on a common scale, so that scores from different assessment forms may be compared. The two scores presented in this report that are not IRT based are the overall color knowledge scale score and the expressive language score. For more information on the IRT modeling, please refer to Early Childhood Longitudinal Study, Birth Cohort (ECLS-B), Methodology Report for the Preschool Data Collection (2005–06), Volume I: Psychometrics (Najarian, Lennon, and Snow 2007).
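As a general illustration of how such scale scores are obtained (the specific models used for the ECLS-B are documented in the psychometric report cited above), a two-parameter logistic IRT model expresses the probability that a child with ability $\theta$ answers item $j$ correctly as

\[
P_j(\theta) = \frac{1}{1 + \exp[-a_j(\theta - b_j)]},
\]

where $a_j$ is the item's discrimination and $b_j$ is its difficulty. Because all items are calibrated on a common ability scale, a child's $\theta$ can be estimated from whichever subset of items the adaptive routing administered, which is what makes scores comparable across different assessment paths.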

Home visits were scheduled at times convenient to parents and children (i.e., not during nap, meal, or family time). The total cognitive assessment (language, literacy, and mathematics) averaged about 45 minutes in length. To conduct the child assessments in a home setting, interviewers worked with the parent to find a well-lit, quiet setting, away from sources of noise such as a television or radio, and away from any other distractions, such as the child’s toys, family pets, and so forth. The presence of other family members was discouraged whenever possible. Interviewers conducted the child assessments with the child seated at a kitchen or dining room table whenever possible. If the household did not have available table space, these assessments were conducted using a small folding table provided by RTI for this purpose. Interviewers were trained to sit at a 90-degree angle from the child so that they could see the child’s responses when the assessment item involved pointing; this also limited the opportunity for the child to be distracted from the assessment by the computer screen.

Interviewers were trained and certified on the assessments. Certification was designed to assess the interviewers’ ability to adhere to the standardized protocol and to correctly score children’s responses. An abbreviated assessment computer program was developed specifically for certifications. Selected items from the language, literacy, mathematics, fine motor, and color knowledge assessments were compiled in the certification program. Trainees used a laptop with the certification program and the assessment administration booklet (easel) as they worked through the items. The trainer played the role of the child. For training purposes, trainees said aloud how they scored each item they administered. Trainers were provided with hard-copy instructions on how to conduct the certifications, which itemized the different administration and scoring procedures evaluated during trainee certification. To be certified to administer the assessments, each trainee had to earn at least 75 percent of the total possible score. During the course of data collection, quality control procedures were implemented to verify adherence to the study protocol. Telephone verification interviews with the parent respondents were conducted to confirm the authenticity of the home visit data. In addition, periodic descriptive analyses of the assessment data were conducted to check for any unusual response distributions.

To the extent possible, all children were included in the assessments, including those with special needs. If the child’s family spoke a language other than English or Spanish, interviewers used an interpreter recruited through a professional translation agency or a nearby community agency or organization to conduct the home visit. If these options were not available, a family member was asked to interpret. The cognitive portion of the assessment provided information on children’s language, literacy, mathematics, and color knowledge. In part, the language assessment was designed to determine whether the child possessed sufficient English skills to understand the basic instructions and premises required to be assessed in English during the literacy, mathematics, and color knowledge components. If the child failed these language items, the child was not administered the literacy, mathematics, and color knowledge items in English. However, the motor assessments and physical measurements were still administered by the interpreter or family member. Interviewers also administered an assessability form to all sample children, with the help of the parent respondent. The assessability form gathered such information as whether or not the child had an IEP/IFSP (Individualized Education Program/Individual Family Service Plan) and, if the child did have such a plan, the services being received. Also, the need for special accommodations (such as special adjustments in order to answer questions, point to pictures, follow directions, draw with a pencil, or move around) was identified. Finally, the assessability form documented whether the child was wheelchair-bound or would need sign language or Braille to participate in the assessments. Interviewers were trained to determine, on an individual basis, whether a child with special needs could be administered a given assessment item, with the goal of maximizing inclusion to the fullest extent possible. To make informed decisions, interviewers were guided by information obtained on the assessability form and by discussion with parents about assessment items whose administration might be problematic given the child’s particular need. Interviewers followed standard administration procedures, but they were allowed to modify the administration of items if necessary to accommodate special needs. For example, parents who used sign language to communicate with a deaf child were encouraged to do so during the course of the motor assessments. If a child could not be fairly assessed for reasons such as severe disabilities, and appropriate administration accommodations were not feasible, the child was excluded from that component of the assessment.⁸

Preschool Fine Motor Skills Assessment

Fine motor skills are intricately linked to perception, which is important to a child’s development and well-being. Fine motor skills are those that use the small muscle masses of the body and can include small object manipulation and drawing. Fine motor skills are important from a very young age because children absorb much information through tactile means (Weeks and Ewer-Jones 1991). Poor fine motor skills can also lead to poor performance on commonly used intelligence tests and cause such results to be inaccurate. In the preschool data collection, children’s fine motor skills were assessed, in part, by asking the child to draw forms of basic geometric shapes. Children were shown a drawing and asked to make a similar drawing in pencil on a blank page. Children were provided with seven specific forms to draw: a vertical line, a horizontal line, a circle, a square, a cross, a triangle, and an asterisk.

The copy form items were scored as pass/fail by trained coders at RTI.⁹ For more information on the properties of the fine motor skills score, please see the "Glossary: Constructs and Variables Used in the Analyses" section of this appendix.

As with the direct cognitive assessment, to the extent possible, all children were included in the direct motor assessments, including those with special needs.



Response Rates

The ECLS-B is a nationally representative sample of the 3.9 million children born in the United States in 2001. For the preschool-year data collection, approximately 9,850 cases with completed 2-year parent interviews, and an additional 50 American Indian/Alaska Native (AIAN) cases with completed 9-month parent interviews, were fielded and considered eligible (approximately 100 children were removed from the sample because they had died or permanently left the country). The information in this report was largely derived from the preschool parent interview and the preschool child assessment. Preschool parent interviews were completed for 8,950 of the 10,700 children who participated in the 9-month collection. The weighted unit response rate for the preschool-year parent interview—calculated as the weighted number of children with completed preschool parent interviews divided by the weighted number of children eligible to participate in the preschool collection—is 91.3 percent. The weighted unit response rate for the preschool child assessment is 98.3 percent, meaning that about 98 percent of the children eligible for the preschool collection have at least some assessment data.
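Expressed as a formula (a restatement of the calculation just described, with $w_i$ denoting the base weight of child $i$):

\[
\text{weighted unit response rate} = \frac{\sum_{i \in \text{respondents}} w_i}{\sum_{i \in \text{eligible}} w_i},
\]

which for the preschool parent interview equals 91.3 percent.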

The ECLS-B also collected information from fathers, early care and education providers, and, through an observation, early care and education settings. Although these data are not presented in this report, the weighted unit response rate for the resident father questionnaire, calculated for cases where a resident father was living in the household with the sampled child, is 87.7 percent. The weighted unit response rate for the early care and education provider (ECEP) interview, calculated for cases in which the child had a regular early care and education arrangement, is 87.4 percent. The weighted unit response rate for the child care observation (CCO), calculated for cases with a complete child care provider interview that were sampled for the CCO, is 56.8 percent. All weighted response rates were calculated using the base weight.

The unit response rate is a round-specific rate in that it indicates the proportion of the eligible sample responding to a survey at a particular time point. For a longitudinal study such as the ECLS-B, it is also useful to calculate a longitudinal response rate, also called an overall unit response rate, which takes into account response in all rounds of collection. For example, for the 9-month collection, the weighted overall unit response rate was 74.1 percent (after substitution); this rate dropped to 69.0 percent when the 2-year parent data collection was taken into account. The preschool overall unit response rate for the ECLS-B therefore indicates the proportion of all eligible cases¹⁰ originally sampled for the 9-month collection that participated at preschool. This rate is 63.1 percent when the preschool parent data collection is included, and it drops to 62.0 percent when the preschool child assessment unit response is taken into account. The overall weighted response rate at preschool is 55.3 percent for resident fathers, 55.1 percent for the ECEP, and 35.8 percent for the CCO.
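As a rough check (approximate, because each round's rate is computed with its own weights and eligible pool), the overall rates are close to the products of the successive round-specific rates reported above:

\[
0.690 \times 0.913 \approx 0.630 \qquad \text{and} \qquad 0.631 \times 0.983 \approx 0.620,
\]

consistent, up to rounding, with the reported overall rates of 63.1 and 62.0 percent.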

For more on eligibility requirements, response rates, and efforts to improve survey response, see section 5.6 of the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B), Preschool Data File User’s Manual (2005–06) (Snow et al. 2007).



Data Reliability

Estimates produced using data from the ECLS-B are subject to two types of error: nonsampling and sampling errors. Nonsampling errors are errors made in the collection and processing of data. Sampling errors occur because the data are collected from a sample rather than a census of the population.

Nonsampling Errors

Nonsampling error is the term used to describe variations in the estimates that may be caused by population coverage limitations, as well as data collection, processing, and reporting procedures. The sources of nonsampling errors are typically problems like unit and item nonresponse, differences in respondents’ interpretations of the meaning of the questions, response differences related to the particular time the survey was conducted, and mistakes in data preparation.

In general, it is difficult to identify and estimate either the amount of nonsampling error or the bias caused by this error. In the ECLS-B, efforts were made to prevent such errors from occurring and to compensate for them where possible (e.g., field tests, cognitive laboratory sessions testing items new to the surveys, multi-day interviewer training, certification sessions, and monitoring throughout the collection period of interviewer performance and field data quality).

Another potential source of nonsampling error is respondent bias that occurs when respondents systematically misreport (intentionally or unintentionally) information in a study. One potential source of respondent bias in this survey is social desirability bias. An associated error occurs when respondents give unduly positive assessments about those close to them. For example, parents may give a higher assessment of their children’s motor accomplishments (like feeding themselves) than might be obtained from a direct assessment. If there are no systematic differences among specific groups under study in their tendency to give socially desirable or unduly positive responses, then comparisons of the different groups will provide reasonable measures of relative differences among the groups.

A nonresponse bias analysis was conducted to assess the potential bias in the survey estimates due to unit nonresponse¹¹ for the various components of the survey. Analyses of the weighted estimates versus the sample frame data from the birth certificates indicate the degree to which the adjustments that go into weighting account for potential nonresponse bias. At 9 months, differences between the full sample birth certificate data (frame characteristics) and the weighted respondent estimates were negligible (less than 0.7 percent) for all variables examined (for more information, see Bethel et al. 2005).¹² For the 2-year data collection, analysis of nonresponse bias showed only one difference remaining¹³ after the weights were adjusted for nonresponse and undercoverage (for more information, see Nord et al. 2006). For the preschool data collection, analysis of nonresponse bias showed only one difference remaining¹⁴ after the weights were adjusted for nonresponse and undercoverage (for more information, see Snow et al. 2007).
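The relative differences referred to in footnotes 13 and 14 follow the usual definition (stated here for clarity; it is not spelled out in the text above): if $y_1$ and $y_2$ are the two estimates being compared, then

\[
\text{relative difference} = \frac{y_2 - y_1}{y_1}.
\]

For example, the change in the Chinese share described in footnote 14, from 0.54 to 0.59 percent, is a relative difference of about 9 percent even though the absolute difference is only 0.05 percentage points.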

Information in this report uses items from the preschool parent interview and child assessment. Analysis of potential bias due to item nonresponse is typically conducted for items with less than 85 percent response. None of the items from the preschool parent interview had item response rates less than 85 percent. Because the child assessment data are not reported at the item level, item-level nonresponse rates are not discussed; the appropriate measure is instead the unit response rate for the child assessment, which was 98.3 percent.

Sampling Errors and Weighting

The sample of children born in the United States in 2001 is just one of many possible samples of births that could have been selected. Therefore, estimates produced from the ECLS-B sample may differ from estimates that would have been produced from other samples. This type of variability is called sampling error because it arises from collecting data on a sample of children, rather than all children, born in 2001.

The standard error is a measure of variability due to sampling when estimating a statistic. Standard errors for estimates presented in this report were computed using a jackknife replication method. Standard errors can be used as a measure of the precision expected from a particular sample. The probability that a sample estimate would differ from the census count by less than 1 standard error is 68 percent. The probability that the difference would be less than 1.65 standard errors is about 90 percent and that the difference would be less than 1.96 standard errors is about 95 percent.
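For example (with illustrative numbers, not figures from this report), an estimated percentage of 50.0 with a standard error of 1.2 yields an approximate 95 percent confidence interval of

\[
50.0 \pm 1.96 \times 1.2 = (47.6,\ 52.4).
\]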

In order to produce national estimates from the ECLS-B, the sample data were weighted. Weighting the data adjusts for unequal selection probabilities and unit nonresponse and, through raking adjustments, provides estimates that reflect the population under study. Estimates presented in this report use the preschool data collection parent and/or child respondent weight (W3R0), which is the weight that accounts for the child’s probability of selection into the sample, as well as nonresponse to the preschool parent interview.

Replication methods of variance estimation were used to reflect the actual sample design used in the ECLS-B. A form of the jackknife replication method (JK2) using 90 replicate weights was used to compute approximately unbiased estimates of the standard errors of the estimates in the report, using WesVar version 4.0 software. Jackknife methods were used to estimate the precision of the estimates of the reported national percentages, means, and counts.
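A minimal sketch of this computation, assuming a data extract containing an analysis variable along with the full-sample weight W3R0 and the replicate weights W3R1 through W3R90 (the variable and file names below are hypothetical, and this is an illustration rather than the WesVar implementation; the JK2 estimator it applies sums the squared deviations of the replicate estimates from the full-sample estimate):

    import numpy as np
    import pandas as pd

    def weighted_mean(y, w):
        """Weighted mean of y using weights w."""
        return np.sum(w * y) / np.sum(w)

    def jk2_estimate_and_se(df, ycol, full_weight="W3R0", n_reps=90):
        """Full-sample weighted mean and its JK2 standard error.

        JK2 variance = sum over the 90 replicates of the squared
        deviation of each replicate estimate from the full-sample
        estimate (no additional scaling factor for JK2).
        """
        full = weighted_mean(df[ycol], df[full_weight])
        reps = np.array([
            weighted_mean(df[ycol], df[f"W3R{r}"])  # replicate weights W3R1..W3R90
            for r in range(1, n_reps + 1)
        ])
        variance = np.sum((reps - full) ** 2)
        return full, np.sqrt(variance)

    # Hypothetical usage with a restricted-use data extract:
    # df = pd.read_csv("eclsb_preschool_extract.csv")
    # estimate, se = jk2_estimate_and_se(df, "assessment_score")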



¹ The preschool wave of data collection began in late August 2005 and ended in mid-July 2006.
² Other Asian/Pacific Islander refers to children whose ethnicity is any Indo-Southeastern Asian or Far Eastern Asian ethnicity except Chinese. Chinese children were oversampled separately as the largest component of the Asian/Pacific Islander ethnic group.
³ The sample design called for the use of the birth certificate records received through the NCHS vital statistics system as the sampling frame for selecting births within selected PSUs. In a few states, state institutional review boards or registrar offices had requirements that placed restrictions on contacting parents based on birth certificate information. In some cases, these restrictions would have resulted in low response rates or even complete nonparticipation. In states that required active consent or that prohibited follow-back research studies, substitution and alternative frames were used. Please see Bethel et al. 2005 for more information.
⁴ Of all births in 2001, 0.2 percent were to mothers younger than 15 years old at the time of birth.
⁵ The parent interview is loaded into a computer-based interviewing program, and the field interviewer reads the questions to the parent and enters the responses into the computer. The computer program routes the interview through the appropriate question sequence.
⁶ The self-administered questionnaire was provided to parents as an audio computer-assisted self-interview (ACASI). Respondents were given earphones, enabling them to listen to the questions and privately enter their responses into the interviewer’s laptop.
⁷ The preschool round direct child assessment comprised four parts: (1) cognitive assessments; (2) socioemotional assessments (a caregiver-child interaction through the Two Bags Task); (3) physical measurements; and (4) fine and gross motor assessments.
⁸ Two percent of children were excluded from the cognitive assessment based on language (lack of English skills). Approximately 0.04 percent were excluded from the cognitive assessment based on a physical limitation. Estimates are weighted by the preschool parent respondent weight (W3R0).
⁹ Reliability estimates for the items coded centrally at RTI were computed as a percentage agreement between coders and a group of standard coders, who coded approximately 5 percent of each coder’s cases. Percentage agreement across the seven items ranged from 85 to 94 percent (with the agreement on “circle” being the lowest, at 85 percent).
¹⁰ All 9,850 cases with 2-year parent interview completes and an additional 70 American Indian/Alaska Native cases with 9-month parent interview completes were fielded and considered eligible for the preschool data collection (with the exception of 10 cases in which the child had died and 80 cases in which the child had moved permanently abroad between the 2-year interview and the preschool wave). All other cases were included in the preschool wave; there was no further sampling of cases, except for the child care observation component.
¹¹ The unit response rate is a round-specific rate in that it indicates the proportion of the eligible sample responding to a survey at a particular time point.
¹² Variables examined were: age of mother; age of father; mother’s education; child’s race; birth order; number of prenatal visits; five-minute APGAR score; mother’s alcohol use during pregnancy; presence of medical risk factors; presence of complications in labor and delivery; presence of congenital anomalies; birth weight; plurality; population of PMSA/MSA where the mother resided at the time of birth; and census region where the mother resided when interviewed.
¹³ Only one variable, the percentage of households with 5 members, showed a difference that remained significant after final 2-year weight adjustments (19.6 versus 19.4 percent in columns (2) and (3), respectively; p value = 0.044).
¹⁴ Statistically significant differences were examined to see whether they were meaningful in a substantive sense, using the rule that relative differences less than 5 percent are small and likely not meaningful. One variable had relative bias greater than 5 percent: in the race/ethnicity distribution, the percentage of Chinese children showed a percent relative difference of greater than 5 percent. However, the actual difference was only 0.05 percentage points (in the weighted race/ethnicity distribution, 0.54 percent were classified as Chinese at 9 months and 0.59 percent at preschool), suggesting this difference could be considered insubstantial.

