Children's Reading and Mathematics Achievement in Kindergarten and First Grade

Children's Reading and Mathematics Knowledge and Skills

The ECLS-K reading and mathematics assessment was directly administered to children in a quiet one-on-one setting. Children used pointing or verbal responses to complete the tasks; they were not asked to write anything or to explain their reasoning. The data were collected using computer-assisted interviewing methodology. The assessment included the use of a small easel with pictures, letters of the alphabet, words, short sentences, numbers, or number problems. This report includes information from the assessments administered in the fall and spring of kindergarten, and the spring of first grade.4

The reading assessment,5 designed specifically for the ECLS-K (National Center for Education Statistics 2001), was administered in English; the mathematics assessment was administered in both English and Spanish. Before the English reading and mathematics assessments were administered, children's English language proficiency was evaluated. Children whose home language was a language other than English (as determined by school records) were administered the Oral Language Development Scale (OLDS) (for more information, see the ECLS-K Base-Year User's Manual, National Center for Education Statistics 2001). If children demonstrated sufficient proficiency in English for the ECLS-K direct child assessment, they received the English reading and mathematics battery. This report focuses on those children who were assessed in English at all points in time.

The reading assessment included questions designed to measure basic skills (letter recognition, beginning and ending sounds), vocabulary (receptive vocabulary, as in "point to the picture of the cat"), and comprehension (listening comprehension, words in context). Comprehension items were targeted to measure skills in initial understanding, developing interpretation, personal reflection, and demonstrating critical stance (evaluative judgments about the text, such as recognizing implausible events).

The mathematics assessment items were designed to measure skills in conceptual knowledge, procedural knowledge, and problem solving. Approximately one-half of the mathematics assessment consisted of questions on number sense and number properties and operations. The remainder of the assessment included questions in measurement; geometry and spatial sense; data analysis, statistics, and probability; and patterns, algebra, and functions. Each of the mathematics assessment forms contained several items for which manipulatives (e.g., blocks) were available for children to use in solving the problems. Paper and pencil were also offered to the children for the appropriate parts of the assessment.

In this report, information on children's overall reading and mathematics knowledge and skills is presented as a standardized t-score.6 T-scores provide norm-referenced measurements of achievement; that is, estimates of achievement level relative to the population as a whole. A high t-score mean for a particular subgroup indicates that the group's performance is high in comparison with other groups. It does not mean that group members have mastered a particular set of skills, only that their performance level is higher than that of a comparison group. Similarly, a change in t-score means over time reflects a change in the group's status with respect to other groups. Consequently, t-scores are not ideal for indicating gains in achievement.
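A t-score is a linear standardization to a mean of 50 and a standard deviation of 10. The sketch below illustrates that transformation; it is a minimal illustration only, since the actual ECLS-K t-scores are transformations of IRT scale scores normed on the full weighted national sample, not recomputed per group as here.

```python
def t_scores(scale_scores):
    """Standardize a list of scale scores to t-scores (mean 50, SD 10).

    Illustrative only: the ECLS-K norms come from the weighted
    national sample, not from the group being scored.
    """
    n = len(scale_scores)
    mean = sum(scale_scores) / n
    sd = (sum((x - mean) ** 2 for x in scale_scores) / n) ** 0.5
    return [50 + 10 * (x - mean) / sd for x in scale_scores]
```

Because the transformation is relative to the norming population, a score of 60 says only that a child is one standard deviation above that population's mean, not that any particular skill was mastered.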

In addition to the standardized overall achievement score (i.e., t-score) for reading and mathematics, specific proficiency scores were calculated.7 These proficiency scores represent a progression of skills. The reading assessment contained five proficiency levels (from easiest to most difficult): (1) recognizing letters (identifying upper and lower case letters by sight); (2) understanding the letter-sound relationship at the beginning of words (identifying the letter that represents the sound at the beginning of a word); (3) understanding the letter-sound relationship at the end of words (identifying the letter that represents the sound at the end of a word); (4) recognizing words by sight (reading simple words aloud); and (5) understanding words in context (listening comprehension and reading simple text passages).

The mathematics assessment also contained five proficiency levels: (1) numbers and shapes refers to a cluster of items that measures reading numerals, recognizing shapes, and counting to 10; (2) relative size refers to a cluster of items that measures reading numerals, counting beyond 10, sequencing patterns, and using nonstandard units of length to compare objects; (3) ordinality refers to items that measure number sequence, reading two-digit numerals, identifying the ordinal position of an object, and solving a word problem; (4) addition and subtraction refers to a cluster of items that measures calculating sums up to 10 and relationships of numbers in sequence; and (5) multiplication and division involves items that measure problem solving using multiplication and division and number patterns.

Children's proficiency in specific reading and mathematics skills was calculated in two different ways. First, to estimate the percentage of the total population that can demonstrate specific skills, proficiency probability scores were used (i.e., scores giving the probability that a child would have passed a given proficiency level). These scores are IRT-based probabilities and are continuous, ranging from 0 to 1; they are estimates based on overall performance rather than counts of actual item responses.

Second, to determine in a dichotomous (i.e., yes or no) fashion whether a specific child is proficient in a specific skill, the specific items in a cluster (i.e., proficiency area) were used. For each proficiency level, a score of 1 was assigned to children who correctly answered at least three of the four items in the cluster, and a score of 0 was assigned if at least two items were answered incorrectly or "don't know."

Both the continuous score and the dichotomous score reference the same set of assessment items. Because of this slight computational difference, the estimates produced by the two scores do not exactly match (compare tables 1 and 2 with table 5). The continuous proficiency probability scores maximize the amount of information the ECLS-K captured on children's reading and mathematics knowledge and skills (through an IRT model, information is provided on every item in the assessment battery). Therefore, this report uses the proficiency probability scores when presenting information on children's reading and mathematics knowledge and skills. The dichotomous proficiency scores are used only to determine, in a yes/no fashion, whether a child demonstrated a specific reading or mathematics skill at kindergarten entry (e.g., table 5).

4 A subsample of children was also assessed in the fall of first grade. Findings from that assessment will be included in future reports.
5 To balance measurement accuracy against assessment time, the cognitive assessment was developed as a two-stage assessment. Separately for each domain, all children received the first-stage routing section. A routing section is a set of items of varying difficulty levels, in which all children receive all items. Depending on the number of items children correctly answered in the routing section, they were then "routed" into a second-stage form, which varied by level of difficulty. The two-stage design allowed for the maximum amount of information with efficiency of time: the routing section provided a rough estimate of each child's achievement level, so that a second-stage form with items of the appropriate difficulty for maximizing measurement accuracy could be selected. Scores for each domain were developed using Item Response Theory (IRT). These scores can be compared regardless of which second-stage form a child was administered; in other words, each child has a score that reflects the entire battery of items.
6 The t-score is a transformation of the Item Response Theory-based (IRT) scale score.
7 For information on reliability of the scores, please see the Methods and Technical Notes section.
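The two-stage routing design described in footnote 5 can be sketched as a simple score-based branch. The cutoff values and form names below are hypothetical; the actual ECLS-K routing rules and number of second-stage forms varied by domain and round.

```python
def select_second_stage_form(routing_correct, cutoffs=(5, 10)):
    """Route a child to a second-stage form based on the number of
    routing-section items answered correctly.

    `cutoffs` are hypothetical thresholds, not the operational
    ECLS-K values: below the first cutoff the child receives the
    low-difficulty form, below the second the middle form, and
    otherwise the high-difficulty form.
    """
    low_cut, high_cut = cutoffs
    if routing_correct < low_cut:
        return "low-difficulty form"
    elif routing_correct < high_cut:
        return "middle-difficulty form"
    return "high-difficulty form"
```

Because IRT places every item on a common scale, children routed to different second-stage forms still receive comparable scores, as the footnote notes.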
