EDUCATION INDICATORS: An International Perspective


Indicator 7: Reading Literacy

Comparing reading scores from the International Association for the Evaluation of Educational Achievement's (IEA) Reading Literacy Study and the National Assessment of Educational Progress (NAEP)

In contrast to the positive results of the IEA Reading Literacy Study, in which American 4th- and 9th-grade students compare well with students from other countries, the picture of American students' reading proficiency provided by the National Assessment of Educational Progress (NAEP) is less optimistic. For example, in 1992 NAEP reported that fewer than 30 percent of 4th- and 8th-graders, and only 40 percent of 12th-graders, met or exceeded the Proficient level in reading. /1 Proficient is the central achievement level and represents solid academic performance and competency over challenging subject matter appropriate for each grade. The Advanced level is the highest, signifying superior performance beyond Proficient; only 3 percent of students at any of the three grades assessed attained it. By 1994 the NAEP picture was slightly worse, as the average reading proficiency of 12th-grade students declined significantly between 1992 and 1994. /2 It should be noted that the setting of achievement levels for the national assessment is relatively new and still in transition.

This contrast between the positive results reported by IEA and the less positive results reported by NAEP suggests that the two assessments may report or measure different things. That possibility is examined in the discussion that follows.

Differing points of comparison

A first consideration is that although both studies describe the reading performance of analogous samples of students, their bases for reporting differ considerably.

In the case of IEA, reporting is based on comparisons of the performance of groups of students within and across countries. Student performance in one country is compared with that of students in the other participating countries, or students in one subgroup within a country are compared with students in other subgroups within the same country. These comparisons address issues such as each country's mean performance, or how the distribution of scores within one country compares with the distributions in other countries. The point of comparison is therefore relative, or normative, rather than absolute: students are always being compared against other students, not against a fixed standard of knowledge or skill.

Much of the NAEP reporting, on the other hand, is based on comparisons between actual student performance and desired performance. It is a comparison against an absolute standard, or criterion, defined independently of what students actually do. The reporting is therefore referenced to a description of the tasks students are expected to be able to do, or that some person or group thinks they should be able to do. This is a criterion-referenced comparison.

Success or failure in one context does not necessarily imply success or failure in the other. Thus, American students can do very well in the relative comparisons used by IEA and yet, within the NAEP context, fall short of what the National Assessment Governing Board (NAGB) believes they should be able to do.
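
The distinction can be illustrated with a short sketch. The figures below are hypothetical and are not drawn from either study; they simply contrast ranking groups of students against one another (the normative approach) with comparing each student against a fixed cut score set in advance (the criterion-referenced approach).

    # Illustrative sketch only: hypothetical scores and a hypothetical cut score,
    # not actual IEA or NAEP data.
    country_scores = {
        "Country A": [480, 510, 535, 560, 600],
        "Country B": [450, 490, 520, 545, 580],
    }

    # Normative (IEA-style) reporting: each country's performance is described
    # relative to the other participating countries.
    means = {name: sum(s) / len(s) for name, s in country_scores.items()}
    print("Relative standing:", sorted(means, key=means.get, reverse=True))

    # Criterion-referenced (NAEP-style) reporting: each student is compared with
    # a cut score defined independently of how any group of students performs.
    PROFICIENT_CUT = 550
    for name, scores in country_scores.items():
        share = sum(s >= PROFICIENT_CUT for s in scores) / len(scores)
        print(name, "-", format(share, ".0%"), "at or above the cut score")

Under the normative approach a country can rank first and still have most of its students fall below the cut score; under the criterion-referenced approach every country could, in principle, have all of its students meet the standard.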

Differing emphases

In addition, NAEP and IEA assess different aspects of reading. More than 90 percent of the IEA items assess tasks covered in only 17 percent of NAEP items. Further, virtually all the IEA items are aimed solely at literal comprehension and interpretation. Items of that kind make up only one-third of NAEP reading assessments.

Both IEA and NAEP expect literal comprehension and the development of understanding, and both define domains of reading literacy. There is, however, a major difference between IEA and NAEP in what students must do to demonstrate their comprehension. Success in IEA depends on reading and correctly answering more questions directly related to a reading passage, whereas reaching NAEP's Advanced level requires more interpretive and higher-level thinking. Fourth-grade students in NAEP, for example, had to interpret text, summarize information across a text, develop ideas about textual information, and formulate more complex questions about text. /3 Eighth-graders were required to show an even greater level of competency: they had to compare and contrast information across multiple texts, connect inferences with themes, understand underlying meanings, integrate prior knowledge with text interpretations, and demonstrate some ability to evaluate the limitations of documents. /4

Equally important, NAEP requires students to generate answers in their own words much more frequently than IEA, which mainly asks students to choose among the test designers' options. Thus the skills required by the IEA reading tasks can be seen as a subset of those required by NAEP. Moreover, the IEA test items did not cover the entire expected ability range: many American students answered every item correctly, creating a ceiling effect, so that distinguishing among the abilities of students in the upper range is not possible.

In contrast, the range of item difficulty on the NAEP reading assessment exceeds the ability of most American students. Few, if any, students would correctly answer all items. Thus, differences in the abilities of students in the upper range can be distinguished easily.
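
The ceiling-effect argument can also be illustrated with a short sketch. The item counts and scores below are hypothetical, not actual test data; the point is simply that when many examinees earn the maximum score, a test can no longer separate them, whereas a harder test keeps the upper range spread out.

    # Illustrative sketch only: hypothetical raw scores on a 40-item test.
    easy_test = [40, 40, 40, 40, 39, 38, 35]   # four students tie at the maximum score
    hard_test = [38, 35, 31, 27, 24, 20, 15]   # the same students spread out on a harder test

    def distinct_top_scores(scores, top_n=4):
        """Number of distinct score values among the top_n examinees."""
        return len(set(sorted(scores, reverse=True)[:top_n]))

    print(distinct_top_scores(easy_test))   # 1 -> the strongest students cannot be told apart
    print(distinct_top_scores(hard_test))   # 4 -> differences in the upper range remain visible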

One might wonder whether students in the other participating countries would do better than American students when judged against the standards set by NAGB. There is a high probability that the rank ordering, or relative performance, of countries would remain largely the same. /5 Therefore, it seems reasonable to conclude that American students would compare well with students in other countries even if the NAEP assessment were the one administered.

Footnotes

1/ I.V. Mullis, J.R. Campbell, and A.E. Farstrup, NAEP 1992 Reading Report Card for the Nation and the States (Washington, D.C.: 1993).

2/ J.R. Campbell, P.L. Donahue, C.M. Reese, and G.W. Phillips, NAEP 1994 Reading Report Card for the Nation and the States (Washington, D.C.: 1996).

3/ Mullis, Campbell, and Farstrup, op. cit.

4/ Ibid.

5/ This statement is derived from the theoretical underpinnings of item response theory and its application to the test scaling used for both the IEA Reading Literacy Study and the NAEP Reading Assessment.
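
For readers unfamiliar with item response theory, one common formulation used in large-scale assessment scaling is the three-parameter logistic model, shown below in LaTeX notation. Whether either study used this particular variant is not specified here; the expression is offered only as an illustration of the general approach.

    P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta - b_i)}}

Here P_i(\theta) is the probability that a student with proficiency \theta answers item i correctly, and a_i, b_i, and c_i are the item's discrimination, difficulty, and lower-asymptote (guessing) parameters. Because estimated proficiency rises monotonically with performance on the scaled items, the relative ordering of groups tends to be preserved across assessments scaled in this way.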


