

TRANSCRIPT: The Nation's Report Card - Results from the 2009 NAEP Reading Assessment

Dr. Peggy G. Carr

Hello, this is Dr. Peggy Carr, Associate Commissioner of the National Center for Education Statistics (NCES). Welcome to Ask NAEP, an online Q&A session about the findings for the NAEP 2009 Reading report. I hope you've had time to look at the results on the Nation's Report Card website, and that I can answer any questions you may have about them. I have a team of NAEP specialists with me to make sure your questions are addressed thoroughly, so there may be a slight delay between the time you submit your question and the time the answer appears.

We are usually able to post answers for all questions, even after the Ask NAEP session has ended at 3:30. If you cannot stay online for the full time, a complete Ask NAEP transcript will be available online after our discussion. I'm interested to hear which results you want to talk about, so let's get to your questions!

Diane from Albany, New York asked:
How do you assess that your reading passages are grade and age appropriate? Do you use readability formulas, teacher judgment, or educational expert judgment?

Dr. Peggy G. Carr's response:
Diane, thanks so much for your interest in today's release of NAEP 2009 reading results. Your question is a good one. Obviously, passages are a critical part of any reading assessment, and we take great care to make sure that all of our passages are appropriate. We use at least four sources of information to help us select passages to include in the assessment. First, passages are drawn from grade-appropriate sources -- in many cases, the same sources teachers draw from in their classrooms. Second, we analyze individual texts using a variety of readability formulae -- the exact combination of formulae we use is determined by the genre, grade, and length of text, as some formulae work better at different levels and with different genres. Third, all passages are reviewed and approved by our standing committee of reading experts (teachers, curriculum specialists, teacher educators, and researchers). And fourth, we try passages and items out with small groups of students even before we conduct a nationwide pilot of all passages and item sets. In addition to these rigorous evaluations of each passage's appropriateness, each passage undergoes several fairness reviews. I hope that this answers your question. Thank you again for your interest in NAEP.


Barbara from San Diego asked:
The 2009 NAEP contained more informational text than ever before. How did students perform on this text type? Is there a breakdown of performance in terms of the different types of informational texts found on the test?

Dr. Peggy G. Carr's response:
Barbara, the reading results released today are based on a composite reading scale composed of both literary and informational texts. Results for all three scales (composite, literary, and informational) can be found on the NAEP Data Explorer (NDE) at http://nces.ed.gov/nationsreportcard/naepdata/dataset.aspx. To obtain the informational subscale results, first choose the subject (reading) and grade (4th or 8th) on the primary NDE page (the first tab). The page will then refresh with a list of "measures" as the next selection option. From there you can click on any combination of "composite scale," "gain information," or "literary experience" and obtain results for the nation, states, and subgroups. Generally speaking, the national results for the informational scale are similar to those for the composite scale. No further breakdown of informational results by different types of informational texts is available. Additionally, NAEP released two blocks of assessment items, one of which is an informational block. The released blocks, including source items, percent-correct statistics, and exemplar responses, can be found in the NAEP Questions Tool at http://nces.ed.gov/nationsreportcard/itmrlsx/.


Alicia Miller from New York, NY asked:
How will the new national standards affect future reading assessments? Will the assessments be redeveloped and aligned to the new standards? If so, when do you anticipate this to happen? Thank you!

Dr. Peggy G. Carr's response:
The NAEP content frameworks are carefully developed in collaboration with states and content experts to reflect a broad national consensus on content. Our frameworks are generally revisited every ten years. However, NAEP is closely monitoring the development of the new core content standards and their adoption and implementation by the states. We are planning to study the degree of alignment between our current reading framework and the new standards.


Sara from Washington, DC asked:
When will the results for participating TUDA districts be available? We would like to be able to disaggregate the scores by school type - specifically public versus charter schools. Is it possible to do such an analysis at this point, and if not, when will it be possible? Many thanks, Sara

Dr. Peggy G. Carr's response:
Sara, the TUDA reading results are expected to be released in May 2010. Regarding the charter school results, those are currently available in the NAEP Data Explorer (NDE) at http://nces.ed.gov/nationsreportcard/naepdata/. To get to these data, select the subject and grade from the first tab and then select a jurisdiction (e.g., national public). Then proceed to tab #2 (select variables), where variables are listed under "categories" and "subcategories." Listed as the third entry under "Major Reporting Groups" and "School Factors" is the variable "school identified as charter." Select this variable and you will be able to obtain results for charter vs. non-charter schools.


Sam from Washington DC asked:
Why were the 12th grade reading scores not released at the same time? What are the implications of these scores for the goal of 100% proficiency by 2014?

Dr. Peggy G. Carr's response:
The grade 12 results for Reading and Math are planned to be released together in the latter part of 2010. Be sure to watch for that release because it will be the first time we have state-level results to report at grade 12 for a group of 11 states that participated in our 12th grade state pilot assessment. With respect to proficiency, I believe you may be referring to the NCLB goal of 100% proficiency, which actually refers to the performance of students on the state assessment, not the performance of students on the NAEP assessment. Different states have set different proficiency standards and they may or may not look like NAEP's "Proficient" achievement level.


Jeff from Washington, DC asked:
Why is the National Public average different in the state profiles, NDE, and snapshot state reports? For 4th grade mathematics I see average national scores of 220, 221, and 239.

Dr. Peggy G. Carr's response:
Jeff, the national public score for mathematics is 239, and the national public score for reading is 220. The score of 221 is the overall reading score for the nation including private schools. When comparing with states, be sure to use the national public results because state results are based only on public schools.


Gene Pettinelli from Boston, MA asked:
The definition of proficient is hard to relate to everyday use. For example, if someone is NOT proficient can they read the front page of the New York Times and understand it all? Thanks

Dr. Peggy G. Carr's response:
Hi Gene. To see how this definition relates to what students performing at a given level can do, it may be helpful to refer to the achievement level descriptions that have been developed. The achievement level descriptions were updated in 2009 to reflect the new reading framework, and can be viewed at http://nces.ed.gov/nationsreportcard/reading/achieveall.asp. For instance, when reading informational texts, fourth-grade students performing at the Proficient level should be able to locate relevant information, integrate information across texts, and evaluate the way an author presents information. An additional source of information about what students performing at each achievement level can do is the item map that appears in the 2009 Reading Report Card for each grade (page 19 for grade 4, page 37 for grade 8). You can also view the item maps at http://nces.ed.gov/nationsreportcard/itemmaps/index.asp. In each item map, you can see descriptions of NAEP assessment items that students at various performance levels are likely to answer correctly.


David from Alexandria, VA asked:
This is a great forum, but why do answers to questions take so long to appear?

Dr. Peggy G. Carr's response:
I'm glad that you find this forum useful, but I'm sorry, David, I can't answer questions as they come in because of the process we have in place. Although we open up our website for questions well in advance, we don't actually begin answering until the afternoon when the Ask NAEP session is scheduled to begin. But, thanks for your patience and I'm glad you are taking part.


Isaac Bernard from Indianapolis, IN asked:
When will the science results be released?

Dr. Peggy G. Carr's response:
Glad you are interested in the results of the new science assessment we administered last year, Isaac. The release of the national and state results from that assessment is anticipated for fall 2010. The release of the 2009 science results for the trial urban districts is expected to follow 4 to 6 weeks later.


Neil Snyder from Rockville, MD asked:
Where are children most deficient in their reading skills according to NAEP? Does the NAEP test reflect students' oral language skills and their ability to process language?

Dr. Peggy G. Carr's response:
Neil, while some state and district tests do report subscale results for skills like oral language, the NAEP reading assessment focuses on students' overall reading comprehension and their comprehension of different types of texts--literary and informational. So, while you can look at subscale results for those two text types through the NAEP Data Explorer (NDE) (http://nces.ed.gov/nationsreportcard/naepdata), the assessment is not constructed to allow for analyzing other subskills in which children may demonstrate weaknesses. I should point out that the questions we ask students in the reading assessment are developed to engage students in various processes of reading like "locate and recall," "integrate and interpret," or "critique and evaluate." Although we don't report summary results across those types of questions, you can examine student performance on examples of these question types in the NAEP Questions Tool (http://nces.ed.gov/nationsreportcard/itmrlsx). Looking at how students responded to individual sample questions can be very interesting and revealing of what students are able to do.


Linda Cervantes Hoslins from Merced, CA asked:
Do you feel that very young children, who are developing reading skills and are learning a new language, are having difficulties learning to read? If not, how do we know this is not so? If so, what best practices are in place to help these children keep pace and not fall behind?

Dr. Peggy G. Carr's response:
Linda, thank you for your interest in today's release of the 2009 NAEP reading results in the Nation's Report Card. NAEP results are reported at the 4th, 8th, and 12th grades, so we cannot provide you with data on the literacy development of very young children. However, we can direct you to two sources that you might find helpful. First, the What Works Clearinghouse (WWC) is an initiative of the U.S. Department of Education's Institute of Education Sciences (IES). The WWC assesses the rigor of research evidence on the effectiveness of intervention programs to provide educators with the tools they need to make informed instructional decisions, and it produces user-friendly guides and recommendations for classroom practices. You can access the WWC at http://ies.ed.gov/ncee/wwc/. Second, the National Center for Education Statistics (NCES) has an Early Childhood Longitudinal Study with one cohort that has been followed from birth and another followed from kindergarten. Its website is http://www.nces.ed.gov/ecls. You might be particularly interested in one of its reports, Children's Reading and Mathematics Achievement in Kindergarten and First Grade.


Miriam from New York, NY asked:
Will the assessments change with the new common reading standards?

Dr. Peggy G. Carr's response:
Hello Miriam, your question is very timely and one that I'm sure many others have. Although the NAEP standards bear some similarities to the new Common Core Standards, those efforts are distinct from the NAEP program. NAEP will continue to serve as an independent indicator while it monitors the Common Core Standards to determine whether its own role should change.


Tyler from San Diego, CA asked:
As near as I can tell, NAEP assessments do not ask how long a test subject has been a student at the school or within the district where he/she is taking the test. To better determine and understand specific school/district effects, and to control for mobility effects, would it not be helpful to disaggregate results based on how long the test subject has actually been a student in the school/district?

Dr. Peggy G. Carr's response:
Hello Tyler, your question brings up a point that is relevant to evaluating the effectiveness of a school or a district. That type of analysis, however, would require individual student scores, and the NAEP assessment does not produce scores for individual students or schools. Additionally, NAEP produces district-level scores only for the select urban districts that participate in our Trial Urban District Assessment (found in the TUDA reports). The NAEP assessment is, however, well designed to act as a "snapshot" of educational achievement at a particular point in time; this is done by assessing a sample of students rather than every student. Although the current performance of the sample is compared to earlier scores, the NAEP assessment does not follow students as they progress through grades or change schools.


Lindsay Weil from New York, New York asked:
How many students were at the Basic level and how many were at the Below Basic level on the NAEP reading test? The data on the website show performance at each level or higher.

Dr. Peggy G. Carr's response:
Lindsay, thank you for your interest in the achievement of our nation's students. Sixty-seven percent of our nation's fourth-graders (approximately 2,444,000 students) performed at or above the Basic achievement level. The remaining 33% of fourth-graders (approximately 1,204,000) were below Basic. For eighth grade, 75% (approximately 2,770,000 students) performed at or above the Basic level. The remaining 25% (approximately 923,000) were below Basic.
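As a rough illustration of the arithmetic behind these figures, the short Python sketch below back-calculates the implied population totals from the counts above and checks that they reproduce the reported percentages; the totals are approximations implied by the rounded counts, not official NAEP enrollment numbers.

```python
# Back-of-the-envelope check of the figures above. The implied population
# totals are approximations from rounded counts, not official NAEP numbers.
grade4_total = 2_444_000 + 1_204_000   # implied number of fourth-graders represented
grade8_total = 2_770_000 + 923_000     # implied number of eighth-graders represented

print(f"Grade 4 at or above Basic: {2_444_000 / grade4_total:.0%}")  # ~67%
print(f"Grade 4 below Basic:       {1_204_000 / grade4_total:.0%}")  # ~33%
print(f"Grade 8 at or above Basic: {2_770_000 / grade8_total:.0%}")  # ~75%
print(f"Grade 8 below Basic:       {923_000 / grade8_total:.0%}")    # ~25%
```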


Bob from Brookeville, MD asked:
Good afternoon. Did you get my question submitted a couple of days ago about cohort achievement gaps?

Dr. Peggy G. Carr's response:
We did, Bob. We are in the process of answering your question to make sure you get a thorough, accurate response; it will be posted very soon!


Daniel from Language Magazine, Los Angeles California asked:
On page 11 of the report, 4th grade results are listed according to ethnicity. All groups show little or no progress since 2007 but some gains since 1992 except for the American Indian/Alaska Native category which shows a drop of 7 points since 1994 (minus 10 points since 2000), yet the result is dismissed as statistically insignificant. A 10-point shift would be significant in all other groups so does that mean that all results for American Indian/Alaska Native are statistically insignificant?

Dr. Peggy G. Carr's response:
Daniel, it's important to keep in mind that we use the term "significant" in only a statistical sense. It is an indication of how reliable the results are, not an indication of their practical or educational significance. The American Indian/Alaska Native score change since 1992 appears large, but it is nevertheless not larger than the margin of error around the score. Whether a 7- or 10-point shift is significant depends on the margin of error around the estimated change. Because American Indian/Alaska Native students are a relatively small group concentrated in a few areas, the margins of error for this group tend to be large. As a result, only large differences can be detected as significant. In addition, in 1992 NAEP had a much smaller national sample than it has had since 2003, so changes since 1992 for many groups are also not significant due to the smaller samples. It is worth noting, however, that not all results for American Indian/Alaska Native students are statistically insignificant. For example, in 2009 the difference between American Indian/Alaska Native students in the nation and in Alaska was 13 points and statistically significant.
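For readers who want to see the margin-of-error logic concretely, here is a minimal Python sketch. The standard errors in it are hypothetical values chosen only to show why the same point change can be detectable for a large group but not for a small, geographically concentrated one; it is a simplified z-test, not NAEP's full methodology.

```python
# Minimal sketch of the "margin of error" logic described above. The standard
# errors are hypothetical, not actual NAEP values.
import math

def score_change_significant(score_then, score_now, se_then, se_now, z_crit=1.96):
    """Return the change, its margin of error, and whether the change is
    detectable at roughly the 95% confidence level."""
    change = score_now - score_then
    se_change = math.sqrt(se_then**2 + se_now**2)  # SE of a difference of independent estimates
    margin = z_crit * se_change
    return change, round(margin, 1), abs(change) > margin

# Large group (small standard errors): a 7-point drop is detectable.
print(score_change_significant(217, 210, se_then=1.0, se_now=1.0))  # (-7, 2.8, True)

# Small group (large standard errors): the same drop is within the margin of error.
print(score_change_significant(217, 210, se_then=4.0, se_now=4.0))  # (-7, 11.1, False)
```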


Young from Bethesda, MD asked:
I noticed the 2009 Reading assessment is based on a new framework developed by NAGB. In other subjects, such as Math, the introduction of a new framework meant that trend data could not be reported, as I recall. How was the Reading trend data maintained despite the new framework?

Dr. Peggy G. Carr's response:
Young, you're correct that, in the past, a new trend line was typically established when a new framework was introduced for a NAEP assessment. However, special analyses were conducted in 2009 to determine if the results from the 2009 reading assessment could be compared to results from earlier years despite being based on a new framework. These special analyses included a content alignment study that compared the old and new frameworks and items, as well as a reading trend study that involved administering items from both the old and new assessments to students in 2009. Based on those analyses, we determined that the 2009 reading assessment results could be compared to those from earlier assessment years. You can read more about the special analyses at http://nces.ed.gov/nationsreportcard/reading/trend_study.asp.


Bob from Brookeville, MD asked:
Has the 8th grade gap between black students (or Hispanic students) and white students closed in 2009 compared to the 4th grade gap for that same cohort back in 2005? Has anyone looked at cohort gap changes over time? Say for example, 4th graders in 2003 compared to 8th graders in 2007? Or 4th graders in 2005 compared to 8th graders in 2009? And not just in reading, but in math and science, too?

Dr. Peggy G. Carr's response:
Hi Bob, thank you for your patience while you waited for your answer! This is an interesting empirical question that you ask. NCES has not examined achievement gaps over time for the same student cohort. However, such an analysis was carried out more than a decade ago by educational researchers using NAEP data, and it could be repeated using the NAEP raw data files. We must caution you, though, that there are a number of psychometric challenges and demographic changes that would make the cohorts less comparable over time. For example, we know from our most recent NAEP results that the percentage of Hispanic students has almost tripled since the earliest assessment year, whereas the percentage of White students has fallen from about 73 percent to 56 percent.


Karen from Pennsylvania asked:
The Executive Summary mentioned that an expert review had determined that results gained under the previous Framework and the new 2009 Framework were comparable. Can you point me to the report which details that expert review/study? Thank you.

Dr. Peggy G. Carr's response:
Karen, more information about the special analyses into whether the results from the 2009 reading assessment could be compared to results from previous assessment years is available at http://nces.ed.gov/nationsreportcard/reading/trend_study.asp.


David from Irvine, CA asked:
To what degree do NAEP results reflect actual student reading achievement in a meaningful way?

Dr. Peggy G. Carr's response:
Hi David. NAEP uses several approaches to ensure that the NAEP reading assessment results are both valid and meaningful. First, the National Assessment Governing Board consults widely with reading experts and state departments of education around the country to ensure that the most current reading research is reflected in the content of the reading assessment. Thus the NAEP reading assessment reflects content validity according to experts as well as scientific research on the skills required to read well. Second, the NAEP reading assessment includes some multiple-choice questions, but at least half of the testing time is devoted to questions that require students to do more than simply choose an answer from a list. Instead, they must construct their own answers that show their ability to develop and interpret the meaning of the written texts in the assessment booklets. Third, NAEP has for many years supported studies of the validity of NAEP's measurement of reading and mathematics. The papers and publications derived from this program of research are available at http://nces.ed.gov/nationsreportcard/researchcenter/nvspapers.asp. For example, one study compared how well NAEP's mathematics assessment could detect the impact of an instructional program relative to a test that was specifically designed for that purpose; NAEP's mathematics assessment was just as good as, if not better than, the other test. Unfortunately, no such comparison has yet been done in reading. In short, great effort is devoted to ensuring that NAEP results do reflect actual student achievement, measured in a way that is substantively meaningful.


Karyn from Pittsburgh asked:
The full report mentions that the NAEP 2009 Reading Assessment included a systematic study of meaning vocabulary, but then never reported any findings related to that issue. Can we anticipate a separate report on this topic, and if so, when might that be available?

Dr. Peggy G. Carr's response:
Thank you for your question about the NAEP reading results that were released today in the Nation's Report Card. This is the first year we have assessed "meaning vocabulary" as part of the reading assessment. The 2009 NAEP Reading Framework describes this new means of assessing vocabulary as follows: "...The 2009 NAEP Reading Framework recommends a more systematic approach to vocabulary assessment than previous frameworks. Vocabulary assessment will occur in the context of a passage; that is, vocabulary items will function both as a measure of passage comprehension and as a test of readers' specific knowledge of the word's meaning as intended by the passage author. A sufficient number of vocabulary items at each grade will provide reliable and valid information about students' vocabulary knowledge..." The 2009 assessment marks the first national use of these new vocabulary item types. We began evaluating the strength of a "Vocabulary Scale" using pilot data from 2007, and over the next several months we will continue those analyses using data from the 2009 assessment. Preliminary analyses suggest that reporting a separate vocabulary score may be achievable; if so, we may begin reporting vocabulary results as early as 2011. I hope that this answers your question. You can access the full text of the Reading Framework for the 2009 National Assessment of Educational Progress at http://www.nagb.org/publications/frameworks.htm. Thank you again for your interest in NAEP.


Sally Holland from Washington, DC asked:
Was there a trend as to what types of questions the 4th and 8th grade students had problems answering?

Dr. Peggy G. Carr's response:
Sally, the NAEP scores in the 2009 reading report summarize students' performance across all types of questions in the assessment. In terms of performance on different types of questions, students generally tend to perform better on multiple-choice items than on constructed-response items that require them to write out their answers. Beyond that, we have some resources that provide information about students' performance on individual questions. For example, it may be helpful to look at item maps, which show descriptions of NAEP assessment items that students at various performance levels are likely to answer correctly. You can view the item maps at http://nces.ed.gov/nationsreportcard/itemmaps/index.asp. In addition, you can view a selection of items from the 2009 reading assessment in the NAEP Questions Tool at http://nces.ed.gov/nationsreportcard/itmrlsx/landing.aspx. In the Questions Tool, you'll see information about the content of each item (e.g., whether it corresponds to a literary or informational text) and the type of reading process it was intended to measure (e.g., locate and recall, integrate and interpret, or critique and evaluate).


Patrick from Spartanburg, South Carolina asked:
How is the Nation's Report Card aligned with the No Child Left Behind expectations for early reading intervention at the elementary level? Where can I locate a copy of the Nation's Report Card report?

Dr. Peggy G. Carr's response:
Patrick, NAEP is best viewed as an independent monitor of the performance of the nation's educational systems. And this is one reason it serves a valuable role as part of the No Child Left Behind legislation. Within this context, it's an important finding that the average reading score for fourth graders did not change from 2007 to 2009. You can download a copy of the Reading Report Card at http://nces.ed.gov/nationsreportcard/pubs/main2009/2010458.asp. Also, you may want to look at our website which provides an array of data displays and interactive tools to examine all the results in some depth (http://nationsreportcard.gov).


Mark from Frankfort Kentucky asked:
Hi Peggy, Education Week on March 17, 2010 quoted the following:
"We know that there is a non-significant relationship between exclusion rates and student scores," said Peggy Carr, the Associate Commissioner for NCES Assessment Division. "We do know, however, that there is a moderate, mostly significant relationship between changes in exclusion rates and changes in scores."

Kentucky's exclusion rates did not change much, but I was asked a question regarding the second quote about changes. Is there further information or a document that explains significant relationship between moderate changes in exclusion rates and changes in scores? Thanks in advance to NCES and NSSC for any feedback, if available.

Dr. Peggy G. Carr's response:
Hello Mark, and thank you for your question. As part of each assessment, the National Center for Education Statistics (NCES) investigates the impact that exclusion has on the average score for the nation and for each state. To explore the impact of exclusion, we can look at the data in at least two ways. We can focus on a single year and ask, "Is there a relationship between the percentage of students excluded and NAEP scores?" Or, we can look at two years of data and ask, "If a state makes a substantial change to the percentage of students excluded from one assessment to the next, will this affect its average score on NAEP?" In response to the first question, in any one year there is no consistent relationship between a state's score and the percentage of students excluded from NAEP. In response to the second question, looking across two assessment years (e.g., 2007-2009), we do find that some states that excluded more students in the most recent year saw slightly more improvement than states that excluded fewer students. So, while there is no systematic relationship between the percentage of students excluded and performance on NAEP in any single year, states that increase the percentage of students they exclude from one assessment to the next may observe a slight increase in performance. NAEP also produces results for the full population (that is, results that include estimates for excluded students) for each jurisdiction in each state assessment year. These full population estimates are designed to reflect the impact of state exclusion rates on their scores. These data are available for review during each release of state NAEP results and can be viewed at http://nces.ed.gov/nationsreportcard/about/2009fpe4r.asp and http://nces.ed.gov/nationsreportcard/about/2009fpe8r.asp.
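To illustrate the kind of cross-state relationship described in the second question, the short Python sketch below computes a simple correlation between changes in exclusion rates and changes in average scores; every number is made up for demonstration, and this is not the analysis NCES actually performs.

```python
# Hypothetical illustration only: correlate state-level changes in exclusion
# rates with changes in average scores. All numbers below are invented.
from statistics import correlation  # requires Python 3.10+

# Change in percentage of students excluded, 2007 to 2009 (percentage points)
exclusion_change = [1.5, -0.5, 2.0, 0.0, -1.0, 0.5]
# Change in average reading scale score over the same period (points)
score_change = [2.0, 0.0, 3.0, 1.0, -1.0, 0.0]

r = correlation(exclusion_change, score_change)
print(f"Correlation between exclusion-rate changes and score changes: r = {r:.2f}")
```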


Dr. Peggy G. Carr:
Thanks for all the excellent questions. Unfortunately, I could not get to all of them, but please feel free to contact NAEP staff or me for more information. I hope that you found this chat helpful and the report interesting.
