
TRANSCRIPT: The Nation's Report Card - Results from the 2008 Trends in Academic Progress

Dr. Peggy G. Carr: Good afternoon, and welcome to our StatChat on the long-term trend 2008 report. I hope you've had time to examine the results and that I can answer any questions you may have. There are many findings for both subjects and all three age groups. I'm interested to hear which results you want to talk about.

Before we begin answering your questions, I would like to address several questions that have been raised regarding the finding for 17-year-olds that average scores for White, Black, and Hispanic students have all increased since the 1970s—while, at the same time, the average scores for all students have remained relatively flat. This is the result of changes in the demographic makeup of the 17-year-old population during the last four decades. Despite the gains made within each of these three student groups, over time an increasing proportion of the population is represented by student groups who tend to score lower, on average.

To provide some specifics — in 1975 the make-up of the 17-year-old population was as follows: 84% White, 11% Black, and 3% Hispanic. By 2008, the make-up of the 17-year-old population was 59% White, 15% Black, and 18% Hispanic.

Although Black and Hispanic students have made significant gains like their White peers during that time period, their average group scores remain lower than those of White students. Consequently, their increasing representation within the overall population has the effect of masking overall gains, even as each individual group within the population is improving.

This is a phenomenon referred to as Simpson's Paradox - something that can often be observed in statistical data sets. Let me provide another example of how the phenomenon may work. Imagine that you are a teacher with a classroom that in one year has 25 White students, 3 Black students, and 2 Hispanic students. In that year the average score for White students was 80, the average score for Black students was 50, and the average score for Hispanic students was also 50. Overall, the average score for your classroom was 75.

Several years later, the demographic make-up of your classroom has changed significantly. You now have 18 White students, 3 Black students, and 9 Hispanic students. You notice that each group is making gains on the same test; their respective average scores are 85 for White students, 55 for Black students, and 55 for Hispanic students. The interesting thing here, however (and an example of Simpson's Paradox), is that when you calculate the overall average for the classroom, it has actually dropped to 73 points.
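For readers who want to reproduce the arithmetic, here is a minimal Python sketch of the classroom example above. The numbers are the hypothetical ones from the example, not actual NAEP data.

```python
# Simpson's Paradox illustration using the hypothetical classroom numbers above
# (not actual NAEP data).

def overall_average(groups):
    """Weighted average across groups, given (number_of_students, mean_score) pairs."""
    total_students = sum(count for count, _ in groups)
    total_points = sum(count * mean for count, mean in groups)
    return total_points / total_students

# Year 1: 25 White students averaging 80, 3 Black students averaging 50,
# 2 Hispanic students averaging 50.
year1 = [(25, 80), (3, 50), (2, 50)]

# Several years later: 18 White students averaging 85, 3 Black students
# averaging 55, 9 Hispanic students averaging 55.
year2 = [(18, 85), (3, 55), (9, 55)]

print(overall_average(year1))  # 75.0
print(overall_average(year2))  # 73.0 -- each group improved, yet the overall mean fell
```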

Peter from Westminster, Maryland asked:
In the area of mathematics, can the rise in scores be attributed to the accountability requirements of ESEA of 2001?

Dr. Peggy G. Carr's response:
Peter, NAEP data are a snapshot of existing conditions and are not designed to support causal conclusions. However, the data reveal interesting patterns for policymakers and analysts such as yourself to consider.


Doreen from Cary, NC asked:
Why do you assess at ages 9, 13 and 17 for this report?

Dr. Peggy G. Carr's response:
Doreen, NAEP long-term trend assessments have a history that goes back to about 1970. The founders of NAEP at that time were aware that, for any given grade, there would be variation across states, and over time, as to the age of students in that grade. This is due to differing policies and practices with regard to admission to kindergarten and first grade, and to grade promotion and retention. By defining the populations in terms of age, rather than grade, one can make a fairer comparison over the long term as to the educational progress the nation is making, not subject to variations over time in the age at which students enter school and progress through the grades. Therefore NAEP continues to use these age definitions for its long-term trend assessments. Other NAEP assessments, which do not track trends over several decades, are based on grades.


Peter from Bloomington, Indiana asked:
The 2008 mathematics results follow the trend we've seen for years of growth at ages 9 and 13 but no real change at age 17. Given that high school students are taking more and harder high school mathematics courses than their counterparts of the 1970's and 1980's, what could explain the lack of improvement in the NAEP scores at age 17?

Dr. Peggy G. Carr's response:
Peter, this is a good question and a difficult one to answer with the NAEP data. It is true that more students are taking higher-level courses (pre-calculus or calculus) now than in the 1980's and 1990's (as shown on page 45 of the report). The fact that this did not translate into larger scale score gains may be somewhat explained by the mixed performance of the students who are taking higher-level math in 2008 relative to earlier years. Of course, there are many other factors, such as instruction time spent on math, course content over the years, and population characteristics. The NAEP Data Explorer provides assessment results for many relevant questions such as these. For example, course-taking can be found via the "search" function and is listed under "Student Factors" and then "Academic Record and School Experience." The NDE can be reached at: http://nces.ed.gov/nationsreportcard/naepdata/


Elizabeth from Virginia asked:
Are there any conclusions to be drawn from the low response rates that kept data for 17-year-olds in private schools out of the report? How does that affect the overall statistics on improvement for 17-year-olds?

Dr. Peggy G. Carr's response:
Elizabeth, the response rate for non-Catholic private schools was not satisfactory for the 17-year-old sample and did not meet NCES standards (although it is important to note that, in those private schools that did participate, student response was high). However, this is not a serious threat to the OVERALL results for 17-year-olds, for two reasons. First, we checked that the characteristics of the private schools that did not participate are not greatly different from those that did. And second, since non-Catholic private schools enroll fewer than five percent of the nation's 17-year-olds, their data do not have a substantial impact on the overall results. The weighting process that is applied to the data ensures that the contribution of private school students to the NAEP results reflects their population size, so it is not the case that private schools are underrepresented in the reported data.


Robert from Baltimore, MD asked:
I find it interesting that the website has information for policymakers, teachers, and parents, but not for students. Given the increase in web savvy among today's youth and teenagers, do you plan to implement additional online resources for teenagers in the near future? I would think that sample tests, tips, etc., would be valuable for them.

Dr. Peggy G. Carr's response:
This is a compelling point, Robert, and NCES is interested in making its information more useful for all users. There are currently many sources of information available for users about the NAEP assessment. Questions that have been used on past assessments, for various subjects, can be found at http://nces.ed.gov/nationsreportcard/itmrls/startsearch.asp. As for the test itself, scores are not reported at an individual level, and group means are used to get an overall picture as to how students are doing. Providing tips or study guides for the NAEP assessment might distort what the actual performance of student groups would be.


John from Ithaca, New York asked:
How do the NAEP trend findings since 2000 compare to the PISA and TIMSS findings for the United States?

Dr. Peggy G. Carr's response:
The Trends in International Mathematics and Science Study (TIMSS) provides data on the mathematics and science achievement of U.S. students compared to students in other countries. TIMSS data were collected in grades 4 and 8 in 1995, 1999 (grade 8 only), 2003, and 2007. The Program for International Student Assessment (PISA) provides data on reading, mathematics, and science literacy among 15-year-olds. The most recent results are for 2006, with trends back to 2000. Results for the three assessments cannot be directly compared, as they target slightly different populations and cover different content. For example, the long-term trend assessment focuses on computation skills, while TIMSS also includes problem-solving skills. However, examining patterns of change over time across separate assessments can provide useful information. Long-term mathematics trends showed some similarities to the TIMSS and PISA trends. For example, fourth-graders showed an increase in TIMSS scores between 2003 and 2007, similar to long-term trend mathematics for 9-year-olds. PISA mathematics literacy did not show a change between 2003 and 2006 among 15-year-olds. For long-term trend mathematics, 13-year-olds did show an increase between 2004 and 2008; 17-year-olds, however, did not.


Eric A Nelson from Falls Church, Virginia asked:
The largest increase in scores since 2004 was found in Age 9 reading, where there were significant gains for all students, but especially for Black and Hispanic students. Economically disadvantaged students (who are disproportionately minority students) in Grades K-3 were a special focus of the federal Reading First program from 2002 to 2008. Students at a higher age were not in the group that received most Reading First assistance. Are the scores evidence that Reading First worked?

Dr. Peggy G. Carr's response:
Eric, you are correct that, overall, 9-year-olds made significant gains in reading, with even larger gains for Black and Hispanic students. The NAEP data also show that 9-year-old students at the lower percentiles showed greater gains in reading than upper-percentile students. However, the NAEP program is not designed to evaluate the effectiveness of Reading First or other reading interventions.


John Krumich from Warrenton, Virginia asked:
Does this assessment include data from students in independent and parochial schools? How about home-schooled children?

Dr. Peggy G. Carr's response:
For the long-term trend assessment, students from public, private, and parochial schools are included in the sample. Unfortunately, home-schooled students are not included in the sample.


Hazel from Mitchell, SD asked:
Do you find that the digital divide is helping or hurting reading scores for students over time? Do students with computers at home do better overall, worse overall, or is there no significant difference? Were grammar and writing taken into account?

Dr. Peggy G. Carr's response:
From the NAEP Data Explorer, you will find that students with computers at home generally performed better on reading than students without computers. This was true for all age groups for both 2004 and 2008. Also, the gap between these student groups did not change significantly between 2004 and 2008. Responses are scored for content, not spelling, handwriting, or grammar. However, NAEP was designed to provide a snapshot of student performance at a particular point in time. NAEP provides observational data that cannot be used to determine causal relationships. For example, we don't know if students with home computers perform better on reading because the computer helps them with their reading or because these students were rewarded with computers by their parents because of their efforts in school. Another possibility is that home computers reflect higher socioeconomic status and parents who are more involved in their children's success in school. With NAEP data we cannot identify the causal factor.


Mary from Natchez, MS asked:
I would like to suggest researching the possibility of having pre-school and elementary teachers declare college majors in specific subject areas, e.g., Elementary Math & Science or Elementary Reading & Language Arts. I firmly believe that this would help address the widespread weaknesses seen in student performance, particularly in math and reading.

Dr. Peggy G. Carr's response:
Data from our main NAEP assessment show that teachers' college majors appear to have some relationship to students' mathematics performance; however, there are grade-level differences. At grade 4, students whose teachers had a college major in mathematics education or education outperformed those students whose teachers had a major in a field other than education, mathematics education, or mathematics. At grade 8, it was the students of teachers with a college major in mathematics who outperformed students whose teachers had a college major in education or a field other than education, mathematics education, or mathematics.


Kristi from Sacramento, CA asked:
Critics continue to quote 1990s-era reports from the U.S. Government Accountability Office and the National Academy of Sciences that the method for setting NAEP's achievement levels is "fundamentally flawed." Has the process been improved in the interim? Why should the public have confidence in the test's results?

Dr. Peggy G. Carr's response:
Your question is not relevant to the NAEP long-term trend data that were just released. It's not relevant because NAEP's long-term trend assessment does not use achievement levels in reporting results. Instead, the long-term trend results are reported according to "performance levels," which are not goals for U.S. students, were set in the early 1980's, pre-date the National Assessment Governing Board's achievement levels, and were never evaluated by the Government Accountability Office (GAO) or the National Academy of Sciences (NAS). You may wish to ask your question again in the fall, when NCES reports results for the main NAEP assessment of mathematics (This assessment does report results according to achievement levels). The mathematics achievement levels remain in a trial status, as the NAEP legislation requires, until the NCES Commissioner makes a determination, based on a congressionally mandated evaluation, that the achievement levels are valid, reliable, and informative to the public. Previous NCES Commissioners have not made such a determination, so each NAEP report continues to state that the achievement levels remain in a trial status and should be interpreted and used with caution. Nevertheless, the achievement levels, since they represent constant points on the NAEP scales, are useful for marking changes in the percentages of students reaching each achievement level. The process for setting NAEP's achievement levels has been improved over the years. Recent achievement levels, such as those for civics, U.S. history, and grade 12 mathematics and economics, use a different achievement level setting process that has resolved some of the problems that those evaluations of the 1990s identified.


Finnegan from Washington, DC asked:
It seems that the lower percentage of students reading for fun is probably due to the advent of new forms of entertainment (ipods, computers, video games, etc). So it's all well and good that we know this is happening, but what can be done about it? It's a disturbing trend and it seems that the data can only help if something is done to reverse the trend.

Dr. Peggy G. Carr's response:
The NAEP long-term trend data do in fact indicate a decrease in reading for fun at all ages (9, 13, and 17). NAEP, however, is not designed to provide cause-and-effect relationships that might guide policy recommendations. Interested citizens, educators, and policymakers can use NAEP data to identify existing conditions such as the one you have identified. These data can be used as a platform for discussion.


Jason from Arlington, VA asked:
Since the long-term trend in math scores for 17-year-olds is a mostly level line, will there be a look at how math is taught to students instead of just how much math is taught to them? Is the rigorous math content being taught relevant to students' experiences and career goals, or are students just being taught more math in a vacuum?

Dr. Peggy G. Carr's response:
The long-term trend assessment does not evaluate curriculum and instruction. However, a new Mathematics Curriculum Study, which will be released in the fall of 2009, examines the content that students are being exposed to in algebra and geometry courses. In addition to assessing what students know and can do in mathematics, NAEP collects information on a variety of background factors, including a variety of questions about instructional content and practice. Although the 2008 long-term trend mathematics assessment did not include such questions, information about instructional content and practice can be found in the NAEP Data Explorer for the long-term trend assessment from 1978 through 2004, and for the 2005 grade 12 main NAEP mathematics assessment. Results for the 2009 grade 12 mathematics assessment will be available next year.


Marta from Bakersfield, Ca asked:
What were the results comparing English Learner students and English Only students in California and throughout the nation in the reading portion of the NAEP exam? Is the academic achievement gap decreasing among English Learner students?

Dr. Peggy G. Carr's response:
Hello, Marta. While we do not report state-level results as part of the long-term trend assessments, we do have national results for students who were classified by their schools as English Language Learners (ELL) and their non-ELL peers. Our ability to look at trends by these categories of students, however, goes back to only 2004. Before that time, we did not have a sufficient number of ELL students in the sample in order to estimate their average scores. That being said, the 2008 long-term trend results show that both ELL and non-ELL 9-year-olds made significant gains in reading. We did not, however, see any closing of the gap between these two student groups from 2004 to 2008 at any of the three age levels we assessed.


Mark from Austin, TX asked:
Hello. I'm curious to know if the sampling included students from all states. I imagine it did, but I wanted to confirm. I'm from the Southwest, so I'm particularly interested in which states from the Southwest contained students that were included in the test. Also, as a Latino man, I'm concerned that it appears academic progress has slowed in comparison to previous generations of students (in the 70's and 80's). How would you suggest we in the Latino community make use of this information? Thank you.

Dr. Peggy G. Carr's response:
Mark, the samples represented all 50 states and the District of Columbia (but not Puerto Rico or other U.S. territories). The process of sampling the schools is designed to ensure that all kinds of schools and students are represented from across the country. Since there are only a few hundred schools in each sample, it is not guaranteed that schools from every single state are included. By chance, some smaller states may not have any schools in the sample. The results for Latino students thus represent the broad population of Latino students, from all parts of the country. That population includes students whose families have lived in the U.S. for many generations, as well as those who have only recently mastered English well enough to take an assessment like this (a bilingual Spanish-English version of the mathematics assessment was available to those English Language Learners who needed it). This may make comparisons with previous generations of Latino students difficult, because the composition of the Latino population of the U.S. is changing over time, and rapidly so in recent years. But one can still use the results to judge how Latinos as a group are performing in reading and mathematics, compared with other groups in the population. And I think the evidence from the report is that Latino students are continuing to make substantial gains in achievement. Other NAEP assessments in other years (such as 2007, with assessments in reading and writing) provide state-level results and have much larger samples overall, permitting some useful comparisons among different Latino subgroups.


Geoff from Washington, DC asked:
How do the NAEP long term trends compare with the trends in public education spending since the 1970s? Do you believe it is fair to consider both data sets in the same context?

Dr. Peggy G. Carr's response:
One source for trends in spending would be the Digest of Education Statistics (http://nces.ed.gov/programs/digest/d07/tables/dt07_026.asp). NCES has not made such a comparison, though with the right data and methodology you could probably analyze the relationship between the two and report on this yourself. However, NAEP is not designed to attribute causes to the changes in scores that we track. I would not recommend attributing the changes in scores to the one factor of changes in educational spending.


stephan from mclean, va asked:
As in the past, black 17-year-olds have a lower average score than white 13-year-olds. Are the tests given and the scaling such that we can say that blacks at 17 are 4 years behind whites? There is some current controversy about such claims.

Dr. Peggy G. Carr's response:
This is a very good question, Stephan. It is important to note that even though the average scores for 9-, 13-, and 17-year-olds are on the same scale, such comparisons across ages and across subgroups are not supported by the data and are discouraged. The content of the 13- and 17-year-old assessments is different. This means that a particularly able student at age 13 may perform well on age 13 math content, but would likely not perform as well on more complex age 17 math content.


Dr. Denise Morrow from Oak Park, Michigan asked:
Dr. Carr, as a college instructor new to the discussion of high school performance, I have the following questions: 1. Teaching models have been presented as changing, specifically in the attempt to use manipulatives in math. What factors or analyses, if any, were used to adjust for those changes? 2. Given the 'no significant change' status and the various programs and philosophies that exist in teaching, does it concern you that there is not a significant change? Why?

Dr. Peggy G. Carr's response:
The NAEP long-term trend mathematics assessment has remained essentially unchanged since it was first administered in 1973. No manipulatives are used in this assessment. However, the main NAEP mathematics assessment has some sections that allow the use of a calculator (four-function for grade 4 and scientific or graphing for grades 8 and 12), a ruler (grade 4) or ruler/protractor (grades 8 and 12), or other manipulatives such as geometric shapes or spinners. To answer the second part of your question, it is important to examine the data beyond overall averages. White, Black, and Hispanic students all showed improvement, but White students continue to score higher than Black and Hispanic students. During the past 30 years, there has been a shift in the student population, with the proportion of Hispanic students increasing and the proportion of White students decreasing. While Hispanic students' average scores are increasing, the fact that they make up a larger proportion of the population can result in a level overall score. This can lead to seeming inconsistencies between aggregate (overall) and subgroup-level results. Despite the gains made within each subgroup, a greater proportion of the population comprises subgroups that tend to score lower, on average. It is the increase in the proportion of lower-performing groups that suppresses growth of the overall average score.


Peter from Bloomington, Indiana asked:
As a follow-up to my earlier question about the lack of improvement by 17-year-olds, I notice that released age-17 LTT items on the NAEP website focus exclusively on elementary and middle school mathematics. There are, for example, no items that assess what is normally taught in high school algebra or geometry. What are the chances that 17-year-olds know more math than they did in the past but the lack of high school level items on the assessment means that NAEP does not pick up that knowledge?

Dr. Peggy G. Carr's response:
Peter, it is true that the content of the long-term trend mathematics assessment for 17-year-olds does not cover the full range of high school mathematics, although there are some items that represent content typically covered in Algebra I and Geometry courses. As a consequence, much of what students taking more advanced mathematics courses would have encountered in their classes is not represented on the long-term trend assessment. This is one reason that the main NAEP mathematics assessment framework has recently been updated by the National Assessment Governing Board to include content that extends beyond Algebra II, including some pre-calculus content as well.


alfonzo from Upper Marlboro, MD asked:
Simpson's Paradox is very interesting. Can you say more about the fact that, although the average scores increased for all groups in your example, the overall mean dropped a couple of points, and about how this is represented in the reporting of actual student performance data today?

Dr. Peggy G. Carr's response:
Simpson's Paradox is a phenomenon that can often be observed in statistical data sets. One example is the finding for 17-year-olds that average mathematics scores for White, Black, and Hispanic students have all increased since the 1970s while, at the same time, the average scores for all students have remained relatively flat. This is in part the result of changes in the demographic makeup of the 17-year-old population during the last four decades. Despite the performance gains made within each of these three student groups, over time an increasing proportion of the population is represented by student groups who tend to score lower, on average. To provide some specifics: in 1975 the make-up of the 17-year-old population was 84% White, 11% Black, and 3% Hispanic. By 2008, the make-up of the 17-year-old population was 59% White, 15% Black, and 18% Hispanic. Although Black and Hispanic students have made significant gains like their White peers during that time period, their average group scores remain lower than those of White students. Consequently, their increasing representation within the overall population has the effect of masking overall gains, even as each individual group within the population is improving. In NAEP reporting, the scale score results are shown along with the student group percentages to provide further insight into the results.


Young from Brookeville, MD asked:
Dear Dr. Carr, I'm so delighted to see the latest updates of trends by age group. Compared to other age groups, the 2008 data at age 17 indicate that there were no significant differences in average scores from those of the early 1970s in either reading or mathematics. As I briefly looked over participation rates at age 17 in both subjects, the overall participation rates appeared to be significantly lower than those at ages 9 and 13. I'm curious about the extent to which nonparticipation bias may have affected overall findings at age 17 and about the latest methods of NCES to assess nonresponse bias in trend analysis. I also observe that analysis of trends by age comparing public, private, and Catholic schools appears to be more susceptible to nonparticipation bias. I find from the release table footnotes that participation rates fell below the required standard for reporting for 17-year-olds attending private schools in all assessment years and for 17-year-olds attending Catholic schools in 1996 and 2004. Participation standards were reportedly not met for other age groups in certain years. Will you please address this concern about nonresponse bias for trend analysis? I'm so grateful to all NAEPers who put together another great release!

Dr. Peggy G. Carr's response:
Young, you're correct in noting that the participation rate of 17-year-old students in 2008 is lower than the participation rates for 9- and 13-year-old students. We rely upon the participation of both schools and students for the success of the NAEP program, since each student sampled to participate in NAEP represents other students in the population with similar demographic characteristics. While the ideal situation would be for all sampled students to participate in the assessment, nonresponse in NAEP is unavoidable and anticipated. One way in which nonresponse is addressed is in the sampling weights assigned to schools and students. A weighting adjustment is used to account for nonresponse at both the school and student level. These weighting adjustments help to ensure that the NAEP sample represents the target population. More specific information about the weighting procedures in NAEP, including nonresponse adjustments, is available in the technical documentation at http://nces.ed.gov/nationsreportcard/tdw/. To address your observation about the nonparticipation of private schools and Catholic schools, NAEP follows participation rate standards established by NCES. While we would like to be able to report results for private and Catholic schools for all age samples in all years, following these participation standards helps to ensure that the results that NAEP reports (both for a single assessment year and for trend results) are based on unbiased samples. Furthermore, NCES statistical standards require that a nonresponse bias analysis be conducted when participation rates are below 85 percent. The NAEP technical documentation also contains detailed information about nonresponse bias analyses.
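To make the idea of a nonresponse weighting adjustment concrete, the following is a simplified, hypothetical sketch of one common approach: inflating respondents' weights within weighting cells so that they also carry the weight of nonrespondents in the same cell. It is an illustration only, with invented numbers; NAEP's actual weighting procedures are more elaborate and are described in the technical documentation linked above.

```python
# A simplified, hypothetical sketch of a weighting-class nonresponse adjustment.
# All numbers are invented for illustration; this is not NAEP's actual procedure.
from collections import defaultdict

# Each sampled student has a base sampling weight, a weighting-class label
# (e.g., a cell defined by region, school type, and demographics), and a
# response indicator.
students = [
    {"base_weight": 120.0, "cell": "A", "responded": True},
    {"base_weight": 120.0, "cell": "A", "responded": False},
    {"base_weight": 150.0, "cell": "A", "responded": True},
    {"base_weight": 200.0, "cell": "B", "responded": True},
    {"base_weight": 200.0, "cell": "B", "responded": True},
    {"base_weight": 180.0, "cell": "B", "responded": False},
]

# Sum the base weights of all sampled students and of respondents, cell by cell.
sum_all = defaultdict(float)
sum_resp = defaultdict(float)
for s in students:
    sum_all[s["cell"]] += s["base_weight"]
    if s["responded"]:
        sum_resp[s["cell"]] += s["base_weight"]

# Inflate each respondent's weight by the cell's ratio of total weight to
# respondent weight, so respondents represent nonrespondents in their cell.
for s in students:
    if s["responded"]:
        s["adjusted_weight"] = s["base_weight"] * sum_all[s["cell"]] / sum_resp[s["cell"]]

# The adjusted weights of respondents now sum to the same total as the base
# weights of the full sample (970.0 in this toy example).
print(sum(s.get("adjusted_weight", 0.0) for s in students))
```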


Sarah from Alexandria, VA asked:
Do you disaggregate scores for students with disabilities and limited English proficiency? Where can I find this information?

Dr. Peggy G. Carr's response:
Sarah, yes, NAEP does disaggregate scores for students with disabilities and limited English proficiency, but only since 2004, when they were first collected. Although these scores are not included in the initial report, they are available on the web using our NAEP Data Explorer at http://nces.ed.gov/nationsreportcard/naepdata. One example of results that you will find on the NAEP Data Explorer is that the 2008 average score of 9-year-old students with disabilities in reading is not significantly different from 2004, but 9-year-old limited English proficient students showed a significant gain in reading since 2004. The NAEP Data Explorer will show results for both reading and mathematics for all three ages.


Geoff from Washington, DC asked:
How is it that 17-year-olds are at the same place they were 35 years ago, but 9- and 13-year-olds have been making slow but steady improvements? Shouldn't those gains eventually be seen for 17-year-olds?

Dr. Peggy G. Carr's response:
This question has puzzled educational policy experts for many years, and we are no closer to an answer today than we ever were. If it were simply a question of the younger students retaining their advances as they mature, we might see their gains show up four years later in the next older cohort, but we haven't seen such consistent patterns in the various NAEP cohorts. Other things may be going on, such as in-migration of older students from other countries, older students dropping out of high school, changes in motivation with age, and curricular and programmatic differences at different levels of the educational system. NAEP is designed to track changes accurately, but lacks the ability to attribute changes in scores to various educational policies and other factors.


Lynne from Los Angeles, CA asked:
Your earlier response regarding the relationship of teachers' college majors to the math performance of students is interesting and helpful. I'm afraid the college major may be too simple a variable to reveal the full picture. Based on state policy there are other indicators that may match or outweigh the major (e.g., college minor, number of math units taken, passage of math teachers' exam such as the CSET in CA). Do you have findings relative to the other indicators or are the data such that they could be examined for these relationships? Is there another NCES database we could look to for these?

Dr. Peggy G. Carr's response:
We are now winding down the chat. Lynne, you are very perceptive. NAEP cannot be used to identify causal relationships, and educational performance is related to complex interactions of many factors. NAEP has information on many of the teacher qualification and background factors you cite. You might want to look at the NAEP Data Explorer at our web site (http://nces.ed.gov/nationsreportcard/naepdata). This menu-driven tool allows you to compile your own data tables using the many variables NAEP collects. You can examine data from both the long-term trend NAEP and main NAEP (which has more teacher data).


Mary from Lewisburg, WV asked:
Has anyone done an analysis of the math items to see if any content is not included in today's curricula (such as set notation)?

Dr. Peggy G. Carr's response:
Mary, the long-term trend mathematics assessment was designed in the mid-1960s to assess knowledge of basic mathematics facts, the ability to carry out computations using paper and pencil, knowledge of basic formulas such as those that apply in geometric settings, and the ability to apply mathematics to daily-living skills such as those involving time and money. These skills have remained the same since the early years of this assessment. The more modern main NAEP assessment reflects more current mathematics skills and curricula; it includes questions assessing basic skills and recall, but goes further to include problem solving and reasoning in all topic areas.


Bob from Brookeville asked:
Nice job on the report! If I'm looking at the figures correctly, I noticed that the White-Black gap for 9-year-olds in 2004 was 24 points; for 13-year-olds in 2008 (that is, the same cohort four years later) the gap was 28 points. Does this mean things are getting worse? Thanks.

Dr. Peggy G. Carr's response:
Bob, that's an interesting question. NAEP does measure the same cohort of students four years apart, though not the very same students. That the 13-year-olds exhibit a wider gap than they did when they were age 9 does not conclusively indicate the gap has widened, however. For example, other changes may have taken place within the cohort over the four-year period, such as immigration, changes in education funding and teacher qualification, and other factors.


Peter from Westminster, Maryland asked:
We have data about the percentage of Hispanic students in our schools. I have heard that Hispanic students born in the U.S. have achievement rates comparable to the general population. Is this true? Do we have achievement data on Hispanic students born in the U.S. vs. Hispanic students born outside of the U.S.?

Dr. Peggy G. Carr's response:
NAEP collects and reports data on students' race/ethnicity. However, NAEP does not collect data about students' place of birth, so it cannot compare the performance of U.S.-born and non-U.S.-born student subgroups. Performance of the various Hispanic subgroups is available on the NAEP Data Explorer, our online data analysis tool.


Rachel from New York, NY asked:
Now that you have this data, are there plans to implement specific, systemic changes? For example, what might be done to tackle the discrepancy on reading scores for white kids vs. black and Hispanic kids?

Dr. Peggy G. Carr's response:
Rachel, the NAEP long-term trend results provide information about the performance of Black and Hispanic students in comparison to White students over time, but NAEP is not designed to provide information about why these results occur. NAEP data can be used by policymakers and educators to inform discussion of achievement gaps.


Dr. Peggy G. Carr:
Thanks for all the excellent questions. Unfortunately, I could not get to all of them, but please feel free to contact NAEP staff or me for more information. I hope that you found this chat helpful and the report interesting. If you aren't signed up to receive our NewsFlash notifications, I encourage you to do so at http://ies.ed.gov/newsflash/, so that you are alerted when new NAEP results are available.
