
TRANSCRIPT: The Nation's Report Card: Results from the 2009 NAEP Trial Urban District Assessment (TUDA) in Mathematics

Dr. Peggy G. Carr

Hello, this is Dr. Peggy Carr, Associate Commissioner of the National Center for Education Statistics (NCES). Welcome to Ask NAEP, an online Q&A session about the findings for the NAEP 2009 Mathematics Trial Urban District Assessment (TUDA) report. I hope you've had time to look at the results on the Nation's Report Card website, and that I can answer any questions you may have about them. I have a team of NAEP specialists with me to make sure your questions are addressed thoroughly, so there may be a slight delay between the time you submit your question and the time the answer appears.

We are usually able to post answers for all questions, even after the Ask NAEP session ends at 3:00. If you cannot stay online for the full time, a complete Ask NAEP transcript will be available online after our discussion. I'm interested to hear which results you want to talk about.

Before we begin answering your questions, I would like to discuss the way in which the results of the TUDA assessment supplement and enrich the results of the national and state assessment in mathematics, released in October.

At the national and state levels, we saw no change in scores at grade 4 since 2007. This was the first time scores had leveled off between two assessment years since the early 1990s. However, a closer look at the urban districts that participated in the mathematics assessment reveals that fourth-graders in large cities nationally recorded higher scores than in 2007. Perhaps most encouraging, these gains in large cities were driven primarily by score increases at the lower end of the student score distribution, at the 10th and 25th percentiles. In other words, students scoring at the 10th and 25th percentiles in 2009 scored higher than students at those same percentiles in 2007.
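The percentile comparison described above can be illustrated with a small computation. The score lists below are hypothetical, not NAEP data, and the nearest-rank percentile function is a simplification of NAEP's weighted methodology; the point is only to show gains concentrated at the bottom of a distribution.

```python
# Toy example of what "students at the 10th and 25th percentiles
# scored higher" means. Scores below are hypothetical, not NAEP data.
def percentile(scores, p):
    """Nearest-rank percentile: the score at the pth position of the ordered list."""
    ordered = sorted(scores)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

scores_2007 = [195, 201, 208, 215, 221, 228, 233, 239, 246, 254]
scores_2009 = [203, 209, 214, 218, 223, 229, 234, 239, 246, 254]  # gains at the bottom

for p in (10, 25, 50):
    gain = percentile(scores_2009, p) - percentile(scores_2007, p)
    print(f"P{p}: {gain:+} points")
```

Here the 10th and 25th percentiles rise by 8 points while the median rises by only 2, the same pattern of bottom-heavy gains reported for large cities.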

It is important to remember when looking at national results and trends that aggregate scores often mask underlying patterns, so it is crucial to examine specific district results and the performance of different groups within the student population in order to gain a real picture of education in that content area.

For example, in 2009, all the racial/ethnic groups assessed at grade 4 in Houston scored higher than their peers in public schools nationwide. However, the overall average score for Houston's fourth-graders is lower than the average score for fourth-graders nationwide. How could this be? The answer is found by looking more closely at the specific groups within Houston's student population.

Although all racial/ethnic groups in Houston achieved scale scores higher than the national averages for their peers, the demographic makeup of Houston's student body is quite different from that of fourth-graders nationally. In Houston, Black and Hispanic students make up a larger portion of the grade 4 student population than they do nationally (25 and 64 percent, respectively, compared with 16 and 22 percent nationally). While Black and Hispanic students have made significant gains over the years, their average group scores still remain lower than those of White and Asian/Pacific Islander students. Consequently, their increased representation within the Houston student population lowers the average score for the district as a whole, even as each individual group within the population improves. This phenomenon is referred to as Simpson's Paradox.
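The arithmetic behind Simpson's Paradox is a weighted mean. In the sketch below, the group shares for Houston (25 percent Black, 64 percent Hispanic) come from the text above; all of the average scores are made up for illustration only and are not NAEP results.

```python
# Illustrative sketch of Simpson's Paradox with hypothetical scores.
# Group shares for Houston (Black 25%, Hispanic 64%) are from the text;
# every score below is invented for illustration, not actual NAEP data.

def weighted_mean(shares, scores):
    """Overall average = sum over groups of (population share x group average)."""
    return sum(s * m for s, m in zip(shares, scores))

groups = ["Black", "Hispanic", "White/Other"]

# Hypothetical group averages: every Houston group outscores its
# national counterpart...
houston_scores  = [228, 232, 252]
national_scores = [225, 230, 250]

# ...but Houston's population weights differ sharply from the nation's.
houston_shares  = [0.25, 0.64, 0.11]
national_shares = [0.16, 0.22, 0.62]

houston_avg  = weighted_mean(houston_shares, houston_scores)
national_avg = weighted_mean(national_shares, national_scores)

# Each Houston group scores higher, yet the district mean comes out lower,
# because the lower-scoring groups carry more weight in Houston.
print(f"Houston: {houston_avg:.1f}, Nation: {national_avg:.1f}")
```

With these invented numbers, every Houston group beats its national peers, yet Houston's overall mean (233.2) falls below the national mean (241.6), purely because of the different population weights.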

NAEP uses the TUDA report to ensure that the specific educational patterns of urban districts, which often diverge significantly from patterns in other regions across the country, are not overlooked, so that readers know what students are truly able to do in Houston, Baltimore, Los Angeles, Boston, and Miami-Dade, to name just a few of the 18 urban districts that participated in the 2009 assessment. Results are available for each of these districts, so let's get to your questions!

Maisie McAdoo from New York, New York asked:
How is it that the urban district average is higher in 2009 than 2007 and yet the 18 participating cities saw no significant improvement since 2007?

Dr. Peggy G. Carr's response:
Maisie, thank you for raising the question. First, while it is true that 18 large cities participated in NAEP in 2009, only 11 of them are repeat participants from 2007; therefore, trend reporting is limited to those 11 districts. Also, "large city" includes students representing all cities with populations of 250,000 or more, not just the cities participating in the Trial Urban District Assessment. Looking at those 11 districts as summarized on page 1 of the report, 2 districts showed gains between 2007 and 2009 at grade 4 (Boston and the District of Columbia) and 2 districts showed gains at grade 8 (Austin and San Diego). Also note that many other districts showed numerical gains of 1 to 3 points. At the district level these gains were not statistically significant; however, they contributed to the significant gain in the aggregate large-city results.

Maisie McAdoo from New York, New York asked:
Overall, can you comment on the achievement gap trends for the 11 longer-participating cities in 2009 compared with 2007 and 2003? Is it true that the black-white and Hispanic-white gaps have not narrowed since 2007 overall? What about since 2003?

Dr. Peggy G. Carr's response:
Maisie, overall, when we look at results for students in public schools nationwide, the black-white and Hispanic-white mathematics score gaps have not changed compared to 2003 or 2007 at grade 4. At grade 8, the black-white and Hispanic-white gaps have narrowed for public school students nationwide compared to 2003 but have not changed since 2007. For students in large cities, there has been no change in the achievement gaps at either grade compared to 2003 and 2007. Looking at the districts that participated in 2003, 2007, and 2009, nearly all have seen no change in the black-white or Hispanic-white score gaps at either grade compared to 2003 and 2007. However, there are some exceptions. For instance, at grade 4, the District of Columbia (DCPS) saw a narrowing of the Hispanic-white score gap compared to 2003. At grade 8, Charlotte saw a narrowing of both the black-white and Hispanic-white gaps compared to 2007 and a narrowing of the black-white score gap compared to 2003. The district profiles in the 2009 TUDA report provide insight into the score trends within each district.

Mike Bowler from Baltimore Maryland asked:
Isn't it time you dropped the Trial?

Dr. Peggy G. Carr's response:
"Trial Urban District Assessment" is the term used for NAEP's focus on large school districts. The term "trial" is used because the NAEP program considers that some of its procedures may still need further development. For example, this year we changed our sampling and reporting procedures to reflect the fact that charter schools are accountable to the urban school district in only some districts; in others, charter schools are completely independent of the urban district and should not be included in NAEP's reporting for it. Eventually, I expect that the term "trial" will be dropped, once the NAEP program's procedures are fully worked out.

Sally from Washington, DC asked:
What trends did you see in racial achievement gaps?

Dr. Peggy G. Carr's response:
Good question, Sally. We get asked this question a lot. We have seen few changes in the gaps compared to previous years. Please see my response to Maisie for more details.

Maureen from San Diego, California asked:
Overall, what are the most significant - or surprising - portions of the results? Regarding scores from San Diego Unified School District, what is most significant? What do the results say about math education in San Diego Unified?

Dr. Peggy G. Carr's response:
In 2009, perhaps one of the most notable results is that large cities showed an increase since 2007 at both grades 4 and 8. This is in contrast to the national public results, where scores were flat since 2007 at grade 4. Regarding San Diego's results, it is worth noting that San Diego was one of only 2 districts to show a gain since 2007 at grade 8. Additionally, both 4th- and 8th-graders in San Diego scored higher than students in large cities nationally. At grade 8, San Diego had one of the largest gains of any district, 8 points since 2007. In terms of what these results say about math education in San Diego Unified, we encourage you to visit the NAEP Data Explorer and NAEP Questions Tool, where you can explore a wide variety of contextual variables and released items.

Rhonda from New York City, NY asked:
My question refers to the following statement, which appeared in today's NY Times: "New York City has been criticized for granting a large percentage of special education students and those learning English special accommodations for the federal tests, like extra time. The city granted more accommodations than any of the other 17 urban districts." Why isn't the granting of accommodations more controlled so that all districts operate according to the same set of rules?

Dr. Peggy G. Carr's response:
When accommodations are offered for a NAEP assessment, a student's individual education plan (IEP) is the guiding document used to determine which accommodations are made available to the student. Accommodations vary across states and districts because states vary in their accommodation policies, and the NAEP assessment must follow what a student's IEP specifies.

Mike from Baltimore MD asked:
How are the sampled students identified and chosen? When and where do they actually take the test? Do they go to some central location? Finally, when do 2009 reading and science results come out?

Dr. Peggy G. Carr's response:
NAEP assesses a representative cross-section of students in each district. Schools are selected through stratified random sampling within categories of schools with similar characteristics (e.g., demographic makeup of the school). Once the schools are identified, students are selected randomly within each school. NAEP is administered in each school by contractors for the U.S. Department of Education, which allows standardized administration procedures to be followed in every school across the country. The results from the 2009 reading assessment are expected in the spring of 2010, and science results are expected in the summer of 2010. The results for the nation and the states will precede the TUDA results.
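The two-stage selection described above can be sketched in a few lines. This is a minimal illustration only: the school names, strata, and sample sizes are all hypothetical, and actual NAEP sampling uses probability weights and many more stratification variables.

```python
# Minimal sketch of two-stage stratified sampling: schools are drawn at
# random within strata of similar schools, then students are drawn at
# random within each selected school. All names and sizes are hypothetical.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Stage 1: schools grouped into strata of similar characteristics.
strata = {
    "high_poverty": ["School A", "School B", "School C", "School D"],
    "mixed":        ["School E", "School F", "School G"],
    "low_poverty":  ["School H", "School I"],
}

schools_per_stratum = 2
sampled_schools = []
for stratum, schools in strata.items():
    # Draw schools at random within each stratum.
    sampled_schools += random.sample(schools, min(schools_per_stratum, len(schools)))

# Stage 2: students selected at random within each sampled school.
roster = {s: [f"{s} student {i}" for i in range(1, 31)] for s in sampled_schools}
students_per_school = 5
sample = {s: random.sample(roster[s], students_per_school) for s in sampled_schools}

for school, students in sample.items():
    print(school, "->", len(students), "students selected")
```

Stratifying first ensures each category of school is represented before the random draw, which is what makes the resulting cross-section representative rather than haphazard.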

Catherine from Palo Alto, CA asked:
The new motion charts are so helpful in exploring the district data. Will you use them to report state data as well?

Dr. Peggy G. Carr's response:
I'm glad you like this recent innovation on our website. We continue to experiment with new graphical ways of representing our data. If our web users try the motion charts and find them helpful, it's very likely that we will continue to use them in the future.

Sally Holland from Washington, DC asked:
These numbers were pulled out from the mathematics report that you released in October to compare urban school districts specifically. Why is it important to compare the urban schools in particular? What challenges do they face that are different from other districts?

Dr. Peggy G. Carr's response:
Sally, these are two good questions. Today, Michael Casserly devoted the first few minutes of his remarks to why the urban district assessment was initiated in 2002. His main point was that urban school systems are committed to the highest academic standards for ALL children, and that using NAEP as a common yardstick is the only available way to measure that commitment. I highly recommend watching the archived webcast of the press conference to see his remarks.

Maria from New Jersey asked:
Can you comment on the achievement gap trends for ELLs?

Dr. Peggy G. Carr's response:
Maria, in 2009, ELL students tended to score lower than their non-ELL peers. For public school students nationwide, the mathematics score gap between ELL and non-ELL students in 2009 was 24 points at grade 4 and 41 points at grade 8. The average mathematics scores in 2009 for ELL and non-ELL students within each participating district are presented in the 2009 mathematics TUDA report. The 2009 score gap varies across districts at each grade, ranging from 8 to 30 points at grade 4 and from 15 to 45 points at grade 8. Compared to 2007, the score gap between ELL and non-ELL students at the overall public school level did not change at grade 4, but increased at grade 8. More detailed exploration of the national and district-level data for ELL and non-ELL students is available using the NAEP Data Explorer. When interpreting results for ELL students across districts and across time, it is important to note that the percentages of students identified as ELL vary across assessment years.

Dr. Peggy G. Carr:
Thanks for all the excellent questions. Unfortunately, I could not get to all of them, but please feel free to contact NAEP staff or me for more information. I hope that you found this chat helpful and the report interesting.
