In 2005, the U.S. Government Accountability Office (GAO) released the report No Child Left Behind Act: Most Students With Disabilities Participated in Statewide Assessments, but Inclusion Options Could Be Improved. The report focused primarily on No Child Left Behind and state tests. However, there was one chapter about the National Assessment of Educational Progress (NAEP). In the report, the GAO recommended that NAEP "work with the states, particularly those with high exclusion rates, to explore strategies to reduce the number of students with disabilities who are excluded from the NAEP assessment." The National Center for Education Statistics (NCES) responded with the following actions:
NCES also conducted research to develop a methodology for measuring state inclusion rates while taking into account the differing demographics and policies in each state. The report Measuring Status and Change in NAEP Inclusion Rates of Students With Disabilities was the result of that research. It provided a methodology and two measures of change in each state's inclusion rate, taking into consideration the following factors that differ across states and across time:
That study reported results for 50 states and the District of Columbia and used data from the 2005 and 2007 NAEP fourth- and eighth-grade reading and mathematics assessments. The full research and development report, Measuring Status and Change in NAEP Inclusion Rates of Students With Disabilities, is available for download.
The methodology developed in the report Measuring Status and Change in NAEP Inclusion Rates of Students With Disabilities was next applied to measuring change in districts participating in the Trial Urban District Assessment (TUDA). As in that report, the discussions presented here are exploratory and do not reach definitive conclusions at this time, because the methodology is new and still developing.
The TUDA program is designed to explore the feasibility of using NAEP to report on the performance of public school students at the district level. As authorized by federal law, NAEP has administered the mathematics, reading, science, and writing assessments to samples of students in selected urban-district public schools.
Assessment results from the NAEP TUDA make it possible to compare the performance of students in participating urban school districts to that of public school students in the nation, in large central cities (population over 250,000), and to each other. In 2007, about 38,000 fourth- and eighth-graders from 11 urban districts participated in the mathematics assessment. An approximately equal number participated in the reading assessment. Participating districts were: Atlanta, Austin, Boston, Charlotte, Chicago, Cleveland, District of Columbia, Houston, Los Angeles, New York City, and San Diego. The District of Columbia participated in NAEP as both a state and as part of the TUDA program.
Inclusion rates are expected to vary by district depending on differences in a district's population of students with disabilities. Whether a student can participate in NAEP depends on the student's characteristics.
Student characteristics that are expected to have an impact on a district's inclusion rate include the following:
Students with less severe disabilities, such as a speech or hearing impairment, are more likely to be included in NAEP testing. Students with more severe disabilities, such as mental retardation, are less likely to be included in NAEP.
A district with a 90 percent inclusion rate is not necessarily more inclusive than a district with an 80 percent inclusion rate. If a district has a higher percentage of severely disabled students, for example, it would be expected to have a lower inclusion rate. Hence, to properly compare the status of inclusion rates across districts or to properly measure a district's change in inclusion rates across time, differences and changes in districts' populations of students with disabilities must be controlled. For example, if a district experienced a drop in the proportion of students classified as having mental retardation, who are less likely to be included, the district's inclusion rate would be expected to increase.
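A small numerical sketch illustrates why raw inclusion rates are not comparable across districts. All categories, population shares, and inclusion probabilities below are invented for illustration; they are not NAEP figures:

```python
# Hypothetical sketch: a district's expected inclusion rate is a weighted
# average of type-specific inclusion probabilities, so it depends on the
# district's population mix, not only on how inclusive the district is.

# Assumed probability of inclusion by disability type (invented numbers)
inclusion_prob = {"speech": 0.95, "learning": 0.85, "intellectual": 0.40}

# Two hypothetical districts with different population mixes
district_a = {"speech": 0.50, "learning": 0.40, "intellectual": 0.10}  # few severe
district_b = {"speech": 0.20, "learning": 0.40, "intellectual": 0.40}  # many severe

def expected_rate(mix):
    """Weighted average of type-specific inclusion probabilities."""
    return sum(share * inclusion_prob[t] for t, share in mix.items())

# District B's expected rate is lower purely because of its mix, even if
# both districts are equally inclusive for each type of student.
print(round(expected_rate(district_a), 3))
print(round(expected_rate(district_b), 3))
```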
The characteristics of a student with disabilities affect the likelihood that the student will be included in NAEP. The percentage of students with each of these characteristics varies across districts. In the tables below, the percentage of students with each characteristic is presented for participating urban districts for a sample subject, grade, and year: mathematics, grade 4, 2007. All percentages discussed are based on students identified as having a disability who are not English language learners.
The first table gives the number of observations used for analysis in each district: the unweighted number of students observed in each district who have a disability but are not English language learners. The percentages that follow in this and the next two tables are the weighted percentages of those students with disabilities who have each characteristic. This first table also shows the percentage of students with each disability type. These types are not mutually exclusive. Additionally, while the NAEP questionnaire allows respondents to identify 11 different disability types as well as an "other" category, many of the types were uncommon. For our statistical analysis we collapsed all but the four largest disability types into the "other" category.
To measure change in district-level inclusion rates, researchers developed two approaches for controlling for differences in populations of students with disabilities. The nation-based approach uses national averages to set expectations for including different types of students. The jurisdiction-specific approach uses averages in each jurisdiction to set expectations for including different types of students for that jurisdiction. Each approach has its advantages and disadvantages. We report results for both without claiming that either is better (see the state-level report, Measuring Status and Change in NAEP Inclusion Rates of Students With Disabilities, for a full explanation of the methodology).
The nation-based approach uses average inclusiveness by type of student among all jurisdictions in the nation as a benchmark for comparing status and measuring change. The national inclusion rate was examined for each of the student characteristics in 2005 to determine the proportion of each that was included in NAEP. These rates were used to predict rates of inclusion for each student in the district. For example, if nationally 90 percent of students identified as having a speech impairment were included in NAEP, it was expected that in each district in each year, 2005 and 2007, 90 percent of students identified as having a speech impairment would be included. Change is measured relative to each year?s prediction. The benefit of the nation-based approach is that because of the large number of observations we are able to cross the disability types with the severity levels and with the grade level of instruction in our statistical model that estimates the benchmark averages. This allows us to estimate an independent benchmark for each combination of those characteristics.
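The nation-based benchmarking step can be sketched in a few lines of code. The micro-data below are invented for illustration, and students carry a single type label; the actual model crosses disability type with severity level and grade level of instruction, and estimates the benchmarks statistically:

```python
from collections import defaultdict

# Invented micro-data: one row per sampled student with a disability:
# (district, year, student type, included in NAEP?, sampling weight).
records = [
    ("A", 2005, "speech",   1, 1.0), ("A", 2005, "speech",   1, 1.0),
    ("A", 2005, "learning", 1, 1.0), ("A", 2005, "learning", 0, 1.0),
    ("B", 2005, "speech",   1, 1.0), ("B", 2005, "learning", 0, 1.0),
    ("A", 2007, "speech",   1, 1.0), ("A", 2007, "learning", 1, 1.0),
]

def national_benchmarks(data, base_year=2005):
    """Weighted national inclusion rate for each student type in the base year."""
    incl, tot = defaultdict(float), defaultdict(float)
    for _, year, stype, included, w in data:
        if year == base_year:
            incl[stype] += w * included
            tot[stype] += w
    return {t: incl[t] / tot[t] for t in tot}

def district_rate(data, district, year, benchmarks=None):
    """Actual weighted inclusion rate, or the predicted rate when
    national benchmarks are applied to the district's population mix."""
    num = den = 0.0
    for d, y, stype, included, w in data:
        if d == district and y == year:
            num += w * (benchmarks[stype] if benchmarks else included)
            den += w
    return num / den

bench = national_benchmarks(records)
for year in (2005, 2007):
    actual = district_rate(records, "A", year)
    predicted = district_rate(records, "A", year, bench)
    # Change is measured relative to each year's prediction
    print(year, round(actual - predicted, 3))
```

Because the benchmarks are national averages, the same expectations apply to every district in both years, and a district's difference from prediction reflects its own inclusiveness rather than its population mix.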
For state-level analysis, we developed an approach titled the state-specific approach. This approach can be applied equally to any identified jurisdiction. Here, within the context of applying the methods of the state-level analysis to districts participating in the TUDA, we use a more generic name for this approach: the jurisdiction-specific approach. In the jurisdiction-specific approach, a district's average 2005 inclusion rates for different student types were used as benchmarks for measuring change. These averages were used to predict the district's inclusion rate in 2007. For example, if in District A 90 percent of students identified as having a speech impairment were included in NAEP in 2005, it was expected that in District A in 2007, 90 percent of students identified as having a speech impairment would be included. In District B, however, the benchmark set in 2005 might be 85 percent, and this would be the expectation for District B for 2007. The predicted inclusion rate for 2007 was then compared to the actual rate in 2007. If the actual inclusion rate was greater than the predicted rate, the district was considered to have made progress. In the jurisdiction-specific approach, we are able to cross the disability types with the severity levels in our statistical model that estimates the benchmark averages. While there is less independence among the benchmarks for different types of students, the benefit is that a separate set of benchmarks is estimated for each district.
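The jurisdiction-specific step differs only in where the benchmarks come from: each district's own 2005 rates set its 2007 expectations. A minimal sketch, again with invented micro-data and a single type label per student (the actual model crosses disability type with severity level):

```python
from collections import defaultdict

# Invented micro-data: one row per sampled student with a disability:
# (district, year, student type, included in NAEP?, sampling weight).
records = [
    ("A", 2005, "speech",   1, 1.0), ("A", 2005, "speech",   1, 1.0),
    ("A", 2005, "learning", 1, 1.0), ("A", 2005, "learning", 0, 1.0),
    ("A", 2007, "speech",   1, 1.0), ("A", 2007, "learning", 1, 1.0),
]

def own_benchmarks(data, district, base_year=2005):
    """The district's own weighted base-year inclusion rate per student type."""
    incl, tot = defaultdict(float), defaultdict(float)
    for d, year, stype, included, w in data:
        if d == district and year == base_year:
            incl[stype] += w * included
            tot[stype] += w
    return {t: incl[t] / tot[t] for t in tot}

def rate(data, district, year, benchmarks=None):
    """Actual weighted rate, or the predicted rate when benchmarks are given."""
    num = den = 0.0
    for d, y, stype, included, w in data:
        if d == district and y == year:
            num += w * (benchmarks[stype] if benchmarks else included)
            den += w
    return num / den

bench = own_benchmarks(records, "A")         # e.g. speech 1.0, learning 0.5
predicted = rate(records, "A", 2007, bench)  # the district's own expectation
actual = rate(records, "A", 2007)
print(actual > predicted)  # an actual rate above the prediction counts as progress
```

Note that applying a district's own 2005 benchmarks to its own 2005 data reproduces the 2005 actual rate exactly, which is why the jurisdiction-specific tables later omit the 2005 predicted values.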
Each district?s change measure needs to be understood in terms of how inclusive the district was in the initial year, 2005. Districts with high inclusion rates in the initial year have relatively less room for improvement, while districts with lower inclusion rates at the start have more room for improvement. A comparison of the status of inclusion rates in 2005, the starting point, is essential for understanding the change measure.
To compare inclusion rates at the starting point, we need to control for differences across districts in the populations of students with disabilities. To do so, we use the averages from the nation-based approach and compare inclusiveness across districts relative to each district?s expected inclusion rate: a district with an actual inclusion rate in 2005 greater than predicted is considered more inclusive while a district with an actual inclusion rate less than predicted is considered less inclusive.
The following tables contain results for the nation-based approach. Standard errors were calculated using NAEP jackknife weights as described in the state-level report. Tests for statistical significance were conducted at the .05 level using the Student-t distribution to ensure that the differences are larger than those that might be expected due to chance or sampling error.
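The significance-testing step can be sketched generically. The replicate estimates below are invented, and the variance formula (the sum of squared deviations of replicate estimates from the full-sample estimate) is a common form of jackknife replication; the state-level report documents the exact NAEP procedure:

```python
import math

def jackknife_se(full_estimate, replicate_estimates):
    """Jackknife standard error: square root of the sum of squared
    deviations of replicate estimates from the full-sample estimate.
    In practice each replicate estimate comes from re-computing the
    statistic with one set of replicate weights."""
    var = sum((r - full_estimate) ** 2 for r in replicate_estimates)
    return math.sqrt(var)

# Invented example: a difference-from-predicted of 2.1 percentage points,
# with replicate estimates scattered around the full-sample value.
full = 2.1
reps = [2.0, 2.3, 1.9, 2.2, 2.1, 2.4, 1.8, 2.0]
se = jackknife_se(full, reps)
t = full / se  # compared to a Student-t critical value at the .05 level
print(round(se, 3), round(t, 2))
```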
The following tables contain results for the jurisdiction-specific approach (note: in the jurisdiction-specific approach, the 2005 predicted is always equal to the 2005 actual by design; we therefore do not report the 2005 predicted value or the 2005 difference from predicted, which is always zero):
Read more about the relation of exclusion and accommodation rates to results.