
Methodology

Overview of the methodology developed in Kitmitto & Bandeira de Mello (2009)

The NCES Research and Development report Measuring the Status and Change of NAEP State Inclusion Rates for Students with Disabilities (NCES 2009-453) focused on 2005 inclusion rates for states and the change in these rates from 2005 to 2007 (Kitmitto and Bandeira de Mello, 2009). In that report, two distinct approaches were used to compare inclusion rates across states for students with disabilities (SD) and to gauge the progress of individual states in improving these rates over time. Both approaches used regression analysis to estimate the relationship between a student's characteristics and the probability that the student was included in the NAEP assessment. One approach, the nation-based approach, estimated a regression model using data pooled from all states. The other, the jurisdiction-specific approach, estimated a regression model separately for each state. The relationships were estimated using data for each student, which were then used to establish student-level predicted probabilities, otherwise known as student-level benchmarks, for the inclusion of each student with disabilities on the basis of his or her characteristics. Student-level benchmarks were aggregated to the state level to form state-level expected inclusion rates, otherwise known as state-level benchmarks. Finally, change in inclusiveness was measured across time in relation to these benchmarks.
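The aggregation step described above can be sketched as follows. This is an illustrative sketch, not NCES code: the function name, the probabilities, and the weights are all made up, and the real analysis uses NAEP sampling weights and model-based predicted probabilities.

```python
# Illustrative sketch (not NCES code): aggregate student-level predicted
# inclusion probabilities ("student-level benchmarks") into a state-level
# expected inclusion rate ("state-level benchmark").

def state_expected_rate(benchmarks, weights):
    """Weighted mean of student-level predicted inclusion probabilities."""
    total = sum(weights)
    return sum(p * w for p, w in zip(benchmarks, weights)) / total

# Hypothetical students with disabilities in one state: each has a predicted
# probability of inclusion and a sampling weight.
probs = [0.90, 0.75, 0.60, 0.85]
wts = [1.0, 2.0, 1.0, 1.0]

expected = state_expected_rate(probs, wts)  # state-level benchmark, here 0.77
```

The state's observed inclusion rate can then be compared against this expected rate, and change over time measured relative to it.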

Additionally, a measure of the relative inclusiveness of a state, the "status measure," was estimated for each state to enable a comparison of states in the first year of the time period over which change in inclusiveness was estimated. That status measure was used as a context for interpreting the estimated change measures. States that were more inclusive in the first year were expected to have less potential to increase inclusion; hence, there was less expectation for those states to do so. States that were less inclusive in the first year were expected to have more potential to increase inclusion; hence, there was greater expectation for such states to do so.

The report

  • Describes fully how the status measures (i.e., the starting point measures) and change measures were calculated;
  • Describes the rationale behind the two different methodologies developed (nation-based and jurisdiction-specific);
  • Provides details about how standard errors were calculated and how significance testing was conducted; and
  • Provides state-level results for status and change over the 2005–07 period.

Read the 2005–07 report, Kitmitto and Bandeira de Mello (2009).


Summary of alterations to the methodology for the 2007–09 report, Kitmitto (2011)

With the release of the 2009 NAEP reading and mathematics assessments, NCES again had the opportunity to measure the status of and change in inclusion rates and, hence, conducted this update to the Kitmitto and Bandeira de Mello (2009) report. This update, Kitmitto (2011), focuses on changes over 2007–09. Although the general methodology did not change from Kitmitto and Bandeira de Mello (2009), the specification of the statistical model changed slightly owing to a number of factors. These changes are summarized below.

  • First, because of changes in the background information that NAEP collects on students with disabilities, one of the control factors, grade-level of instruction, that had been used in the previous report was not available in the 2009 NAEP administration and therefore was not used in the 2007–09 model. However, other measures of student characteristics were included in the model.
  • Second, the statistical model was re-specified to better handle student observations with missing background information.
  • Third, instead of using 2007 data to set benchmarks for measuring change over 2007–09 as originally recommended in the previous report, 2005 data were used to provide a longer-term analysis. In the current report, change over 2005–07, 2007–09, and 2005–09 was analyzed using the updated model.


Changes in the SD Questionnaire and Items Used in Analysis

The first alteration was necessitated by changes in the background information that NAEP collects on students with disabilities. One of the control factors, grade level of instruction, that had been used in the previous report was not available in the 2009 NAEP administration and therefore was not used in the 2007–09 model. However, other measures of student characteristics were included in the model to compensate.

Information about students' characteristics came from the NAEP SD Questionnaire. The SD Questionnaire is intended to be completed by the special education teacher or staff member who is most familiar with the student. Because the methodology sets benchmarks for inclusion on the basis of a student's characteristics, it is necessary that the items used in the analysis are consistent across the time periods being analyzed.

In Kitmitto and Bandeira de Mello (2009), information from the item "What grade level of instruction is this student currently receiving?" was included in the model. This item was discontinued in the 2009 NAEP SD Questionnaire and therefore was not used in the model for the current study, Kitmitto (2011). It was replaced in the questionnaire with: "At what grade level does this student perform?" This replacement item was also not used in Kitmitto (2011) as it is not present in administrations prior to 2009. However, the replacement item may, in the future, be used for analyzing change from 2009 forward.

Two variables were added to the 2009 analysis, Kitmitto (2011). The first was an indicator for whether the student has multiple disabilities. The second was an indicator for whether the student has an Individualized Education Plan (IEP). These are both thought to be indicators of a more severely disabled student and, hence, of a student who is less likely to be deemed able to participate in NAEP.

Table 1 lists items taken from the NAEP SD Questionnaires over the 2005, 2007, and 2009 administrations and indicates which items were used in the previous study, Kitmitto and Bandeira de Mello (2009), and the current study, Kitmitto (2011).

Table 1. List of NAEP items used in the statistical models analyzing the status of and change in state inclusion rates of students with disabilities
Item | Kitmitto and Bandeira de Mello (2009) | Kitmitto (2011)
Student's identified disability(ies) | Yes | Yes
Degree (severity) of this student's disability(ies) | Yes | Yes
Received accommodation on state assessment that was not allowed on NAEP | Yes | Yes
Grade level of instruction | Yes | No
Indicator for multiple disabilities | No | Yes
Indicator for having an IEP | No | Yes


Low Incidence Disability Types, Missing Disability Type, and Interactions with Severity

Both Kitmitto and Bandeira de Mello (2009) and Kitmitto (2011) use a logit model with whether or not the student was included as the dependent variable and student characteristics as the independent predictor variables. Although some of the same items were used in the 2007–09 update as in Kitmitto and Bandeira de Mello (2009), the use of these items in the specification of the statistical model changed in two ways.
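The logit link used in both reports can be sketched as follows. This is an illustrative sketch only: the coefficient values and variable names are made up, and the actual models were estimated from NAEP data with many more indicators.

```python
import math

# Illustrative sketch of the logit model used in both reports: the probability
# that a student with disabilities is included in NAEP is modeled as a logistic
# function of indicator variables for the student's characteristics.
# Coefficients below are hypothetical, not estimates from either report.

def inclusion_probability(coefs, features):
    """P(included) = 1 / (1 + exp(-(b0 + sum of b_i * x_i)))."""
    z = coefs["intercept"] + sum(coefs[name] * x for name, x in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# A hypothetical student profile scored with hypothetical coefficients.
coefs = {"intercept": 1.0, "severe": -1.5, "accommodation_not_allowed": -0.8}
student = {"severe": 1, "accommodation_not_allowed": 0}

p = inclusion_probability(coefs, student)  # about 0.38
```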

First, the handling of low incidence disability types and missing disability type has changed:

  • Kitmitto and Bandeira de Mello (2009) 
    • The following four disability types each had their own indicator: “specific learning disability,” “speech impairment,” “mental retardation,” “emotional disturbance.”
    • One single indicator variable was constructed for students either missing data on disability type or having any of the following eight disability types ("hearing impairment/deafness," "visual impairment/blindness," "orthopedic impairment," "traumatic brain injury," "autism," "developmental delay," "other health impairment," "other").
  • Kitmitto (2011) 
    • The following six disability types each had their own indicator: “specific learning disability,” “speech impairment,” “mental retardation,” “emotional disturbance,” “autism,” “other health impairment.”
    • One single indicator variable was constructed for students having any of the following six disability types: “hearing impairment/deafness,” “visual impairment/blindness,” “orthopedic impairment,” “traumatic brain injury,” “developmental delay,” “other.”
    • An additional indicator variable was constructed for students missing disability type information.

Second, the interaction of variables in the estimation equations has changed:

  • Kitmitto and Bandeira de Mello (2009) 
    • In the nation-based approach, disability type indicators, severity indicators, and grade level of instruction indicators were all crossed with one another to produce a separate indicator variable for each student profile (disability type, severity, grade level of instruction).
  • Kitmitto (2011)
    • The nation-based model included both main effects and cross-indicator effects: separate indicator variables for each disability type and each level of severity, plus cross-indicators for each combination of disability type and level of severity. As described above, neither grade level of instruction nor its 2009 replacement, grade level of performance, was used in this report.
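The difference between the two specifications amounts to which indicator columns appear in the design matrix. The sketch below builds a single student's row under the Kitmitto (2011) style, with main effects plus interactions; the category lists are truncated examples, not the full NAEP categories.

```python
from itertools import product

# Illustrative sketch of the Kitmitto (2011) specification: main-effect
# indicators for disability type and for severity, plus a cross-indicator for
# each (type, severity) combination. Category labels are simplified examples.

TYPES = ["specific learning disability", "autism"]  # truncated for brevity
SEVERITIES = ["mild", "moderate", "severe"]

def design_row(dtype, severity):
    """One student's row of 0/1 indicators: main effects plus interactions."""
    row = {}
    for t in TYPES:
        row[f"type={t}"] = int(t == dtype)          # main effect: type
    for s in SEVERITIES:
        row[f"sev={s}"] = int(s == severity)        # main effect: severity
    for t, s in product(TYPES, SEVERITIES):
        row[f"type={t} x sev={s}"] = int(t == dtype and s == severity)
    return row
```

Under the earlier, interaction-only specification, only the crossed indicators would be present, so a profile never observed in the benchmark year would have no usable coefficient at all.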

In both reports, no imputation procedure was used for missing data. Instead, missing was coded as a possible response for disability type and severity of disability (i.e., a "missing disability type" and a "missing severity level" indicator were created). The rationale for the changes in the specification described here is that they increase the number of disability types that can be controlled for and better allow for the use of disability type and severity of disability data when either one is missing.

For the most part, when disability type is missing, so is severity of disability information, and a separate effect is estimated for a student with this profile. In some cases, however, one variable is missing when the other is not, but not often enough to always estimate cross-indicator effects. When main effects and interaction effects are both estimated, the main effect coefficients may be used when the number of observations for a particular student profile is low. For example, if in 2005, the year when student-level benchmarks were estimated, no student had a profile with "autism" and "missing severity of disability," no coefficient was estimated when only interaction effects were included. If in later years a student with such a profile is present, his or her student-level benchmark inclusion rate will be a default number (the average inclusion rate). When main and interaction effects are estimated, the student's benchmark inclusion rate in this example will be adjusted on the basis of the "autism" main effect estimated in 2005.
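The fallback behavior described above can be sketched as follows. This is an illustrative sketch, not the reports' code: the coefficient values are made up, and the key point is only that an interaction coefficient absent from the 2005 estimates contributes nothing, while the main effects still adjust the benchmark.

```python
import math

# Illustrative sketch: with main effects in the model, a student whose
# (type, severity) profile was never observed in the 2005 benchmark-setting
# year still gets a benchmark adjusted by the estimated main effects, rather
# than a default (average) rate. Coefficient values are hypothetical.

def linear_predictor(coefs, dtype, severity):
    """Sum intercept, main effects, and the interaction term if estimated."""
    z = coefs["intercept"]
    z += coefs.get(f"type={dtype}", 0.0)        # main effect for type
    z += coefs.get(f"sev={severity}", 0.0)      # main effect for severity
    z += coefs.get(f"type={dtype} x sev={severity}", 0.0)  # absent -> 0
    return z

def benchmark(coefs, dtype, severity):
    return 1.0 / (1.0 + math.exp(-linear_predictor(coefs, dtype, severity)))

# Hypothetical 2005 estimates with no "autism x missing" interaction term.
coefs = {"intercept": 1.0, "type=autism": -0.7, "sev=missing": -0.2}
p = benchmark(coefs, "autism", "missing")  # main effects only, about 0.52
```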


Longitudinal Analysis

In addition to changing the statistical model slightly, the analysis was refined to provide estimates of change over 2005–07, 2007–09, and 2005–09 that could be directly compared.

Kitmitto and Bandeira de Mello (2009) used 2005 as the benchmark-setting year to estimate change over 2005–07 and suggested that 2007 be the benchmark-setting year to estimate change over 2007–09. In the development of the current analysis in Kitmitto (2011), however, it was decided to again use 2005 data to set benchmarks because this would allow for a longer-term analysis where the magnitude of the 2005–07 change could be directly compared with the magnitude of the 2007–09 change and their sum would equal the 2005–09 change. If 2005 had been used as the benchmark-setting year for estimating the 2005–07 change and 2007 had been used as the benchmark-setting year for estimating the 2007–09 change, the magnitude of change would not be comparable. The 2005 benchmarks were re-estimated from the previous report to reflect the changes in methodology described above. 
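The arithmetic behind this design choice can be illustrated with made-up numbers: when every year is measured against the same 2005 benchmark, the period changes are additive.

```python
# Illustrative arithmetic (rates are made up): with 2005 as the common
# benchmark-setting year, each year's inclusiveness is measured against the
# same 2005 benchmark, so the 2005-07 and 2007-09 changes sum exactly to the
# 2005-09 change. With a shifting benchmark year, this additivity is lost.

benchmark_2005 = 0.80                         # state-level expected rate
actual = {2005: 0.78, 2007: 0.82, 2009: 0.85}  # hypothetical observed rates

# Each year's difference from the common benchmark.
diff = {year: rate - benchmark_2005 for year, rate in actual.items()}

change_05_07 = diff[2007] - diff[2005]
change_07_09 = diff[2009] - diff[2007]
change_05_09 = diff[2009] - diff[2005]
# change_05_07 + change_07_09 equals change_05_09
```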



Last updated 05 October 2011 (EP)
National Center for Education Statistics - http://nces.ed.gov
U.S. Department of Education