After computing the full-sample weights, an analysis is conducted on the distribution of the final student weights for each grade-subject combination in each jurisdiction. The analysis is intended to (1) check that the various weight components have been derived properly in each jurisdiction, and (2) examine the impact of variability in the sample weights on the precision of the sample estimates, both for the jurisdiction as a whole and for major subgroups within the jurisdiction.
For the 2000 state assessments, the analysis was conducted by looking at the distribution of the final student weights for the assessed students in each jurisdiction, grade, and subject. Two key aspects of the distribution were considered in each case: the coefficient of variation (computationally, the standard deviation divided by the mean) of the weight distribution; and the presence of outliers, that is, cases whose weights were several standard deviations away from the median weight.
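The two distributional checks described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function name, the use of the population standard deviation, and the three-standard-deviation threshold are assumptions for demonstration, not the operational NAEP screening rule.

```python
import statistics

def flag_weight_outliers(weights, n_sd=3.0):
    """Flag cases whose weights lie more than n_sd standard
    deviations away from the median weight (an illustrative
    version of the outlier screen described in the text)."""
    med = statistics.median(weights)
    sd = statistics.pstdev(weights)
    return [w for w in weights if abs(w - med) > n_sd * sd]

# A mostly uniform weight distribution with one extreme case:
weights = [1.0] * 20 + [10.0]
print(flag_weight_outliers(weights))  # -> [10.0]
```

In practice such flagged cases would be inspected individually, since an extreme weight may reflect either a genuine sampling outcome or an error in the weighting procedure.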
It was important to examine the coefficient of variation of the weights because a large coefficient of variation reduces the effective size of the sample. Assuming that the variables of interest for individual students are uncorrelated with the weights of the students, the sampling variance of an estimated average or aggregate is approximately 1 + V² times as great as the corresponding sampling variance based on a self-weighting sample of the same size, where V is the coefficient of variation of the weights. Outliers, or cases with extreme weights, were examined because the presence of such outliers could indicate an error in the weighting procedure, and because a few extreme cases were likely to contribute substantially to the size of the coefficient of variation.
In most jurisdictions, the coefficients of variation were 35 percent or less, both for the whole sample and for all subgroups. This means that the quantity 1 + V² (where V is the coefficient of variation of the weights) was generally below 1.15, and the variation in sampling weights had little impact on the precision of sample estimates.
A few relatively large student weights were observed in some jurisdictions, for both subjects at both grades 4 and 8. The impact of trimming these largest weights back to a level consistent with the largest remaining weights in the jurisdiction and grade was evaluated. This trimming produced an appreciable reduction in the coefficient of variation of the weights, and so it was implemented. The procedure was believed to have minimal potential to introduce bias, while the reduction in the coefficient of variation of the weights gave rise to an appreciable decrease in sampling error for all jurisdictions, grades, and subjects.
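A cap-style trim of this kind can be sketched as follows. The cutoff value, the function names, and the example weights are all illustrative assumptions; the actual cutoff used operationally was derived from the largest remaining weights in each jurisdiction and grade:

```python
import statistics

def trim_weights(weights, cutoff):
    """Trim any weight above the cutoff back to the cutoff value.
    The implied trimming factor for a case is min(w, cutoff) / w."""
    return [min(w, cutoff) for w in weights]

def cv(weights):
    """Coefficient of variation: standard deviation over mean."""
    return statistics.pstdev(weights) / statistics.fmean(weights)

# One extreme weight dominates the coefficient of variation;
# trimming it back sharply reduces the CV.
raw = [1.0] * 20 + [10.0]
trimmed = trim_weights(raw, 3.0)
```

Note that trimming deliberately trades a small potential bias (trimmed cases no longer sum to their original weight total) for the reduction in sampling variance that follows from the smaller coefficient of variation.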
The tables below show the distribution of the student-level trimming factor STUTRIM1 among the jurisdictions participating in the 2000 state assessment, by grade (fourth and eighth), assessment subject (mathematics and science), and reporting population (non-accommodated and accommodated). Reporting populations differ by whether accommodations were offered to students with disabilities or limited English proficiency (SD/LEP). The non-accommodated reporting population, also known as the R2 reporting population, includes all non-SD/LEP students plus SD/LEP students from non-accommodated sessions. The accommodated reporting population, also known as the R3 reporting population, includes all non-SD/LEP students plus SD/LEP students from accommodated sessions.