5. VARIANCE OF THE LINKING FUNCTION

If the means and standard deviations used to construct the linking function in Equation (1) were known without error, the transformed value ŷ(x) could be used in the same manner as an observed value from the TIMSS assessment. That is, one could ignore the fact that ŷ(x) was based on a transformation. Thus, for example, if x were the mean proficiency of some state on NAEP, the predicted mean proficiency for that state on the TIMSS would be ŷ(x) = Â + B̂x, from Equation (1), where B̂ = σ̂_{T}/σ̂_{N} is the slope and Â = μ̂_{T} − B̂μ̂_{N} is the intercept, μ̂_{N} and σ̂_{N} being the mean and standard deviation of the U.S. NAEP sample and μ̂_{T} and σ̂_{T} those of the U.S. TIMSS sample. The variance of that predicted TIMSS mean proficiency would be simply

Var(ŷ(x)) = B̂^{2} Var(x).   (2)

However, the means and standard deviations used to construct Equation (1) are based on sample data and hence are subject to various sources of variability. This implies that the linking function also is subject to variability and that the variance of ŷ(x) given in Equation (2) is too small. There are (at least) four sources of variability that will affect the variance of the linking function. These include the following:

1. sampling variability in the NAEP and TIMSS estimates;
2. measurement error in the assessment of individual students;
3. model misspecification, evidenced by variation of the linking function across subpopulations; and
4. temporal shift between the 1995 TIMSS and the 1996 NAEP administrations.
Each of these components will be considered in turn. Prior to that, however, a general equation needs to be developed for the variance of ŷ(x) in terms of the observed data. This equation will serve as a basis for the application of the various components of variance listed earlier. Equation (1) expressed the linked value ŷ(x) as a function of the statistics μ̂_{N}, σ̂_{N}, μ̂_{T}, and σ̂_{T}, determined from the U.S. samples, and of the term x, assumed to be a statistic determined from the TIMSS data from a sample different from the U.S. NAEP and TIMSS samples. Since ŷ(x) is a nonlinear function of the various means and standard deviations, a precise derivation of the variance is not practical. However, since both the NAEP and TIMSS samples are large, Taylor series linearization provides a convenient large-sample approximation to the variance:

Var(ŷ(x)) ≈ (∂ŷ/∂θ)^{T} Σ (∂ŷ/∂θ),  θ = (μ̂_{N}, σ̂_{N}, μ̂_{T}, σ̂_{T}, x)^{T},
where the partial derivatives are evaluated at μ̂_{N}, σ̂_{N}, μ̂_{T}, σ̂_{T}, and x, respectively, the superscript T denotes matrix transpose, and Σ is the matrix

Σ = [ Σ_{XX}   0
      0^{T}    Var(x) ],
where Σ_{XX} is the covariance matrix of X = (μ̂_{N}, σ̂_{N}, μ̂_{T}, σ̂_{T})^{T}, and where the covariances between x and the elements of X are all zero since x is from a sample independent of those used to construct the estimates of μ̂_{N}, σ̂_{N}, μ̂_{T}, and σ̂_{T}.
Since

∂ŷ/∂x = B̂,

one has

Var(ŷ(x)) = (∂ŷ/∂X)^{T} Σ_{XX} (∂ŷ/∂X) + B̂^{2} Var(x).   (3)
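As an illustration of the delta-method computation in Equation (3), with Σ_XX taken as diagonal, the following sketch evaluates the quadratic form directly; every numeric input is hypothetical, chosen only to make the arithmetic concrete:

```python
# Delta-method variance of the linked value: gradient of y-hat with respect
# to X = (mu_N, sd_N, mu_T, sd_T), combined with a diagonal Sigma_XX, plus
# the independent term B^2 * Var(x). All inputs are hypothetical.

def var_link(x, var_x, mu_N, sd_N, sd_T,
             var_mu_N, var_sd_N, var_mu_T, var_sd_T):
    B = sd_T / sd_N                       # slope of the linking function
    grad = [-B,                           # d y-hat / d mu_N
            -B * (x - mu_N) / sd_N,       # d y-hat / d sd_N
            1.0,                          # d y-hat / d mu_T
            (x - mu_N) / sd_N]            # d y-hat / d sd_T
    diag = [var_mu_N, var_sd_N, var_mu_T, var_sd_T]
    quad_form = sum(g * g * v for g, v in zip(grad, diag))
    return quad_form + B ** 2 * var_x

# At x equal to the NAEP mean, only the mean-variance terms contribute.
v = var_link(x=272.0, var_x=1.2, mu_N=272.0, sd_N=36.0, sd_T=90.0,
             var_mu_N=1.0, var_sd_N=0.5, var_mu_T=4.0, var_sd_T=2.0)
```

Moving x away from the NAEP mean activates the standard-deviation terms, which is why the variance grows toward the tails of the proficiency distribution.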
Estimates of the elements of Σ_{XX} can be obtained by expressing each sample standard deviation in terms of the sample moments and applying the delta method to the result. Let X = (μ̂_{N}, σ̂_{N}, μ̂_{T}, σ̂_{T})^{T} and Σ_{XX} = Cov(X). Since the mean and standard deviation from the NAEP sample are independent of those from the TIMSS sample, and since a sample mean and a sample standard deviation are independent assuming normality, Σ_{XX} can be conveniently and credibly taken as a diagonal matrix with diagonal elements Var(μ̂_{N}), Var(σ̂_{N}), Var(μ̂_{T}), and Var(σ̂_{T}). As
∂ŷ/∂μ̂_{N} = −B̂,  ∂ŷ/∂σ̂_{N} = −B̂ (x − μ̂_{N})/σ̂_{N},

and

∂ŷ/∂μ̂_{T} = 1,  ∂ŷ/∂σ̂_{T} = (x − μ̂_{N})/σ̂_{N},

some algebra produces

Var(ŷ(x)) = Var(μ̂_{T}) + B̂^{2} Var(μ̂_{N}) + ((x − μ̂_{N})/σ̂_{N})^{2} [Var(σ̂_{T}) + B̂^{2} Var(σ̂_{N})] + B̂^{2} Var(x).   (4)
Since Var(ŷ(x)) depends on x and Var(x), it is convenient to reexpress Equation (4) as

Var(ŷ(x)) = K_{0} + K_{1} x + K_{2} x^{2} + B̂^{2} Var(x),   (5)

where

K_{2} = [Var(σ̂_{T}) + B̂^{2} Var(σ̂_{N})] / σ̂_{N}^{2},  K_{1} = −2 μ̂_{N} K_{2},  and  K_{0} = Var(μ̂_{T}) + B̂^{2} Var(μ̂_{N}) + μ̂_{N}^{2} K_{2}.
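Since the reexpression in Equation (5) only regroups Equation (4) by powers of x, the two forms can be checked against each other numerically. The coefficient names K_0, K_1, K_2 used here and all numeric inputs are illustrative:

```python
# Equation (5) collects Equation (4) into a quadratic in x plus B^2*Var(x).
# K_0, K_1, K_2 are the coefficient names used here; all inputs hypothetical.

def k_coeffs(mu_N, sd_N, sd_T, var_mu_N, var_sd_N, var_mu_T, var_sd_T):
    B = sd_T / sd_N
    K2 = (var_sd_T + B ** 2 * var_sd_N) / sd_N ** 2
    K1 = -2.0 * mu_N * K2
    K0 = var_mu_T + B ** 2 * var_mu_N + mu_N ** 2 * K2
    return K0, K1, K2

def var_link_eq4(x, var_x, mu_N, sd_N, sd_T,
                 var_mu_N, var_sd_N, var_mu_T, var_sd_T):
    B = sd_T / sd_N
    z2 = ((x - mu_N) / sd_N) ** 2
    return (var_mu_T + B ** 2 * var_mu_N
            + z2 * (var_sd_T + B ** 2 * var_sd_N) + B ** 2 * var_x)

K0, K1, K2 = k_coeffs(272.0, 36.0, 90.0, 1.0, 0.5, 4.0, 2.0)
B = 90.0 / 36.0
for x in (250.0, 272.0, 300.0):
    direct = var_link_eq4(x, 1.2, 272.0, 36.0, 90.0, 1.0, 0.5, 4.0, 2.0)
    poly = K0 + K1 * x + K2 * x ** 2 + B ** 2 * 1.2
    assert abs(direct - poly) < 1e-9   # the two forms agree for any x
```

The polynomial form is convenient because, once K_0, K_1, and K_2 are tabulated, the variance can be evaluated for any x and Var(x) without revisiting the component variances.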
Equation (4) and the equivalent Equation (5) form the basis of the variance estimate of ŷ(x). In the subsequent discussion, estimates of the successive components of Var(ŷ(x)) due to sampling, measurement error, model misspecification, and temporal shift will be derived, accompanied by a comparison of how the standard error of a linked estimate changes. As observed, the variance of ŷ(x) depends on the value of x and the value of Var(x). For convenience, the comparisons of the components of Var(ŷ(x)) will be for a typical value of Var(x), equal to the variance of the mean for the U.S. NAEP population. Additionally, two values of x will be used. The first, setting x equal to the U.S. overall NAEP mean, provides the smallest possible variance. The second, setting x equal to the 90th percentile of the NAEP proficiency distribution, provides an indication of how large Var(ŷ(x)) could get. The specific values to be used are shown in Table 2.
Table 2.—Values of Var(x) and x used for comparing variances of the linked estimate for grade 8
5.1 Component of Var(ŷ(x)) Due to Sampling

Because both NAEP and TIMSS are samples, the estimates of the statistics μ̂_{N}, σ̂_{N}, μ̂_{T}, and σ̂_{T} are subject to sampling variability. Estimates of sampling variability quantify the stability of the sample-based statistics by estimating how much each statistic would likely change had it been based on a different, but equivalent, sample of students selected in the same manner as the achieved sample. Traditional analysis procedures often assume that the observed data come from a simple random sample. That is, it is assumed that the observed values from different respondents are independent of each other and that these values are identically distributed. Such assumptions do not hold for data from complex sampling designs such as those used by NAEP and TIMSS. In fact, the complex sample designs of NAEP and TIMSS lead to variance estimates that are larger than the simple random sampling values. Both assessments use the jackknife procedure (see, e.g., Johnson and Rust 1992) to estimate the variance due to sampling. The aim of the jackknife is to simulate the repeated drawing of samples of individuals according to the specified sample design. Once the various replicate samples are available, it is straightforward to compute the statistic of interest, t, on each sample and, from these, obtain a variance estimate. Pairs of first-stage sampling units (FSSUs) are defined to model the sample design as one in which two first-stage units are drawn within each of a number of strata. The sampling variability of any statistic t is estimated as the sum of the components of variability that may be attributed to each of the FSSU pairs. The variance attributed to a particular pair of FSSUs is measured by recomputing the statistic of interest, t, on an altered sample.
The i^{th} altered sample is created by randomly designating the two members of the i^{th} FSSU pair as the first and second, respectively, eliminating the data from the first FSSU, and replacing the lost information with that from the second FSSU of the pair. The statistic of interest is then recomputed, producing the pseudoreplicate estimate t_{i}. The component of sampling variability attributable to the i^{th} pair of FSSUs is (t_{i} − t)^{2}. The estimated sampling variance of the statistic t is the sum of these components across the M FSSU pairs^{2}:
Var_{jk}(t) = Σ_{i=1}^{M} (t_{i} − t)^{2}.   (6)
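The paired-FSSU computation of Equation (6) can be sketched on toy data. The use of the sample mean as the statistic t, the data themselves, and the fixed (rather than random) first/second designation within each pair are simplifications for illustration only:

```python
def jackknife_variance(stat, pairs):
    """Equation (6): sum over FSSU pairs of (t_i - t)^2.

    pairs : list of (first_unit, second_unit), each a list of observations
    stat  : function mapping a flat list of observations to the statistic t
    """
    full = [ob for a, b in pairs for ob in a + b]
    t = stat(full)
    var = 0.0
    for i, (a, b) in enumerate(pairs):
        # Drop the first unit of pair i and replace its data with a second
        # copy of the data from the pair's other unit.
        altered = [ob for j, (c, d) in enumerate(pairs) if j != i for ob in c + d]
        altered += b + b
        var += (stat(altered) - t) ** 2   # pseudoreplicate contribution
    return var

mean = lambda v: sum(v) / len(v)
pairs = [([1.0, 3.0], [2.0, 4.0]), ([5.0, 7.0], [6.0, 8.0])]
jk = jackknife_variance(mean, pairs)
```

With these two pairs, each pseudoreplicate shifts the overall mean by 0.25, so the two squared deviations sum to 0.125.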
To estimate the sampling variance of the linking function, the jackknife procedure is applied to estimate the sampling variance for each of μ̂_{N}, σ̂_{N}, μ̂_{T}, and σ̂_{T}.^{3} These variance estimates are then plugged into the formula of Equation (4). The results are shown in Table 3, which gives the sampling variance values of the components of Var(ŷ(x)) in Equation (5).
^{2} The variance of a statistic based on a stratified sample is the sum of the variances within each stratum, each multiplied by constants reflecting the degrees of freedom of the within-stratum variance and various weighting factors. There is no further division by degrees-of-freedom adjustments. In the case of NAEP and TIMSS, the paired FSSU estimates each have a single degree of freedom, and the jackknife estimates are derived so that the weighting factors are identically equal to 1. See Wolter (1985, Section 4.5) and Johnson (1989, pages 315-316, 321-322).

^{3} Following accepted practice, the jackknife variance estimates were based only on the first plausible value (see Mislevy, Johnson, and Muraki 1992).
Table 3.—Components of Var(ŷ(x)) due to sampling for grade 8
Table 4 provides a comparison between the naive estimate of the variance of ŷ(x) from Equation (2) and the current estimate, which also accounts for the effect of sampling, for the values of Var(x) and x given in Table 2. The column headed "Percentage increase" gives the amount by which the addition of the sampling component increases the variance estimate.
Table 4.—Comparison of the naive estimate of Var(ŷ(x)) with the estimate including sampling error for grade 8
These results show that the inclusion of the sampling variability as a component of the variance of the linked estimate can substantially increase that variance estimate. The increases shown here are in accord with similar findings presented by Johnson, Mislevy, and Zwick (1990), who report a study in which the traditional estimate of the standard error of a linked estimate of the mean was smaller, by a factor of 1.6, than a standard error that properly took the sampling variance into account.
5.2 Component of Var(ŷ(x)) Due to Measurement Error

Both NAEP and TIMSS use IRT scaling models to summarize their data (see, e.g., Mislevy, Johnson, and Muraki 1992). IRT was developed in the context of measuring individual examinees' abilities. In that setting, each individual is administered enough items to permit a reasonably precise estimation of his or her ability, θ. Because the uncertainty associated with each estimate θ̂ is negligible, the distribution of θ, or the joint distribution of θ with other variables, can then be approximated using individuals' estimated abilities, θ̂, as if they were the true abilities. This approach breaks down in NAEP and TIMSS, where each respondent is administered relatively few items in a scaling area. The problem is that the uncertainty associated with individual θ̂s is too large to ignore, and the features of the θ̂ distribution can be seriously biased as estimates of the features of the θ distribution (see Mislevy, Beaton, Kaplan, and Sheehan 1992). "Plausible values" were developed as a way to estimate key population features consistently. The essential idea of plausible value methodology is to represent what the true proficiency of an individual might have been, had it been observed, with a small number of random draws from an empirically derived distribution of proficiency values that is conditional on the observed values of the assessment items and on background variables for each sampled student. These background variables are called conditioning variables.^{4} The random draws from the distribution can be considered to be representative values from the distribution of potential proficiencies for all students in the population with similar characteristics and identical patterns of item responses. The several draws from the distribution are different from each other in a way that quantifies the degree of precision in the underlying distribution of possible proficiencies that could have generated the observed performances on the items.
Both NAEP and TIMSS provide five sets of plausible values. Following Rubin (1987), the plausible values are regarded as forming five completed data sets, where the m^{th} data set consists of all information about each student along with the m^{th} plausible value for that student. Calculating a statistic, t, based on the m^{th} plausible value across all students provides an estimate, t_{(m)}, of t. A better estimate of t is t_{M}, the mean of the t_{(m)}. The variance of t_{M} consists of two components. The first component is the variance due to sampling subjects. There are five potential estimates of this variance, one for each plausible value, the m^{th} estimated as the jackknife variance of t_{(m)} according to Equation (6). While the best estimate of the sampling variance of t_{M} is the average of the five jackknife estimates, due to the heavy computational requirement of computing five jackknife variances, the typical practice used by NAEP and TIMSS is to simply use the jackknife variance for the first plausible value. That practice will be followed in this report. The second component of the variance of t_{M} is that which is due to not observing θ. This component is added to the sampling component in Equation (6) and is estimated by

(1 + M^{−1}) (M − 1)^{−1} Σ_{m=1}^{M} (t_{(m)} − t_{M})^{2}.   (7)
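The two components of the variance of t_{M} can be combined as follows; the five per-plausible-value estimates and the sampling variance below are hypothetical values for illustration:

```python
def pv_variance(t_m, sampling_var):
    """Variance of t_M, the mean of the per-plausible-value estimates t_(m).

    t_m          : list of the M estimates t_(m), one per plausible value
    sampling_var : jackknife variance of t_(1), from Equation (6)
    Returns (t_M, total variance); the measurement component follows
    Equation (7): (1 + 1/M) * sum((t_(m) - t_M)^2) / (M - 1).
    """
    M = len(t_m)
    t_M = sum(t_m) / M
    measurement = (1 + 1 / M) * sum((t - t_M) ** 2 for t in t_m) / (M - 1)
    return t_M, sampling_var + measurement

t_M, total = pv_variance([10.0, 11.0, 9.0, 10.5, 9.5], sampling_var=2.0)
```

When the five estimates agree exactly, the measurement component vanishes and the total reduces to the sampling variance alone.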
^{4} In its analysis, TIMSS essentially used a single conditioning variable, grade, within each country. NAEP used several hundred.

Table 5 gives the components of Var(ŷ(x)) in Equation (5) attributable to measurement error. It can be seen that these components are an order of magnitude smaller than the equivalent components for sampling error shown in Table 3.
Table 5.—Components of Var(ŷ(x)) due to measurement error for grade 8
Table 6 provides a comparison between the estimate of the variance of ŷ(x) based on the naive estimate plus the term accounting for sampling error and the current estimate, which also accounts for the effect of measurement error. Included in the table is the percentage increase in the size of the naive variance that would have been obtained if the measurement error (but not the sampling error) were added to the naive variance. As in Table 4, the table uses the values of Var(x) and x from Table 2.
Table 6.—Comparison of the estimate of Var(ŷ(x)) before and after including measurement error for grade 8
It can be seen that, while the measurement error provides a noticeable increase in the size of the naive variance estimate, the bulk of the overall variance is determined by the sampling error component.
5.3 Component of Var(ŷ(x)) Due to Model Misspecification

As discussed earlier, statistical moderation can produce markedly different links if carried out with different samples of students. To be useful, the link between NAEP and TIMSS should be the same for various subpopulations. That is, the function linking TIMSS to NAEP should be the same for boys as it is for girls, for members of various ethnic categories, and for students in public and private schools. To the extent that the link is consistent across the subpopulations, there is increased confidence in the goodness of the link. Tables 7A and 7B provide estimates of Â and B̂ from Equation (1) for subpopulations defined by gender, selected race/ethnicity (black, Hispanic), and school type (public, private). In each case, the link was formed using data only from that subpopulation. The tables also include values of ŷ(x) for the values of x equal to the U.S. mean and the 90th percentile, along with standard errors, computed from the subpopulation data, which include the naive, sampling, and measurement components of variance. Note that the values in the tables are somewhat biased due to the absence of conditioning variables related to these subgroups in the generation of plausible values from the TIMSS at grade 8. It is known (Mislevy, Beaton, Sheehan, and Kaplan 1992) that exclusion of conditioning variables leads to underestimation of differences between subgroup and overall means. Following Mislevy (1993), the bias in the subgroup estimate of ŷ(x) is of the order of (1 − ρ) times the difference between the subgroup and overall NAEP means, where ρ is the reliability of a form of the TIMSS instrument for the U.S. population, reported to be around .8 to .9. Nevertheless, these functions accurately reflect the reported TIMSS distributions for these subgroups.
Table 7A.—Parameters and linked estimates derived within subpopulation—grade 8 mathematics
Table 7B.—Parameters and linked estimates derived within subpopulation—grade 8 science
Examination of Tables 7A and 7B shows some variability in the parameter estimates across subgroups, particularly for the intercepts, Â. Additionally, the estimates of B̂ vary somewhat. However, the differences in ŷ(x) between subgroups, and between a subgroup and the total population, are invariably nonsignificant. This nonsignificance would appear to sanction the use of the overall linking function for the subgroups examined here. Nevertheless, the issue of the consequence of variability of the linking function across subgroups will be explored. In essence, variability of the linking function across subpopulations is an indication of model misspecification. That is, the linking function needs to include terms related to specific subpopulations. This was the approach adopted by Williams et al. (1995) in their linking of NAEP to the North Carolina End of Grade (NCEOG) mathematics test. In their study, they noted different relationships between the NCEOG and NAEP by gender and race. These differences were accounted for through the use of a prediction equation that included intercepts and slopes for those groups. A similar approach was adopted by Bloxom et al. (1995) in a linkage of scaled scores on the Armed Services Vocational Aptitude Battery (ASVAB) with NAEP. However, both the NCEOG and the ASVAB situations involved the construction of a linking function that would then be applied to individuals who are plausible members of the same population. That is, the NCEOG to NAEP link was derived on a sample of North Carolina students for application in North Carolina; the ASVAB to NAEP link was based on a sample of the population to which the ASVAB is normally administered. This is less clearly the case for the linking of NAEP to TIMSS, where the linking is performed on the combined U.S. population, but the results are to be applied to separate states.
Instead, it is reasonable to view the instability of the linking function across subgroups as a potential component of variance of the linking function. Suppose one has N subpopulations, which collectively constitute a partitioning of the population. For specificity, the 12 subpopulations formed by crossing gender by race/ethnicity (black, Hispanic, white+Asian+other) by school type (public, private) will be used. The selection of these specific subpopulations was made because they are key subgroups, and because the linking function could potentially differ across the subgroups. For subpopulation s, suppose the linking function is

ŷ_{s}(x) = Â_{s} + B̂_{s} x,

where Â_{s} and B̂_{s} are estimated solely from the data for subpopulation s. From Equation (3), one has

Var(ŷ_{s}(x)) = Var(μ̂_{T,s}) + B̂_{s}^{2} Var(μ̂_{N,s}) + ((x − μ̂_{N,s})/σ̂_{N,s})^{2} [Var(σ̂_{T,s}) + B̂_{s}^{2} Var(σ̂_{N,s})] + B̂_{s}^{2} Var(x).   (8)

Notice that ŷ_{s}(x) can be viewed as the conditional expectation of the linked estimate, conditional on membership in subpopulation s. Further, Var(ŷ_{s}(x)) in Equation (8) is the conditional variance. To emphasize this conditional relation, write

ŷ_{s}(x) = E(ŷ(x) | S = s)  and  Var(ŷ_{s}(x)) = Var(ŷ(x) | S = s),

where E denotes expectation and S stands for subpopulation. By standard probability theory, the following representation for the unconditional variance of ŷ(x) occurs:

Var(ŷ(x)) = E_{S}[Var(ŷ(x) | S)] + Var_{S}[E(ŷ(x) | S)],   (9)

where E_{S} and Var_{S} denote the expectation and variance taken across subpopulations. The first term of Equation (9) is

E_{S}[Var(ŷ(x) | S)] = Σ_{s} rf_{s} Var(ŷ_{s}(x)),   (10)

where, for example, the term

Σ_{s} rf_{s} Var(μ̂_{T,s})   (11)

arising from the expansion of Equation (10) is the weighted average of the subpopulation values of Var(μ̂_{T,s}), weighting by rf_{s}, the relative frequency of subpopulation s in the whole population. Approximating Equation (11) by Var(μ̂_{T}), the value for the complete population, and performing similar substitutions for the remaining terms in Equation (10) means that Equation (10) can be approximated by Equation (3). Consequently, Equation (9) becomes

Var(ŷ(x)) ≈ (∂ŷ/∂X)^{T} Σ_{XX} (∂ŷ/∂X) + B̂^{2} Var(x) + Var_{S}[E(ŷ(x) | S)].   (12)

Thus, the variance of ŷ(x) has acquired a second component, which measures instability (or mean-squared error) due to the variability of the linking function across subpopulations.
The value of this component is

Var_{S}[E(ŷ(x) | S)] = Σ_{s} rf_{s} [(A_{s} − Ā) + (B_{s} − B̄) x]^{2},   (13)

where A_{s} and B_{s} are the population values of the intercept and slope for subpopulation s and Ā and B̄ are their averages across the subpopulations. An estimate of this component is

Û(x) = Σ_{s} rf_{s} [(Â_{s} − Â_{•}) + (B̂_{s} − B̂_{•}) x]^{2},   (14)

where Â_{•} = Σ_{s} rf_{s} Â_{s} and B̂_{•} = Σ_{s} rf_{s} B̂_{s}. Note that even if A_{s} = Ā and B_{s} = B̄ for all s, so that the variance component in Equation (13) is equal to zero, the estimate from Equation (14) will be nonzero simply because it is based on sample values. Consequently, a correction to the estimate must be applied. Normal theory with linear statistics gives the expectation of Equation (14), under the hypothesis that the true component is zero, as approximately (N − 1)(d/D) times V(x), where N is the number of subpopulations, equal to 12 in this case, d/D is the ratio of d, the average design effect within a subpopulation, to D, the design effect for the whole population, and

V(x) = Σ_{s} rf_{s} Var(ŷ_{s}(x)),   (15)

with estimate V̂(x) that includes both the sampling and measurement error components. The design effect measures the impact of complex sample data collection designs, such as those used by NAEP and TIMSS, on the variance of a statistic. Specifically, the design effect is the ratio of the actual variance of the statistic, taking the data collection design into account, to the equivalent variance estimate obtained by ignoring the complex nature of the data caused by the sample design and by measurement error. Typically, the design effect is larger than 1. Additionally, it is possible that the design effects for subpopulations are smaller than those for the total population, implying that the ratio d/D could be smaller than 1. Experience based on NAEP, TIMSS, and other complex data sets suggests that the ratio could be as small as 0.5, implying that the multiplier (N − 1)(d/D) for the expected value of the estimate of the variance due to model misspecification could be as small as about 5. Table 8 gives the values of Û(x) and of the ratio Û(x)/V̂(x) for the values of Var(x) and x in Table 2. We see that in every case, Û(x)/V̂(x) is smaller than the factor 5, so that the estimate of the variance due to model misspecification is smaller than a reasonable estimate of its expected value under the hypothesis of no true misspecification component.
Furthermore, this implies that the variance estimate is much smaller than the critical value for, say, the 95% level of significance, which, for 5 degrees of freedom, is about 11. This indicates that the variance estimate does not exceed the value to be expected due to sampling and imputation variability under the hypothesis that the true component of Equation (13) is zero. Consequently, the component due to model misspecification in the variance of the link is taken as zero.
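The between-subpopulation component of Equations (13) and (14) can be sketched as follows, with hypothetical subgroup intercepts, slopes, and relative frequencies:

```python
def misspec_component(x, params, rel_freq):
    """Weighted variance, across subpopulations, of the subgroup linked
    values A_s + B_s * x about their weighted average (Equations 13-14).

    params   : list of (A_s, B_s) pairs, one per subpopulation
    rel_freq : relative frequencies rf_s of the subpopulations (sum to 1)
    """
    A_bar = sum(rf * A for (A, B), rf in zip(params, rel_freq))
    B_bar = sum(rf * B for (A, B), rf in zip(params, rel_freq))
    return sum(rf * ((A - A_bar) + (B - B_bar) * x) ** 2
               for (A, B), rf in zip(params, rel_freq))

# Identical subgroup links give a zero component; differing intercepts do not.
same = misspec_component(280.0, [(-180.0, 2.5)] * 3, [0.5, 0.3, 0.2])
diff = misspec_component(280.0, [(-182.0, 2.5), (-178.0, 2.5)], [0.5, 0.5])
```

As the text notes, the sample version of this quantity is positively biased even when the true component is zero, which is why the comparison against its null expectation is needed.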
Table 8.—Comparison of the component of variance due to model misspecification estimated by Û(x) with its expected value estimated by (N − 1)(d/D)V̂(x) for grade 8
5.4 Component of Var(ŷ(x)) Due to Temporal Shift

One disadvantage of using the actual TIMSS and NAEP data to construct a link is that TIMSS and NAEP were administered in different years. Any procedure that attempts to link 1996 NAEP scores to 1995 TIMSS scores, based only on the 1995 TIMSS and the 1996 NAEP samples, will suffer from an unavoidable confounding of secular change (the within-instrument change in achievement over time) with effects due to differences between the instruments. Estimation of the temporal effect of linking 1996 data to 1995 data is problematic, since any direct measure is lacking of the change in either NAEP or TIMSS measures of achievement between the 2 years. It is possible, by using related data (the NAEP long-term trend data from 1994 and 1996), to estimate the potential change in achievement as measured by NAEP between 1995 and 1996. However, it is impossible to estimate what the change in achievement would have been in the TIMSS countries in 1996. Adjustment for temporal trend would potentially adjust μ̂_{N} of the linking function by a prediction of the difference between the NAEP mean in 1996 and what the mean would have been in 1995. This difference is estimated by

Δ̂ = (1/2)(μ̂_{LT,96} − μ̂_{LT,94})(σ̂_{N}/σ̂_{LT,96}),   (16)

where μ̂_{LT,96} and σ̂_{LT,96} are the mean and standard deviation from the 1996 NAEP long-term trend assessment and μ̂_{LT,94} and σ̂_{LT,94} are the equivalent values from the 1994 long-term trend assessment. The factor of one-half reflects the interpolation of the 1995 value as the midpoint of the 1994 and 1996 results, and the ratio σ̂_{N}/σ̂_{LT,96} in Equation (16) adjusts for the fact that the standard deviations for the main NAEP assessments differ from those for the long-term trend assessments. The square of Equation (16) is added to the variance of μ̂_{N} in the estimate of the variance of the linking function. Since the variance of μ̂_{N} is multiplied by B̂^{2} in Equation (5), the value of this component of Var(ŷ(x)) is B̂^{2}Δ̂^{2} and is constant for all x. The values of this component for the two subjects are shown in Table 9.
Table 9.—Value of the component of Var(ŷ(x)) due to temporal shift for grade 8
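A sketch of this temporal-shift adjustment: half the 1994-to-1996 long-term trend change, rescaled to main-NAEP standard deviation units, with its square multiplied by B̂^{2} as the variance component. All inputs below are hypothetical, not the report's values:

```python
def temporal_shift_component(mu_lt96, mu_lt94, sd_lt, sd_N, sd_T):
    """Predicted 1995-to-1996 NAEP shift and its contribution to the
    variance of the linked value.

    The shift is half the 1994-to-1996 long-term trend change, rescaled
    from long-term trend to main-NAEP standard deviation units; its square
    enters the variance of the link multiplied by B^2 = (sd_T / sd_N)^2.
    """
    delta = 0.5 * (mu_lt96 - mu_lt94) * (sd_N / sd_lt)
    B = sd_T / sd_N
    return delta, B ** 2 * delta ** 2

delta, component = temporal_shift_component(
    mu_lt96=274.0, mu_lt94=272.0, sd_lt=40.0, sd_N=36.0, sd_T=90.0)
```

Because the shift adjusts a constant of the linking function, this variance contribution does not depend on x, matching the observation in the text.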
