Education Statistics Quarterly
Vol 2, Issue 1, Topic: Methodology
School-Level Correlates of Academic Achievement: Student Assessment Scores in SASS Public Schools
By: Donald McLaughlin and Gili Drori
 
This article was excerpted from the Introduction and Conclusions of the report of the same name. The sample survey data are from the Schools and Staffing Survey (SASS), the National Assessment of Educational Progress (NAEP), and assessments conducted by state education agencies (SEAs).
 
 

The Schools and Staffing Survey (SASS), conducted by the National Center for Education Statistics (NCES), offers the most comprehensive picture available of the education system in the United States. Initiated in 1987-88 and repeated in 1990-91 and 1993-94, SASS consists of surveys of districts, schools, principals, and teachers associated with a national sample of schools. It offers information on such issues as policies, programs, services, staffing, and enrollment at both the district and school levels, as well as principals' and teachers' backgrounds, training, experience, perceptions, and attitudes. Given the broad reach of SASS, it can speak to a variety of important educational research and policy questions. The value of SASS would be even greater, however, if the relationship between these measures and the level of achievement in schools were known. As noted by others (Boruch and Terhanian 1996; Kaufman 1996), by combining this survey information with data from other sources, SASS could more meaningfully inform debates over which factors relate to school effectiveness and could contribute to a broad-based evaluation of school improvement strategies.

The aim of this report is to show the potential value of a linkage between SASS and data on student academic achievement. To achieve this aim, we proceed in two stages. First, we match the 1993-94 SASS data with state reading and mathematics assessment scores for public schools in 20 states, adjusting for between-state differences in achievement scales by using State NAEP (the state-by-state component of the National Assessment of Educational Progress). Second, by combining these data sources, we identify school-level correlates of student achievement in a broad sample of American public schools.
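The first stage, adjusting for between-state differences in achievement scales, can be illustrated with a simple linear linkage. This is only a sketch under assumed numbers: the state scores and NAEP parameters below are invented, and the report's actual linkage also drew on correlations between State NAEP and state assessment school means.

```python
# Minimal sketch of a linear linkage: place each state's school-level
# assessment means onto a common NAEP-like scale by matching the
# state's NAEP mean and standard deviation. All numbers are invented.
import statistics

def link_to_naep(school_means, naep_mean, naep_sd):
    """Linearly rescale one state's school means so that, across the
    state, they have the state's NAEP mean and standard deviation."""
    m = statistics.mean(school_means)
    sd = statistics.pstdev(school_means)
    return [naep_mean + naep_sd * (x - m) / sd for x in school_means]

# Two hypothetical states whose assessments use very different scales.
state_a = [310, 330, 350, 370]      # state A's own scale
state_b = [52.0, 55.0, 58.0, 61.0]  # state B's own scale

linked_a = link_to_naep(state_a, naep_mean=212, naep_sd=10)
linked_b = link_to_naep(state_b, naep_mean=218, naep_sd=9)
```

After linking, school means from different states are expressed on one scale, so between-state differences in achievement are preserved rather than discarded.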

We model the relationship between a variety of SASS school-level responses and average student assessment scores at the school level. In our model, average student achievement in a school is related to student background factors, school organizational features, teachers' professional characteristics, and school climate. Of particular interest in this study are the relationships among student achievement, average class size, and the school's behavioral climate. Overall, we investigate relationships among these measures in three types of schools—1,123 public elementary schools, 496 public middle schools, and 595 public high schools—in 20 states. The data are analyzed using correlational, multiple regression, and structural equation model analyses.
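The multiple regression step can be sketched as follows. The data, predictors, and coefficients below are invented for illustration; the report's actual models include more factors and a structural equation component.

```python
# Hypothetical sketch of the regression analysis: school mean scores
# modeled on school-level predictors. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200
class_size = rng.normal(24, 4, n)       # average class size per school
pct_poverty = rng.uniform(0, 100, n)    # student background indicator
climate = rng.normal(0, 1, n)           # behavioral climate index

# Invented "true" relationship: smaller classes and better climate help.
score = (250 - 0.8 * class_size - 0.3 * pct_poverty + 2.0 * climate
         + rng.normal(0, 5, n))

# Ordinary least squares via lstsq; column of ones gives the intercept.
X = np.column_stack([np.ones(n), class_size, pct_poverty, climate])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
```

The fitted coefficients recover the signs of the simulated effects: negative for class size and poverty, positive for climate.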



Although the analyses reported here merely scratch the surface of what these data could support, they reveal a meaningful pattern of relationships between school-level factors and assessment scores.

Class size

The clearest result with respect to correlates of achievement is that average achievement scores are higher in schools with smaller class sizes. This result, obtained from structural equation modeling using both state assessment data and NAEP adjustments for between-state variance in achievement, is consistent across grade levels, although it is significant only in middle and secondary schools. While there are alternative causal explanations for this finding, such a finding in a large sample of public schools in 20 states is an important corroboration of the controlled research results that indicate that class size makes a difference.

The positive relationship between small classes and achievement was stronger for secondary schools than for elementary schools. In secondary schools, achievement was positively associated both with larger school size and with smaller class size. An important aspect of the relationship between class size and achievement, shown by the comparison of results with and without between-state variance components, is that it is primarily a between-state phenomenon. Restricting the study to within-state comparisons and then aggregating the results across 20 states yields much less evidence of a class-size relationship to achievement scores. This may be due to state policy-related limitations on variation in class sizes.

School climate

Substantive findings were not limited to class size. There was limited evidence of a positive relationship between teachers' perceptions of the school's behavioral climate and achievement scores. In particular, this relationship was only statistically significant when between-state variation was omitted from the data; and although all three analytical methods found it to be significantly positive in middle schools, it was not statistically significant in the structural equation analyses in elementary and high schools.

Conclusion

Based on these findings, one cannot avoid the conclusion that combining the SASS data with a school-level student achievement measure has the potential for addressing important policy questions about school-based strategies for improving student performance. Because the data are not longitudinal, causal inferences must be treated much more tentatively than conclusions based on data on the achievement gains of a specified set of students over time. Also, because the data are school means, they cannot address the factors that differentially affect the achievement of different students in the same school. Nevertheless, findings from analyses of the SASS student-achievement subfile, based on over 2,000 schools in 20 states, can contribute to the overall educational policy database.



This report demonstrates both the potential value of combining SASS with school-level assessment data and certain limitations of the restricted set of analyses reported here.

Feasibility and reliability of this approach

The primary conclusion reached in this study is that the strategy of matching school-level assessment scores to a national survey (1) is feasible and not costly (because the data are readily available) and (2) leads to valid and reliable conclusions about correlates of public school achievement across much of the United States. The additional step of linking the database to State NAEP to capture between-state achievement variation is also feasible and not costly and provides additional informational value.

It is clear from these analyses that between-state variation in achievement and in its correlates is an important component of the national database on education, because the contexts within states reduce the variance on key factors to the point that important relationships disappear. In a sense, that is the goal of many state policies—to provide resources to schools in such a way that students in all schools in a state have equal opportunities to achieve at high levels. However, in this database of 20 states, a quarter to a half of the variance in school sizes and class sizes is between states, and a third or more of the variance in percentage of minority enrollment is between states (table A). Studies that focus purely on variation between schools within states will miss the effects of these factors on educational achievement.
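The decomposition behind Table A can be sketched as the between-state share of the total between-school sum of squares. The two-state data below are invented purely to show the computation.

```python
# Sketch of the variance decomposition in Table A: what share of the
# between-school variance in a measure lies between states? The state
# labels and class-size values are invented for illustration.
import statistics

def between_state_share(values_by_state):
    """Between-state sum of squares as a fraction of the total
    sum of squares across all schools."""
    all_vals = [v for vals in values_by_state.values() for v in vals]
    grand = statistics.mean(all_vals)
    total = sum((v - grand) ** 2 for v in all_vals)
    between = sum(len(vals) * (statistics.mean(vals) - grand) ** 2
                  for vals in values_by_state.values())
    return between / total

class_sizes = {
    "StateA": [18, 19, 20, 21],   # uniformly small classes
    "StateB": [27, 28, 29, 30],   # uniformly large classes
}
share = between_state_share(class_sizes)  # most variance is between states
```

In this extreme example nearly all of the class-size variance is between states, so a purely within-state analysis would see almost no class-size variation to correlate with achievement.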

The methods used in this report focused on overall correlates of achievement, including between-state variation, but comparison with analyses of within-state relationships indicates the potential value of applying a multilevel analysis to these data. No state-level variables were included in this analysis, but combining this database with information on the educational policies of these 20 states, in a hierarchical linear structural equation model, would provide the basis for addressing many educational policy issues.

Generalizability of between-state achievement measures across grade levels

A positive methodological finding was the generalizability of the between-state achievement measures across grade levels. Although state assessment scores were available for grades 3 to 11, NAEP reading scores for individual states were only available for grade 4 in 1994. If the ordering of states in reading achievement changed substantially from grade 4 to grades 8 and 11, then the results of overall analyses of middle school and high school data would be diluted by linkage error. This dilution should not affect the within-state analyses, however.
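The stability check implied here, whether states keep roughly the same ordering in mean achievement across grades, amounts to a rank correlation of state means. The values below are invented.

```python
# Sketch of the linkage-stability check: if states keep the same
# ordering of mean achievement across grades, a grade-4 NAEP linkage
# can reasonably be carried to higher grades. State means are invented.
def rank(values):
    """Return the rank (0 = lowest) of each value, assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(x, y):
    """Spearman rank correlation for untied data."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

grade4 = [212, 205, 220, 198, 215]   # hypothetical state NAEP means, grade 4
grade8 = [261, 250, 268, 247, 263]   # same states at grade 8: same ordering
rho = spearman(grade4, grade8)       # near 1.0 means little linkage error
```

A rank correlation near 1.0 across grades supports extending the grade-4 linkage; a low correlation would signal dilution of the between-state results by linkage error.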

The extension of the NAEP adjustment proved valid, in that the findings for secondary schools, using the between-state data, are as meaningful as the findings for elementary schools. This conclusion is not surprising, given the very high correlation of State NAEP means in different grades and subjects, but its support in this study may suggest new uses of State NAEP data in conjunction with state assessment data.

Table A.—Percentage of between-school variance between states

SOURCE: U.S. Department of Education, National Center for Education Statistics, Schools and Staffing Survey (SASS), "School Questionnaire" and "Teacher Questionnaire," 1993-94; and individual state education agencies (SEAs) in 20 states, state reading and mathematics assessment scores for public schools, 1993-94. (Originally published as table 7 on p. 42 of the complete report from which this article is excerpted.)

Low reliability of teacher qualifications data for school-level analyses

A limitation on the validity of aggregating teacher data for school-level analyses became apparent in the findings concerning teacher qualifications (average years of teaching experience and percent having a master's degree). These measures, unlike the teachers' responses to questions about school policies and school behavioral climate, had very low reliability as measures of the school, because there was relatively little systematic between-school variation: most of the variation was between teachers at the same school. This problem was manifest in the low intercorrelation between these measures; and, as a result, preliminary analytical findings concerning the relationships of this teacher qualifications factor to school-level achievement were uninterpretable. Ultimately, the decision was made to omit this factor from the analyses reported here.
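The reliability problem described above can be sketched with the standard formula for the reliability of a group mean: true between-school variance divided by the total variance of a mean of n sampled teachers. The variance components below are invented to illustrate the contrast.

```python
# Sketch of why school-mean teacher measures can be unreliable: when
# most variation is between teachers within a school, the mean of a
# few sampled teachers is a noisy measure of the school itself.
# The variance components below are invented for illustration.
def school_mean_reliability(var_between, var_within, n_teachers):
    """Reliability of a school mean based on n sampled teachers:
    between-school variance over total variance of the sample mean."""
    return var_between / (var_between + var_within / n_teachers)

# Teacher experience: little systematic between-school variation.
low = school_mean_reliability(var_between=1.0, var_within=30.0, n_teachers=4)
# Behavioral climate: teachers in the same school largely agree.
high = school_mean_reliability(var_between=8.0, var_within=4.0, n_teachers=4)
```

With mostly within-school variation, the school mean is unreliable even after averaging several teachers, which is consistent with the decision to drop the teacher qualifications factor.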

Logical constraints on correlational data

Finally, although the data are purely correlational, there are logical constraints, such as that school factors probably do not cause differences in student background characteristics in the short term. Interpretation of the results of structural equation modeling in terms of hypothetical path models can lead to fruitful suggestions for avenues of research and policy development.



Three broad areas of research stemming from this study appear to be fruitful: (1) hierarchical analyses to examine the relationships between state education policies and the SASS correlates of achievement; (2) development of a measure of a school's achievement gains over time, which can be associated with SASS measures; and (3) further refinement of the linkage functions between state assessments and NAEP.

State education policies and SASS correlates of achievement

The findings of this study clearly indicate different patterns of correlates of achievement within states and between states. Schools in the same state tend to operate within common frameworks of funding, staff accreditation, curriculum, testing, and school reform programs. With uniformity in these aspects of education, variations in other factors are more likely to manifest correlations with achievement. On the other hand, to the extent that state frameworks affect achievement outcomes, between-state correlates of achievement can emerge. Separating these patterns requires an analysis methodology that simultaneously models the within- and between-state relationships among variables and takes measurement error into account. With such a methodology, and with the addition of a database of quantitative indicators of relevant state policies, the SASS student achievement subfile would increase in value.

School achievement gains over time

Every school addresses the needs of a different student population with different resources, and it is therefore unfair to hold all schools accountable to the same achievement standard. However, a number of states are turning to reform criteria that base decisionmaking on measures of gains in achievement over years. Although SASS cannot easily add longitudinal student growth data, it is certainly feasible to add other years' school-level achievement data to the subfile. Specifically, the addition of 1997-98 reading scores, linked to the 1998 fourth- and eighth-grade State NAEP reading assessment and to Common Core of Data (CCD) estimates on changing enrollment patterns and resources over the intervening years, would provide the basis for identifying SASS factors (measured in 1994) that are predictive of gains in achievement. Of course, states continue to develop and refine assessment systems, and the state assessment scores for a school in 1998 may not be equivalent to scores obtained in 1994, so linkage of measures of achievement gains over time to repetitions of State NAEP is an essential requirement for the development of a longitudinal database.

The power of the database for longitudinal analyses can be greatly enhanced with the addition of the next administration of SASS. If a subsample of schools included in SASS in 1994 is also included in 2000, then using the 2000 State NAEP assessment for adjustment of mathematics scores would enable matching of longitudinal changes in SASS school-based factors with longitudinal changes in achievement, controlling for longitudinal changes in student background factors.

Linkage between state assessments and NAEP

A third line of research would focus on improving the achievement measures included in the SASS student achievement subfile. The linkages used for the analyses presented in this report were based entirely on the means, standard deviations, and correlations between State NAEP and state assessment school means. The errors in these linkages can be diminished significantly by more detailed analysis of the relationships among the scores. In particular, current research by NCES has found that linkages to NAEP can be improved by considering nonlinear terms and by including demographic indicators. For example, all state reading assessments are sensitive to racial/ethnic differences, but some are more sensitive than others. Their sensitivities could be matched to NAEP's measurement of the distribution of racial/ethnic achievement differences by explicitly including that matching factor in the NAEP adjustment step in constructing the SASS school-level achievement score. The result would be increased comparability of within-state variation in the achievement measure across states.
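The enrichment described here, adding nonlinear and demographic terms to the linkage, can be sketched by comparing a purely linear fit with a richer regression on simulated data. All data-generating values below are invented; this is not the NCES linkage procedure itself.

```python
# Sketch of an enriched linkage: predict NAEP-scale school means from
# state assessment means using a nonlinear term and a demographic
# indicator, instead of a purely linear rescaling. Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 300
state_score = rng.normal(0, 1, n)      # standardized state assessment mean
pct_minority = rng.uniform(0, 100, n)  # school demographic indicator

# Invented "true" relationship with curvature and a demographic term.
naep_mean = (210 + 9 * state_score - 1.5 * state_score ** 2
             - 0.05 * pct_minority + rng.normal(0, 2, n))

def fit(X, y):
    """Least-squares fit; return coefficients and mean squared error."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, float(np.mean(resid ** 2))

X_linear = np.column_stack([np.ones(n), state_score])
X_rich = np.column_stack([np.ones(n), state_score,
                          state_score ** 2, pct_minority])
_, mse_linear = fit(X_linear, naep_mean)
_, mse_rich = fit(X_rich, naep_mean)   # richer model fits better
```

When the true relationship is nonlinear and demographically moderated, the richer linkage model reduces the residual error, which is the sense in which such terms can diminish linkage error.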



Boruch, R.F., and Terhanian, G. (1996). "So What?" The Implications of New Analytic Methods for Designing NCES Surveys. In G. Hoachlander, J.E. Griffith, and J.H. Ralph (Eds.), From Data to Information—New Directions for the National Center for Education Statistics (NCES 96-901). Washington, DC: U.S. Government Printing Office.

Kaufman, P. (1996). Linking Student Data to SASS: Why, When, How. In J.E. Mullens and D. Kasprzyk (Eds.), The Schools and Staffing Survey: Recommendations for the Future (NCES 97-596) (pp. 53-65). U.S. Department of Education. Washington, DC: U.S. Government Printing Office.


Data sources:

NCES: Schools and Staffing Survey (SASS), "School Questionnaire" and "Teacher Questionnaire," 1993-94; and National Assessment of Educational Progress (NAEP) 1994 Reading Assessment, and 1992 and 1996 Mathematics Assessments.

Other: State reading and mathematics assessments for public schools, conducted in 1993-94 by state education agencies (SEAs) in the following 20 states: Alabama, California, Delaware, Florida, Georgia, Hawaii, Kentucky, Louisiana, Maine, Massachusetts, Michigan, Montana, New Hampshire, New York, Pennsylvania, Rhode Island, Tennessee, Texas, Utah, and Washington.

For technical information, see the complete report:

McLaughlin, D., and Drori, G. (2000). School-Level Correlates of Academic Achievement: Student Assessment Scores in SASS Public Schools (NCES 2000-303).

Author affiliations: D. McLaughlin and G. Drori, American Institutes for Research (AIR).

For questions about content, contact Andrew Kolstad (andrew.kolstad@ed.gov).

To obtain the complete report (NCES 2000-303), call the toll-free ED Pubs number (877-433-7827), visit the NCES Web Site (http://nces.ed.gov), or contact GPO (202-512-1800).

