Does Money Matter? An Empirical Study Introducing Resource Costs and Student Needs to Educational Production Function Analysis 
Corrine Taylor
Department of Economics

Introduction
Do expenditures on school resources have a positive effect on student outcomes? This question is important to many audiences: parents of school-aged children; citizens concerned about the effectiveness of their tax dollars; educators trying to improve student outcomes; and state policymakers charged with developing fair school finance formulas. Despite thirty years of research by economists, sociologists, and educational researchers, beginning with the Coleman Report (1966), this question still has no definitive answer.
Most economic analyses take an "educational production function" approach. These studies use econometric techniques to relate educational outcomes (e.g., students' academic achievement) to school inputs while controlling for other contributions, such as those of the students themselves, their families, peers, and communities. Within this broad framework, educational production function studies exhibit a wide range of empirical approaches.^{1} They vary in their choice and measurement of educational outcomes, explanatory variables of interest, and control variables. They also differ in their geographical scope and their unit of analysis.
Findings from these studies are as mixed as their empirical approaches are varied. Some studies estimate large, positive effects of school inputs on student outcomes; others find little or no effect; still others conclude that additional school resources are inversely related to student outcomes. The best-known result of this vast literature is Hanushek's (1986, 1989) conclusion of "no strong or systematic relationship between school expenditures and student performance." Hanushek's finding is based on his syntheses of more than thirty separate educational production function studies.^{2} A more recent synthesis by Hedges, Laine, and Greenwald (1994) challenges the validity of the analytical method of "vote counting" employed by Hanushek. Using the same primary studies as Hanushek's 1989 analysis, but a more sophisticated synthesis methodology known as "meta-analysis," Hedges, Laine, and Greenwald reach the opposite conclusion.^{3} They find a statistically significant and economically substantial positive relationship between school inputs and student outcomes.
The relevance of the findings from these syntheses depends not only on the quality of their methodological approaches but, more importantly, on the quality of the primary research studies. In reviewing the primary studies considered in these syntheses, I find that none adequately accounts for across-district variations both in the resource costs of educational services (notably teacher compensation) and in the proportion of students with special needs, who require additional, more costly services.
These variations in resource costs and student needs are significant. The power of school districts to purchase a standard "market basket" of educational resources varies by twenty to forty percent within states and by as much as forty percent across states (Chambers, 1981; McMahon, 1995). Student needs vary widely across districts as well, with the proportion of special-needs students approaching fifty percent in some large urban school districts (Odden & Picus, 1992). I expect that a stronger relationship between student achievement and school expenditures will emerge after accounting for these resource-cost and student-need differentials.
To test this hypothesis, I use a unique data set merged from three high-quality, national data sources: the National Education Longitudinal Study of 1988, the Common Core of Data, and a district-level teacher cost index.^{4} I specify and estimate a value-added student achievement model for which my explanatory variable of interest is per-pupil expenditures. I find that the estimated effects of per-pupil expenditures on high school students' academic achievement are consistently positive and statistically significant. However, these effects do not increase appreciably when the measure of expenditures is corrected to account for resource-cost differentials or when differences in the proportions of special-needs students are taken into account.
The remainder of this paper is organized as follows: I first present my conceptual model and describe the data sources, sample, and variables used in my empirical analysis. Next, I explain how I conducted my estimations, and present and discuss the results. Lastly, I summarize my findings and present suggestions for future research.
Conceptual Framework
Educational Production Function Studies
My conceptual model is the basic value-added, reduced-form specification of the educational production function presented in Hanushek's (1979, 1986) reviews. The educational outcome of interest is academic achievement. An individual student's achievement at time t (A_{t}) is modeled as a function of the student's prior achievement (A_{t*}), other student characteristics and effort (I), and the influences of the student's family (F), peers (P), school (S), and community (C) during the period between t* and t. That is,
A_{t} = f(A_{t*}, I_{t-t*}, F_{t-t*}, P_{t-t*}, S_{t-t*}, C_{t-t*}).

The effects of the school inputs on achievement are of primary interest in educational production function analyses. The types of school inputs considered in these analyses depend on the policy questions being addressed. Studies that focus on how schools allocate their funds typically consider teacher/pupil ratios and teachers' education levels and years of experience as the school inputs. My policy interests involve the equity of school finance formulas; hence, I consider schools' fiscal resources as the school input of interest.
The efforts of states to provide more equitable educational opportunities and student outcomes by reducing across-district disparities in schools' fiscal resources inspired my two primary research questions: 1) Is there a positive, systematic relationship between student performance and schools' fiscal resources? and 2) How does the strength of that relationship depend on the precise measure of fiscal resources? Specifically, is the relationship between student achievement and per-pupil expenditures (PPEs) stronger when the PPE measure reflects the costs of educational services and the population of special-needs students? If this is the case, then states would be more likely to achieve their student equity objectives by attempting to equalize not nominal per-pupil expenditures, but rather per-pupil expenditures adjusted for costs and student needs.
Variations in Costs
One problem in educational production function studies that link schools' fiscal resources to student outcomes is that the costs of equivalent educational services vary widely across districts. Researchers estimate that these costs vary by twenty to forty percent within states and up to forty percent across states (Chambers, 1981; McMahon, 1995). In studies that ignore such differential resource costs, disparate outcomes for districts with identical expenditure levels seemingly lend support to the notion that money does not matter. In fact, higher student achievement should be expected in low-cost districts, which, for the same nominal expenditure level, can purchase more or higher-quality real resources than high-cost districts can afford, all else being equal.
One recent production function study does attempt to account for variations in education costs by location. William Sander (1993) adjusts his expenditure and income variables by a cost-of-living index developed by Walter McMahon (1988) and finds that teacher-related spending is positively related to ACT scores in Illinois. Although Sander's study represents an improvement over the prior literature, cost-of-living adjustments do not adequately account for educational price differentials.
The cost of living is but one factor affecting the attractiveness of a school district as a place to live and work. Other characteristics, including the size of the school district, the types of students served, the crime rate, the level of pollution, the climate, access to medical facilities, availability of recreational opportunities, and consumption opportunities, also affect the attractiveness of districts, and ultimately affect the salaries required to attract and retain individuals with specific professional characteristics (Chambers, 1981). A cost-of-living adjustment fails to adequately account for variations in the salaries of school personnel due to differences in job and regional characteristics. Since personnel costs comprise at least 80 percent of school expenditures, and since variations in personnel costs dominate the pattern of cost differences across districts, it is important to account for them (Chambers and Fowler, 1995).^{5}
While a number of approaches have been taken in efforts to develop an index of personnel costs (see Chambers, 1981, pp. 45-52), Chambers argues that the most appealing approach is based on the hedonic wage model. The theoretical framework, established by Lucas (1972), maintains that differential wages are determined through a simultaneous process of matching the attributes of individual employees with the working conditions offered by employers. In its application to the market for school personnel, hedonic wage theory recognizes that differences in the characteristics of school districts require different salary levels to attract the types of personnel needed to provide a given level and quality of educational services across districts.
The personnel cost index indicates the relative cost of employing workers with similar skills in similar jobs in different environments. The different environments are characterized by district and regional factors that are beyond the control of local school decision-makers (Chambers, 1981, p. 63).^{6} The types of district and regional factors considered reflect the overall quality of the environment within which the individual works and lives, as well as the condition of the labor market in which prevailing wages and employment levels are determined. Thus, a personnel cost index accounts for variations in district and regional characteristics, controlling for personal and job assignment characteristics.
Adjusting expenditures by a personnel cost index allows for more meaningful comparisons of PPE levels across districts that face different resource costs. We would expect that cost-adjusted expenditures better capture the quantity and quality of the educational services purchased, and that such "real" measures should be more closely related to student performance than the typically considered "nominal" measures.
Variations in Student Needs
In educational production function analyses for which the observations are individual students, the ideal measure of a school's fiscal inputs would be the dollars (adjusted to reflect resource costs) spent on each individual student. However, school expenditures are most accurately measured (and often only available) at the district level and are difficult to allocate accurately to schools, classrooms, or individual students. Hence, whether the unit of analysis is individual students, schools, or districts, most analyses that focus on fiscal resources simply use district-level PPEs (total district expenditures divided by the total number of students in the district) as the measure for school inputs. Just as nominal expenditure levels make for poor comparisons across districts with different resource costs, simple PPEs make for poor comparisons across districts with different proportions of special-needs students.
The distribution of special-needs students, including special education, compensatory education, and limited English proficiency (LEP) students, is not uniform across school districts. The incidence of students with physical and mental handicaps varies widely across states and districts. Large, urban districts and small, rural districts tend to have higher proportions of students for whom English is not the primary language. Urban and rural areas also tend to serve a higher proportion of students living in poverty (Odden and Picus, 1992). The costs of providing services to these special-needs students vary depending on such factors as the number and types of students with special needs, the size of the school, and the kinds of services provided. In general, though, studies estimate that special education programs are about 2.3 times as costly as regular programs (Kakalik et al., 1981; Moore, Strang, Schwartz, and Braddock, 1988; Chaikind, Danielson, and Brauen, 1993), and compensatory and LEP programs are at least 20 percent more costly (Odden and Picus, 1992; Parrish, Matsumoto, and Fowler, 1995).
A variety of federal and state aid programs are designed to help districts offset the additional costs of providing extra services for special-needs students. Under Chapter 1 of the Elementary and Secondary Education Act (ESEA), the federal and state governments provide extra funds to districts for compensatory education. Title VII of the ESEA makes funds available for bilingual education programs. The federal Education for All Handicapped Children Act mandates and helps fund special education programs. Analyses of expenditures that include these additional funds should also reflect the size of the special-needs population for whom these funds are provided.
Because the distribution of special-needs students varies widely among school districts, simple comparisons of PPEs across districts fail to reflect differences in school resources available for the average student. Districts with smaller proportions of the more costly special-needs students, in effect, have more money to spend on the average student than do districts with higher proportions of these students, ceteris paribus. Hence, in educational production function studies relating school expenditures to student achievement, control variables for the proportion of special-needs students in each district need to be included in the regressions.
Hypothesis
Figures 1-3 show how I expect these variations in resource costs and student needs to affect the relationship between student achievement and school expenditures. Figure 1 is a stylized representation of Hanushek's conclusion that there is no relationship between student achievement and school expenditures. Figure 2 illustrates my hypothesis. I expect that districts with higher levels of student achievement and lower nominal expenditures (upper left portion of the graph) face lower costs of education and have relatively fewer special-needs students. Under these conditions, the adjusted measure of PPEs would be higher than the nominal measure. (The arrows represent the change in the PPE measure from nominal to adjusted.) Similarly, I expect that districts with lower levels of student achievement and higher nominal expenditures (lower right portion of the graph) face higher costs of education and serve a higher proportion of special-needs students. For these districts, the adjusted measure of PPEs would be lower than the nominal measure. If my expectations are correct, then a (larger) positive relationship between student achievement and school expenditures should emerge as the measure of expenditures is adjusted to account for these differences in resource costs and student needs (see figure 3).
Empirical Model
Data Sources
This study uses data merged from two large data sets and a smaller data file, each released by the National Center for Education Statistics (NCES). The first source is the restricted-use version of the National Education Longitudinal Study of 1988 (NELS), a general-purpose panel study that surveyed and tested eighth graders from about 1,000 public and private middle schools in the spring of 1988 and followed these students through high school. The first three waves of NELS include scores on cognitive tests administered to students in 1988, 1990, and 1992, as well as information from questionnaires administered to students, their parents, teachers, and school administrators over the same time period (Ingels et al., 1994).
The second source is the Common Core of Data (CCD), an annual, comprehensive database containing descriptive data on all public elementary and secondary schools and school districts in the United States. The CCD also contains enhanced financial data at the district level for fiscal years 1990, 1991, and 1992. Additionally, the CCD contains demographic indicators derived from special tabulations for school districts from the 1990 Census (NCES, 1995).
The third, smaller data source is a national, district-level teacher cost index (TCI) developed by Jay Chambers of the American Institutes for Research. Chambers's TCI reflects across-district variations in nondiscretionary resource costs of teacher services. Based on a hedonic wage model, the TCI was created using survey data from over 40,000 public school teachers who participated in the NCES's Schools and Staffing Survey for school year 1990-91. Chambers's TCI is the only nationwide, district-level index available that takes into account both the factors that underlie differences in the cost of living and variations in other teacher and school attributes that are within local control (Chambers and Fowler, 1995). Appendix A describes the construction of the TCI.
Sample
My sample is drawn from those students who participated in all of the first three waves of the NELS panel study (16,489 students). I consider only students attending public schools (11,598) because they are the only students to whom I can assign reliable, comparable expenditure data from the CCD.^{7} I further refine my sample to include only students who never dropped out of school (11,503) and who attended the same high school in both 1990 and 1992 (11,167).
These restrictions are imposed because I want to consider only those students who are consistently associated with school resources at particular schools. The disadvantage is that these students constitute a more stable student body than is reflected in the total student population. To the extent that dropout rates, transfer rates, or participation in all three waves of the NELS survey are systematically related to PPE levels, my findings are not generalizable to the entire student population; rather, they must be qualified to apply to this more stable group of students.
I further eliminate observations with missing data in three critical areas: test scores, special-needs students, and TCI values. I lose a substantial number of observations by considering only students with complete test score data in both 1988 and 1992; this restriction leaves 7,854 students.^{8} Eliminating observations lacking CCD data on the number of special-needs students and observations with missing TCI values leaves a sample size of 6,990. Missing values for some control variables reduce the number of observations used in the regression computations to 5,955.^{9}
Variables
The dependent variable in my regression equations is the student's 1992 (senior year for most of the students) score on the NELS mathematics test. The specific measure I use for mathematics achievement is the item response theory (IRT) theta score, which is standardized to a mean of 50 and a standard deviation of 10. To eliminate floor and ceiling effects, three forms of the mathematics test were administered to the students in 1992, depending on their prior achievement. Students who performed in the highest quartile on the 1990 test were given the most difficult version of the 1992 exam; those in the lowest quartile in 1990 received the easiest version of the 1992 exam; and the rest of the students received the test of medium difficulty in 1992. Item response theory was used to calculate scores that could be compared across test forms that differed across the years and across the students in a given year. The theta score, which is standardized across the three waves of testing, is the best score to use when assessing gains in cognitive skills. (See Ingels et al., 1994, for more information about NELS testing and IRT scoring.)
The independent variables include controls for achievement in eighth grade, in order to analyze the gain in cognitive outcomes during the high school years. I include both the 1988 mathematics IRT theta score and the average 1988 IRT theta score on the other three NELS tests (science, reading, and social studies) as control variables.^{10} I use the average of the other test scores as an additional control to reduce bias from unmeasured preexisting differences among students (see Gamoran, 1996; Gamoran and Mare, 1989; and Jencks, 1985). I expect to find strong, positive relationships between these measures of prior achievement and the measure of achievement on the mathematics test in 1992.
Other control variables included in my empirical analysis capture student and family characteristics, the student's interest and effort in mathematics and in school, and characteristics of the student's peers, school, and community. Descriptive statistics for these control variables are reported in table 1.^{11} Definitions and sources for all the variables are provided in appendix B.
Methodological Approach
Recall that two primary questions are addressed in this study. First, do these high-quality, nationwide data reveal a positive relationship between student achievement and PPEs? Second, is the estimated effect of PPEs on student achievement strengthened by accounting for across-district variations in resource costs and student needs? Addressing the first question is a straightforward matter of examining the statistical significance and substantive magnitude of the coefficient estimates on the PPE variables. Addressing the second question is more involved.
Coefficient Comparisons Across Regressions
To address the second question, I run four main regressions and then compare the coefficient estimates on the PPE variables across these regressions. The four regressions differ only in their measure of PPE and in their controls for special-needs students. I consider two measures of PPE: nominal and cost-adjusted. "Nominal PPE" is calculated by simply dividing the district's expenditures by the number of pupils in the district. "Cost-adjusted PPE" is calculated by dividing the nominal PPE value by the teacher cost index (TCI) and multiplying the result by 100. (The TCI is centered at 100 in the population rather than at one; hence the need to multiply by 100.) Additionally, I consider two alternative specifications of the model: in the first specification I do not control for the proportion of special-needs students; in the second specification, I do. In the second specification I include separate control variables indicating the proportion of students in each of the following special-needs categories: special education, limited English proficiency, and compensatory education. The combination of the two alternative PPE measures and the two alternative specifications produces the four distinct regressions.
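As an illustrative sketch (the function name and dollar figures here are my own, not taken from the paper's data), the cost adjustment just described works as follows:

```python
def cost_adjusted_ppe(nominal_ppe: float, tci: float) -> float:
    """Convert nominal per-pupil expenditures to cost-adjusted dollars.

    The TCI is centered at 100, so dividing by (tci / 100) deflates
    spending in high-cost districts and inflates it in low-cost ones.
    """
    return nominal_ppe / tci * 100

# A district spending $5,000 per pupil where teachers cost 30 percent more
# than average (TCI = 130) buys roughly the same real resources as an
# average-cost district spending about $3,846.
print(round(cost_adjusted_ppe(5000, 130), 2))  # → 3846.15
```

A district with a TCI of exactly 100 is unaffected by the adjustment, which is what makes the two measures comparable in the average district.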
To examine the robustness of the results, I consider three alternative categories of expenditures: 1) total district expenditures; 2) core current expenditures; and 3) expenditures on instructional salaries. The first category encompasses all current operation and capital outlay expenditures. The second includes just three key types of current operation expenditures: instructional expenditures (salaries and benefits for teachers and aides, contracted services, and supplies), pupil support services, and instructional staff support. The third category is the narrowest of all: only instruction-related salaries for teachers and aides are considered. Table 2 reports descriptive statistics for the nominal and the cost-adjusted PPE measures in each of these three expenditure categories.
To meaningfully compare the coefficient estimates across regressions, the nominal and cost-adjusted PPE measures used in the regressions need to be on a common scale. Therefore, I create a new variable, called "comparable cost-adjusted PPE," by multiplying each observation of the "cost-adjusted PPE" by a constant factor. The factor equals the ratio of the mean nominal PPE to the mean cost-adjusted PPE. The factor differs slightly across the three expenditure categories, but in all cases is approximately 0.987. (Descriptive statistics for the "comparable cost-adjusted PPE" measure are also presented in table 2. Note that the means for the nominal and comparable cost-adjusted PPE variables are identical by design.) It is the "nominal PPE" and the "comparable cost-adjusted PPE" variables that are included in the regressions, thus allowing for meaningful across-regression comparisons of the coefficient estimates on the PPE variables within each expenditure category.
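A minimal sketch of this rescaling, using invented PPE series (the values below are hypothetical, chosen only to illustrate the mean-matching property):

```python
def comparable_cost_adjusted(nominal, cost_adjusted):
    """Rescale cost-adjusted PPEs so their mean equals the mean nominal PPE."""
    factor = (sum(nominal) / len(nominal)) / (sum(cost_adjusted) / len(cost_adjusted))
    return [x * factor for x in cost_adjusted]

nominal = [4000.0, 5000.0, 6000.0]       # hypothetical nominal PPEs
adjusted = [4200.0, 4800.0, 6200.0]      # hypothetical cost-adjusted PPEs
comparable = comparable_cost_adjusted(nominal, adjusted)

# By construction the two series now share the same mean, so coefficient
# estimates on them are on a common dollar scale.
print(round(sum(comparable) / 3, 2))  # → 5000.0
```

Only the scale changes; the relative ranking of districts under the cost adjustment is preserved, which is what allows across-regression comparison of the coefficients.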
Within each expenditure category, I expect to find that the magnitude of the coefficient on the PPE measures increases: 1) as the measure changes from "nominal PPE" to "comparable cost-adjusted PPE"; 2) when the regressions control for special-needs students; and 3) as both costs and student needs are taken into account (i.e., as we move from nominal PPE with no controls to cost-adjusted PPE with special-needs controls).
Estimation Results
The results confirm that student achievement on the 1992 NELS mathematics test is positively related to per-pupil expenditures. This result holds for all three expenditure categories, whether the PPE measure is nominal or cost-adjusted, and whether or not control variables for special-needs students are included in the regression. Table 3 summarizes the estimated effects of the various expenditure measures on achievement for both model specifications. The coefficient estimate is consistently positive and statistically different from zero, though it is substantively small.^{12} For example, the coefficient on nominal core PPE in the regression that controls for special-needs students is 0.381. This coefficient means that for an additional $1,000 in per-pupil expenditures, the math score is expected to increase by 0.381 points over the four years of high school. Given that the typical gain in math score is about 8.5 points, the extra $1,000 per pupil raises test scores by only about 4 percent of what is already expected.
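The back-of-envelope comparison in this paragraph can be checked directly (figures taken from the example above):

```python
coef_per_1000 = 0.381   # estimated score gain per extra $1,000 of core PPE
typical_gain = 8.5      # typical four-year gain on the NELS math test

share_of_gain = coef_per_1000 / typical_gain
print(f"{share_of_gain:.1%}")  # → 4.5% (the text rounds to "about 4 percent")
```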
The results lend mild support to the hypothesis that accounting for differential resource costs and student needs reveals a stronger positive relationship between student achievement and school expenditures. In table 3, I use a solid arrow to indicate changes in the magnitude of the coefficient that are in the expected direction; broken arrows indicate changes in the unexpected direction. While the direction of change is as expected in 13 of 15 cases, the magnitude of the change is minuscule compared to the standard errors. Indeed, the confidence intervals for the coefficients within each of the three expenditure categories almost entirely overlap.
Although not of primary interest in this study, it is interesting to examine the effects of the other explanatory variables included in the model. These other effects may shed light on the weak effects of fiscal resources. Table 4 presents all the estimated effects from the regressions that use (comparable) cost-adjusted core expenditures per pupil as the explanatory variable of interest. Performance on the 1992 mathematics test is positively and statistically significantly related to prior achievement in both math and other subjects. Higher math achievement is also positively and significantly related to higher socioeconomic status. Females' performance on the math tests is worse than males', and minorities' performance is worse than nonminorities'. Students from single-parent homes perform worse than those from two-parent households, but not significantly so. All three separate measures of student effort are positive and statistically significant. Students who experience multiple disruptions at school perform worse than those in less disruptive learning environments. The signs on most of the other nonexpenditure-related explanatory variables are generally as expected. The most notable unexpected result is the negative coefficient on the median income for households with children. The effects of the PPE variable were highly sensitive to the inclusion or exclusion of this income variable, even though the correlation coefficient is only about 0.5. The positive coefficient on the percent of LEP students in the regressions that include control variables indicates that limited English proficiency may not be a substantial handicap on math tests. Indeed, international studies consistently rank U.S. school children among the lowest in math performance; perhaps students in schools with higher proportions of LEP students are able to draw more on their prior mathematics knowledge. In future analyses, I will consider performance in the other NELS subjects as well.
I expect, for example, that the coefficient on LEP students will be negative on the reading test.
Conclusions and Directions for Future Research
This paper contributes to the understanding of the effects of school expenditures on student achievement by drawing on three nationwide data sets which are merged to create a rich sample for the empirical analysis. I expected to find (1) that the relationship between student achievement and nominal expenditures would be weak, and (2) that the relationship between achievement and cost-adjusted expenditures would be stronger and positive, when controlling for the population of special-needs students. Instead, I consistently found a small positive relationship that was relatively insensitive to the cost adjustments and special-needs controls. These results provide evidence that the lack of a strong relationship between student achievement and school expenditures cannot simply be attributed to mismeasurement of schools' fiscal resources.
In future research I intend to test the robustness of these results. I will consider alternative model specifications and methods of accounting for differential resource costs and student needs. It may be that I find no support for my hypothesis no matter which model or adjustment factors are used, but given the dearth of work in this area, further exploration is warranted. I will examine the degree to which my results are due to the assumption of linearity in the model's functional form. I will also examine the extent to which these results depend on my choice of cost adjustment: Chambers's TCI. These and other avenues of exploration should shed further light on the potential effectiveness of school finance reform in affecting student equity.
Appendix A. Teacher Cost Index^{13}
The theoretical basis for Chambers's teacher cost index (TCI) is the hedonic wage model. In this model, teachers care about both the quality of their work environment and the monetary rewards associated with particular employment opportunities. School districts care about the characteristics of their workers and the costs of hiring those workers. The hedonic wage model assumes that the simultaneous matching of teachers with school districts reveals the differential rates of pay associated with employee attributes and working conditions offered by employers. Thus, the model allows for decomposition of observed variations in wages into the implicit dollar values attached to each unit of the personal and workplace characteristics.
Chambers represents the reduced form of the hedonic wage model for teacher salaries as:

ln(W_{ij}) = b_{0} + b_{T} T_{ij} + b_{C} C_{ij} + b_{S} S_{j} + b_{D} D_{j} + b_{R} R_{j} + e_{ij},  (A1)
where i indexes individual teachers and j indexes school districts. The dependent variable is the natural logarithm of the annual earnings of the teacher from the school district. The explanatory variables can be divided into two broad categories: cost factors and discretionary factors. The cost factors include district (D) and regional (R) attributes that affect the willingness of teachers to live and work in these localities and that are beyond the control of local decision makers, e.g., competition in the market for teachers, factors underlying cost-of-living differences, amenities of urban and rural life, climatic conditions, racial-ethnic mix of students, and district size and growth. These cost factors are directly used in calculating the TCI. The other category of explanatory variables used in the hedonic wage model includes discretionary factors: those within the control of local school district decision makers in the long run, such as the characteristics of the individual teachers (T), the attributes of the job or classroom to which they are assigned (C), and various school characteristics (S). These discretionary factors are included as control variables in the regression to eliminate their contribution to expenditure differences across districts. (See table 1.1 of Chambers and Fowler, 1995, for details of the specific variables included under each of these categories.)
The data used in the empirical estimation of this model are derived primarily from the Schools and Staffing Survey (SASS). They include responses from 46,750 public school teachers in 8,969 public schools and 4,884 public school districts. These data are supplemented by data from the Common Core of Data, the Census Bureau, the U.S. Geological Survey, and the National Climatic Data Center.
After estimating equation A1, a teacher cost index is calculated for each school district based on the estimated coefficients and values of the cost factors, while controlling for variations in the discretionary factors. The TCI for each school district j is calculated as:

TCI_j = 100 × exp(D_j β̂_D + R_j β̂_R) / mean_k[exp(D_k β̂_D + R_k β̂_R)],

where the denominator averages the numerator over all districts, so that each district's predicted nondiscretionary cost is expressed relative to that of a district facing average cost factors.
The overall mean value for the TCI is 100. The index is greater than 100 for districts facing higher nondiscretionary costs (e.g., the average TCI for districts in New York City is 130) and is less than 100 for districts in low-cost areas (e.g., the average TCI for districts in nonmetropolitan Oklahoma is 80).
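The two steps above (estimate the hedonic wage regression, then index each district by its predicted nondiscretionary cost) can be sketched as follows. The data, the single discretionary factor (teacher experience), and the single cost factor are hypothetical stand-ins for the SASS variables Chambers uses, not the actual estimation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
experience = rng.uniform(0, 30, n)     # discretionary factor (T)
cost_factor = rng.normal(0, 1, n)      # nondiscretionary cost factor (D/R)
log_salary = 10.0 + 0.02 * experience + 0.10 * cost_factor + rng.normal(0, 0.05, n)

# Step 1: estimate the hedonic wage regression of ln(salary) on all factors.
X = np.column_stack([np.ones(n), experience, cost_factor])
beta, *_ = np.linalg.lstsq(X, log_salary, rcond=None)

# Step 2: predicted cost component using only the estimated cost-factor
# coefficient (discretionary factors are held fixed, i.e., controlled for).
cost_component = np.exp(beta[2] * cost_factor)

# Index each "district" relative to the overall mean, so the mean TCI is 100.
tci = 100 * cost_component / cost_component.mean()
print(round(tci.mean(), 1))  # 100.0 by construction
```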
Appendix B. Definitions and Sources of the Variables
Unless otherwise noted, the variables described below are based on variables from the NELS Student Component Data Files. Other sources of data include the NELS School Component Data Files (NELS School), the Common Core of Data (CCD), and the Teacher Cost Index (TCI).
Dependent Variable
Math score, 1992: Score on the mathematics achievement test in the spring of 1992, when most of the students were in twelfth grade. Uses NELS variable F22XMTH, the IRT Theta T-score. (See Ingels et al., 1994, p. H33, for a description of the benefits of using this metric.)
Explanatory Variables of Interest
Six variables measuring per-pupil expenditures are used in these analyses. These are based on three categories of expenditures (total, core current, and instructional salaries) and two alternative calculations of PPEs (nominal and cost-adjusted).
The three categories of expenditures are from the CCD for Fiscal Year 1992 (School Year 1991-92). Expenditures are measured for the entire school district.
 Measure 1 is total district expenditures, field C_TOTEXP.
 Measure 2 is core current expenditures, defined as instructional expenditures, pupil support services, and instructional staff support: C_E13 + C_E17 + C_E07.
 Measure 3 is instructional salaries only, C_Z33.
The two methods of calculating PPEs are described below:
 Nominal PPEs are calculated by simply dividing each of the expenditure measures described above by the total number of students in the school district in School Year 1991-92 (AG_PK12). For example, the formula for per-pupil total expenditures is C_TOTEXP/AG_PK12.
 Cost-adjusted PPEs are calculated by dividing expenditures by Chambers' teacher cost index (TCI), multiplying by 100, and then dividing by the number of students in the district, e.g., (C_TOTEXP/TCI*100)/AG_PK12.
Note that the cost-adjusted measure that is used in the regressions is rescaled to be comparable to the nominal measure within each category. See "Coefficient Comparisons Across Regressions."
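The two PPE calculations can be illustrated with made-up district values standing in for the CCD fields named above:

```python
# Hypothetical district values standing in for the CCD fields.
C_TOTEXP = 12_000_000   # total district expenditures (dollars)
AG_PK12 = 2_000         # total students in the district
TCI = 130               # Chambers' teacher cost index (overall mean = 100)

# Nominal per-pupil expenditures: expenditures divided by enrollment.
nominal_ppe = C_TOTEXP / AG_PK12

# Cost-adjusted PPEs: deflate by the TCI (times 100, so an average-cost
# district with TCI = 100 is unchanged), then divide by enrollment.
cost_adjusted_ppe = (C_TOTEXP / TCI * 100) / AG_PK12

print(nominal_ppe)                   # 6000.0
print(round(cost_adjusted_ppe, 2))   # 4615.38
```

A high-cost district (TCI above 100) thus shows lower cost-adjusted spending than nominal spending, since part of its nominal outlay only offsets higher resource prices.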
Control Variables
Prior Achievement
 Math score, 1988: BY2XMTH, eighth-grade IRT Theta T-score.
 Average of other scores, 1988: Average of 1988 IRT Theta T-scores in reading, science, and social studies: (BY2XHTH + BY2XSTH + BY2XRTH) / 3. All these test scores are on the same metric; hence the simple average score is appropriate.
Student and Family Characteristics
 Minority: Student's race based on F2RACE1, recoded to 1=Black, Hispanic, or Native American; 0=White or Asian.
 Female: Student's sex based on F2SEX, recoded to 1=female; 0=male.
 Single-parent family: Adult composition of the student's household based on FAMCOMP, recoded to 1=adult female only or adult male only; 0=two parents or guardians.
 Socioeconomic status: F2SES1, SES measure based on father's education level, mother's education level, father's occupation, mother's occupation, and family income, and using Duncan's Socioeconomic Index (1961).
Student Interest and Effort
 Interest and effort in math: Composite variable based on the student's responses to questions F2S21A-D: In your current or most recent math class, how often do you:
 Pay attention in class?
 Complete your work on time?
 Do more work than was required of you?
 Participate actively in class?
Composite ranges from 0 (little effort) to 4 (strong effort).
 Time spent on homework: Sum of categorical data on hours spent on homework in school (F2S25F1) and out of school (F2S25F2). Sum ranges from 0, indicating no time, to 16, indicating over 40 hours per week.
 Class attendance: Composite variable (uses F2S9A-F) measuring the student's attendance in classes, based on how often the student reports he or she:
 Was late for school.
 Cut or skipped class.
 Missed a day of classes.
 Was put on in-school suspension.
 Was suspended or put on probation from school.
Composite ranges from 0 to 5, where 5 indicates the student says he or she "never" did any of the above.
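The survey-item composites above can be built by counting qualifying responses. A minimal sketch for the class-attendance composite, assuming (hypothetically) that each item is coded 1 for "never" with higher values for more frequent occurrences; the item names are my own labels, not NELS field names:

```python
# Hypothetical response codes for the five attendance items,
# where 1 = "never" and higher values = more frequent occurrences.
responses = {
    "late_for_school": 1,
    "cut_class": 2,
    "missed_day": 1,
    "in_school_suspension": 1,
    "suspended_or_probation": 1,
}

# One composite point per item the student reports "never" doing, so a
# score of 5 means the student never did any of the listed behaviors.
attendance_composite = sum(1 for v in responses.values() if v == 1)
print(attendance_composite)  # 4
```

The interest-and-effort and disruptive-environment composites follow the same counting pattern over their respective item sets.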
Student's View of the School Environment
 Perceives disruptive environment: Composite of the student's perception of the school's learning environment, based on how strongly the student agrees with statements F2S7E-H:
 I don't feel safe at this school.
 Disruptions by other students get in the way of my learning.
 Fights often occur between different racial or ethnic groups.
 There are many gangs in school.
Composite ranges from 0 to 4, where 4 means the student agreed or strongly agreed with all four statements.
 Experiences disruptive environment: Composite measuring the student's personal experiences that indicate a disruptive learning environment. The composite ranges from 0 to 7 and indicates the number of affirmative responses to statements F2S8A-G:
 I had something stolen from me at school.
 Someone offered to sell me drugs at school.
 Someone offered to sell me drugs on the way to or from school.
 Someone threatened to hurt me at school.
 Someone threatened to hurt me on the way to or from school.
 I got into a physical fight at school.
 I got into a physical fight on the way to or from school.
Peers' Characteristics
(All these variables are based on data from the NELS School File)
 Peers from single-parent homes: F2C23, estimate by school administrator of the percent of twelfth graders (in 1992) from single-parent homes. Coding: 1 indicates less than 10 percent from single-parent homes; 5 indicates more than 75 percent.
 Percent minority peers: Percentage of twelfth graders who are Black, Hispanic, or Native American. F2C22B + F2C22C + F2C22E.
 Peers' absenteeism: Based on F2C21, average daily attendance (ADA) rate for twelfth graders, recoded such that 0 indicates ADA ≥ 95 percent; 1 indicates 90 percent ≤ ADA < 95 percent; 2 indicates 85 percent ≤ ADA < 90 percent; 3 indicates ADA < 85 percent.
 Peers' dropout rate: Based on F2C26, estimate of the percent of students who enter the twelfth grade who drop out before graduation. Coded such that 0 means none drop out; 1 means 0 percent < dropout rate (DR) < 3 percent; 2 means 3 percent ≤ DR < 5 percent; 3 means 5 percent ≤ DR < 7 percent; 4 means 7 percent ≤ DR < 10 percent; 5 means 10 percent ≤ DR < 20 percent; and 6 means DR ≥ 20 percent.
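These threshold recodings amount to cut-point lookups. A minimal sketch for the ADA recode, with the function name my own:

```python
import bisect

def recode_ada(ada_percent):
    """Recode average daily attendance into the 0-3 scale described above:
    0 for ADA >= 95; 1 for 90 <= ADA < 95; 2 for 85 <= ADA < 90; 3 for ADA < 85."""
    cutoffs = [85, 90, 95]  # ascending thresholds
    # bisect_right counts how many cutoffs are <= ada_percent (0..3);
    # subtracting from 3 gives higher codes to higher absenteeism.
    return 3 - bisect.bisect_right(cutoffs, ada_percent)

print(recode_ada(97), recode_ada(92), recode_ada(80))  # 0 1 3
```

The dropout-rate recode is analogous with cut points at 3, 5, 7, 10, and 20 percent.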
Special-Needs Students
(From the CCD Agency Database for School Year 1991-92)
 Percent special education: AG_SPED/AG_PK12*100, number of special education students in the district divided by the total number of students in the district, times 100.
 Percent with limited English proficiency: P7028TP, percentage of children in the district who speak English "not well."
 Percent below poverty level: P7118TP, percentage of children in the district living below the poverty level.
Community Characteristics
(From the CCD Agency Database for School Year 1991-92)
 Percent adults with at least some college: P120403P + P120404P, percentage of adults in the district with some college, or a bachelor's degree or higher degree.
 Median income for household with kids: P3080A01.
Size of Class; Problems in School
(From the NELS School File)
 Twelfth grade enrollment: Enrollment of twelfth graders as of Oct. 1991, based on F2C2.
 Problems in school: Composite of school problems as judged by the school administrator (using NELS variables F2C57A,CP). Composite ranges from 0 to 15, where higher values indicate more of the following problems: tardiness, class cutting, physical conflicts, gang activity, robbery or theft, vandalism, use of alcohol, use of illegal drugs, students under the influence of alcohol or drugs while at school, sale of drugs near school, possession of weapons, physical or verbal abuse of teachers, racial/ethnic conflicts, and teen pregnancy.
School Characteristics
(From the NELS School File)
In the NELS School File, public schools are classified as the following types:
 Comprehensive school (not including magnet school or school of choice);
 Magnet school (including schools with magnet programs, schools within a school); or
 School of choice (open enrollment/nonspecialized curriculum).
For each of the three types of schools, I assign a 1 if the administrator indicated that the school met the characteristics of that type of school and a 0 if not. Although the definition of comprehensive schools specifically excludes magnet schools or schools of choice, the data reveal that some administrators in magnet schools and/or schools of choice marked that they were also comprehensive schools. In my regression analyses I do not include a variable for comprehensive schools; I do include dummy variables for magnet schools and schools of choice.
Zero-one dummy variables are also included for two other characteristics of schools:
 Year-round schools; and
 Vocational-technical schools.
Region of the Country
Zero-one dummy variables indicate in which of four US Census regions the student attended school in 1992, based on G12REGON.
 Midwest: East North Central and West North Central states;
 Northeast: New England and Middle Atlantic states;
 South: South Atlantic, East South Central, and West South Central states; and
 West: Mountain and Pacific states.
Urbanicity
Zero-one dummy variables indicate the urbanicity of the school the student attended in 1992, based on G12URBN3.
 Urban: central city;
 Suburban: area surrounding a central city within a county constituting a Metropolitan Statistical Area (MSA); and
 Rural: outside an MSA.
Footnotes
 The many approaches of educational production function studies are reviewed by Hanushek (1979, 1986), Cohn and Geske (1990), and Monk (1992).
 Hanushek's famous 1986 analysis in the Journal of Economic Literature includes 147 regressions from 33 separate education production function studies. His updated 1989 study in Educational Researcher includes 187 regressions from 38 primary studies. He reports the same conclusion in both synthesis studies.
 Hanushek's analytical method of "vote counting" examines only the sign and level of statistical significance of the estimated effects of the seven different school inputs on student performance. He gives one "vote" to each estimated effect with a positive sign. Whether he considers only those effects that are statistically significant or he ignores statistical significance, Hanushek concludes that the proportion of positive effects is too small to indicate a strong relationship between school inputs and student performance.
Hedges, Laine, and Greenwald's "metaanalysis" considers not only the signs but also the magnitudes of the estimated effects of school inputs on student outcomes. Additionally, their more sophisticated methodology accounts for dependence among regressions estimated within the same study using slightly different empirical specifications and among regressions in different studies that used the same data sources.
 The teacher cost index was developed by Jay Chambers of the American Institutes for Research, and, like the other data sources, was released by the U.S. Department of Education's National Center for Education Statistics (NCES).
 Transportation and energy costs vary widely across districts as well, but account for a much smaller portion of schools' expenditures.
 It is essential that adjustments for differential costs of education be based only on factors that are beyond the control of district decision makers, so that inefficient spending practices are not encouraged.
 NELS oversampled students in private schools; hence the large proportion of observations eliminated when I restrict the sample to students attending public schools.
 In this paper I do not tackle the potential "pretest to posttest selection problem" discussed by Becker and Walstad, 1990.
 Other fields with missing data include: the percentage of students in the district living in single-parent homes; the percentage of students in the district in minority families; historical dropout rates in the high school; and enrollment in the twelfth grade. In future studies, I intend to impute values for missing data in these fields.
 I use the 1992 math score as the dependent variable and include the 1988 math score as a control variable, rather than using the gain in score as the dependent variable, because the former specification is less restrictive. In particular, the gain score specification implicitly assumes that the coefficient on the 1988 math score should be one. Typically, the coefficient estimate on prior achievement in the same subject is in the range of 0.70 to 0.80.
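The restriction described above can be seen in a small simulation (illustrative data only, not the NELS sample): regressing the gain score on spending is equivalent to forcing the coefficient on the prior score to equal one, while the lagged-score specification lets the data estimate it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
prior = rng.normal(50, 10, n)    # prior (1988) math score
spend = rng.normal(5, 1, n)      # per-pupil spending (thousands)
# Simulated model: the prior score carries over at 0.75, not 1.0.
score92 = 10 + 0.75 * prior + 2.0 * spend + rng.normal(0, 3, n)

# Lagged-score specification: regress the 1992 score on prior score and spending.
X = np.column_stack([np.ones(n), prior, spend])
b_lagged, *_ = np.linalg.lstsq(X, score92, rcond=None)

# Gain-score specification: regressing (score92 - prior) on spending is the
# same model with the prior-score coefficient forced to equal one.
b_gain, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), spend]),
                             score92 - prior, rcond=None)

print(round(b_lagged[1], 2))  # recovers a carryover near 0.75, not 1.0
```

When the true carryover is below one, as the 0.70-0.80 range cited above suggests, the less restrictive lagged-score specification recovers it while the gain-score specification cannot.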
 The means and standard deviations are weighted to account for the oversampling of certain populations in the NELS three-wave panel. The weight used in computing these descriptive statistics is the relative weight, F2PNLWT_i / mean(F2PNLWT).
 Because the NELS observations do not come from a random sample, the reported OLS estimates of the standard errors may be understated. Using a hierarchical linear modeling technique to account for the clustering of students within schools, I found that the HLM standard errors were virtually identical to the OLS standard errors. This result is not surprising, since there were only ten students, on average, in each school in 1992, and the magnitude of the bias for the standard errors increases with the average group size. (See Moulton, 1990, p. 335.) Other departures from random sampling (e.g., oversampling minorities) may also require the imposition of higher standards in judging statistical significance. (See Ingels et al., 1994, pp. 42-53.) The root design effect for the full panel, when using the mathematics IRT score as the dependent variable, is 2.273. Multiplying the OLS standard errors by 2.273 will give a conservative standard error to use in judging statistical significance. Even imposing this most stringent standard for the standard errors, all the coefficients of the expenditure variables are statistically greater than zero at the 5 percent level of significance.
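The conservative adjustment described above is a simple rescaling of the OLS standard errors by the root design effect; the coefficient and standard error below are hypothetical:

```python
# Hypothetical OLS results for an expenditure coefficient.
coef = 1.20
ols_se = 0.25

# Conservative standard error: inflate by the root design effect (2.273).
root_deff = 2.273
conservative_se = ols_se * root_deff

# Two-sided 5 percent test: |t| must exceed roughly 1.96 even after inflation.
t_stat = coef / conservative_se
print(round(conservative_se, 4), round(t_stat, 2))
```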
 This summary of the TCI draws heavily from Chambers and Fowler, 1995.
References
Becker, William E. and William B. Walstad. 1990. "Data loss from pretest to posttest as a sample selection problem," Review of Economics and Statistics, 72(1): 184-88.
Chaikind, S.; L. C. Danielson, and M. L. Brauen. 1993. "What do we know about the costs of special education: A selected review," Journal of Special Education, 26(4): 344-370.
Chambers, Jay G. 1981. "Cost and price level adjustments to state aid for education: A theoretical and empirical review," in Perspectives in State School Support Programs. Eds.: K. Forbis Jordan and Nelda H. Cambron-McCabe. Cambridge, MA: Ballinger Publishing, pp. 39-85.
Chambers, Jay G. and William J. Fowler, Jr. 1995. Public School Teacher Cost Differences across the United States, Washington, D.C.: Government Printing Office.
Cohn, Elchanan and Terry G. Geske. 1990. The Economics of Education. 3rd ed. New York: Pergamon Press.
Coleman, James S. et al. 1966. Equality of Educational Opportunity. Washington, D.C.: Government Printing Office.
Gamoran, Adam. 1996. "Student achievement in public magnet, public comprehensive, and private city high schools," Educational Evaluation and Policy Analysis, 18(1): 1-18.
Gamoran, Adam and Robert D. Mare. 1989. "Secondary school tracking and educational inequality: Compensation, reinforcement, or neutrality?" American Journal of Sociology, 94(5): 1146-83.
Hanushek, Eric A. 1979. "Conceptual and empirical issues in the estimation of educational production functions," Journal of Human Resources, 14: 351-388.
Hanushek, Eric A. 1986. "The economics of schooling: Production and efficiency in public schools," Journal of Economic Literature, 24 (Sept.): 1141-1177.
Hanushek, Eric A. 1989. "The impact of differential expenditures on school performance," Educational Researcher, 18 (May): 45-65.
Hedges, Larry V., Richard D. Laine, and Rob Greenwald. 1994. "Does money matter? A meta-analysis of studies of the effects of differential school inputs on student outcomes," Educational Researcher, 23 (April): 5-14.
Ingels, Steven J., Kathryn L. Dowd, John D. Baldridge, James L. Stripe, Virginia H. Bartot, Martin R. Frankel, and Peggy Quinn. 1994. National Education Longitudinal Study of 1988 Second Follow-up: Student Component Data File User's Manual. Washington, D.C.: National Center for Education Statistics.
Jencks, Christopher S. 1985. "How much do high school students learn?," Sociology of Education, 58 (April): 128-135.
Kakalik, James; W. S. Furry, M. A. Thomas, and M. F. Carney. 1981. The Cost of Special Education. Santa Monica, CA: The RAND Corporation.
Lucas, Robert E. B. 1972. "Working conditions, wage-rates and human capital: A hedonic study," Ph.D. dissertation, Massachusetts Institute of Technology.
McMahon, Walter W. 1995. "Intrastate cost adjustments," paper presented at the National Center for Education Statistics Spring Meeting, Washington.
McMahon, Walter W. 1988. "Geographical cost of living differences: An update," MacArthur/Spencer Series, 7.
Monk, David H. 1992. "Education productivity research: An update and assessment of its role in education finance reform," Educational Evaluation and Policy Analysis, 14 (Winter): 307-332.
Moore, Mary T., E. William Strang, Myron Schwartz, and Mark Braddock. 1988. Patterns in Special Education Service Delivery and Cost. Washington: Decision Resources Corporation.
Moulton, Brent R. 1990. "An illustration of a pitfall in estimating the effects of aggregate variables on micro units," Review of Economics and Statistics, 72(2): 334-38.
National Center for Education Statistics. 1995. Common Core of Data (CCD): School years 1987-88 through 1992-93. Washington, D.C.: US Department of Education, Office of Educational Research and Improvement.
Odden, Allan R. and Lawrence O. Picus. 1992. School Finance: A Policy Perspective. New York: McGrawHill.
Parrish, Thomas B., Christine S. Matsumoto, and William J. Fowler, Jr. 1995. Disparities in Public School District Spending: 1989-90. Washington: Government Printing Office.
Sander, William. 1993. "Expenditures and student achievement in Illinois: New evidence," Journal of Public Economics, 52: 403-416.