
Selected Papers in School Finance 1994


Costs of Measuring and Providing Opportunity to Learn: Preliminary Thoughts

Allan Odden
Professor
University of Wisconsin-Madison

About the Author

Allan Odden is Professor of Educational Administration at the University of Wisconsin-Madison. He formerly was professor of Education Policy and Administration at the University of Southern California and Director of Policy Analysis for California Education (PACE), an educational policy studies consortium of USC, Stanford University, and the University of California-Berkeley. At Wisconsin, he is Co-Director of the Finance Center of the Consortium for Policy Research in Education (CPRE), which is funded by the U.S. Department of Education. CPRE is a consortium of the University of Wisconsin-Madison, Rutgers, Harvard, Michigan, and Stanford. He is the principal investigator for the CPRE Teacher Compensation project, funded by the Pew Charitable Trusts.

Odden is an expert on school finance, educational policy, teacher compensation, and educational policy implementation. He has worked with the Education Commission of the States for the past decade, serving as Assistant Executive Director, Director of Policy Analysis and Research, and Director of the Commission's educational finance center. He was president of the American Educational Finance Association in 1979-80 and served as Research Director for special state educational finance projects in Connecticut (1974-75), Missouri (1975-77), South Dakota (1975-77), New York (1979-81), Texas (1988), New Jersey (1991), and Missouri (1992-93). He currently is directing research projects on educational finance and productivity, and a large national project on re-inventing teacher compensation. Odden has published over 150 journal articles, book chapters, and research reports, and 10 books and monographs. His books include Educational Leadership for America's Schools (McGraw Hill 1995), Rethinking School Finance: An Agenda for the 1990s (Jossey-Bass 1992), School Finance: A Policy Perspective (McGraw Hill 1992) co-authored with Lawrence Picus, Education Policy Implementation (State University of New York Press 1991), and School Finance and School Improvement: Linkages for the 1980s (Ballinger 1983), co-edited with L. Dean Webb. Currently, he is writing books on teacher compensation and school-based financing. He has consulted for governors, state legislators, chief state school officers, national and local unions, The National Alliance for Business and the Business Roundtable, the U.S. Congress, the U.S. Secretary of Education, and many local school districts.

He was a mathematics teacher and curriculum developer in New York City's East Harlem for 5 years. He received his Ph.D. and M.A. degrees from Columbia University, a Master of Divinity from Union Theological Seminary, and his B.S. from Brown University.

Costs of Measuring and Providing Opportunity to Learn: Preliminary Thoughts

Allan Odden
Professor
University of Wisconsin-Madison

Opportunity-to-learn standards were thrust onto the education agenda only recently. This concept generally concerns the activities and processes of classroom and school behavior relative to student achievement--that is, what is taught and how it is taught (Porter 1993b)--although, in a broader construct called "school delivery standards," it can encompass a wider range of issues that include school organization and culture (Darling-Hammond 1992). Until these standards were catapulted into the nation's policy agenda, the notion of opportunity to learn was discussed mainly within specialized areas of educational research. Those who studied student assessment results, for example, claimed that a large portion of differences in student learning could be attributed to variations in curriculum content exposure (opportunity to learn) (Porter 1993b, Schmidt 1983, Sebring 1987). But, in the 1980s, as education policy switched from an input orientation to a results orientation (National Governors' Association 1986), the concept of opportunity to learn leaped from the arcane halls of education research into the politically charged arena of public policy.

The results orientation of new education policy claimed not only that results--what students know and are able to do--are the key dimension of equal educational opportunity, but also that students, as well as schools and school districts, should be held accountable for results--that high stakes should be attached to student performance. High stakes could mean promotion from one school level to another (elementary to middle school, for example), admission to postsecondary education, or position and salary in the labor force. As this accountability dimension was added to the new results orientation, the opportunity-to-learn issue arose. If students were held accountable, the argument went, they would need the same opportunity to learn. A level playing field would be needed to make the consequences of the new policy orientation fair. How could all students be held to the same learning standards, it was argued, if some students attended schools in low-spending districts with lower quality education services and less expert teachers and were not taught a thinking-oriented curriculum?

These issues quickly became the subject of debate and analysis within the education policy analysis community (Darling-Hammond 1992, Porter 1993b). The same issues became hotly contested within the education policymaking community after the Clinton Administration submitted its education reform bill, Goals 2000: Educate America Act, to Congress. As a condition of their initial support, several Democrats in the U.S. House of Representatives demanded that the bill include a requirement for states to develop and implement school-delivery standards before results standards could be implemented. To some individuals, such a requirement evoked a vision of more detailed input and process regulations at a time when the education policy system was trying to break away from inputs and focus on outputs, particularly student achievement.

Somewhat lost in these debates was a clear understanding of how opportunity-to-learn standards could be defined and measured, then implemented. This paper addresses these issues, with a focus on the costs of both gathering and implementing opportunity-to-learn standards. The first section provides a conceptual and historical framework within which opportunity-to-learn standards can be defined and identifies several variables that could be selected to represent opportunity to learn. The second section discusses the costs of obtaining measures for these variables and the last section makes some preliminary comments on the costs of implementing opportunity to learn.

The Struggle for Opportunity to Learn

Although recent discussions appear to make opportunity to learn a brand-new issue, Elmore and Fuhrman (1993) demonstrate that states have been trying to provide opportunity to learn on a level playing field for much of the past two centuries. Both concepts include several dimensions of the Nation's efforts to create a common public school system out of the disparate and largely private education system the country had at its birth. Further, it can be argued that both concepts have their roots in various state education clauses and that current deliberations over opportunity to learn are simply the most recent and visible attempts to define and give meaning to those vague phrases requiring "general and uniform," "thorough and efficient," or just plain "free common school" education systems. In a real way, opportunity to learn is the 1990s version of the 1960s phrase "equal education opportunity." While opportunity to learn explicitly includes educational process and student results, the implicit goal of both, as well as of state education clauses, is arguably the same: good education for all children.

One reason the current policy transition to results is somewhat unsettling is that it occurs after nearly a century of focus on inputs. Elmore and Fuhrman's discussion of the historical evolution of the state in providing an equal education program illuminates this point. At the turn of the century, opportunity to learn was embodied in state efforts to create the common school system required by new state education clauses. Thus, states enacted regulations for certifying teachers, accrediting schools, and financing districts according to common, statewide standards.

From about 1920 to 1950, the quest for equal educational opportunity focused on school finance equalization. Primarily through minimum foundation general aid programs, the goal was to provide all school districts with a minimum level of dollars per pupil that would allow them to provide an adequate education program (Odden and Picus 1992).

The next state effort was to consolidate school districts into larger units, both to expand, improve, and make more equitable the education program offered and to make the overall system more efficient. As a result, between 1900 and 1950, the number of school districts dropped from over 130,000 to 84,000; that number dropped to 18,000 by 1970.

In the 1960s and 1970s, the quest for opportunity to learn broadened to include special-needs students. States and the federal government created numerous categorical programs to desegregate students, educate the disabled, serve the economically disadvantaged, and meet the needs of limited-English-proficient (LEP) students. The goal was to provide additional educational services to help ensure that these special-needs students would achieve on a level with the "regular" student (Odden and Picus 1992).

The next step in the 20th-century journey toward opportunity to learn was a renewed school finance reform in the 1970s and 1980s. Emboldened by legal challenges that overturned improved but still inequitable school finance structures, this effort sought to move beyond providing a minimal educational opportunity to creating an overall fiscally neutral system in which all districts would operate as if they had the same local property tax base (Coons, Clune, and Sugarman 1969). In response, states enacted new power-equalizing school finance systems, as well as higher-level foundation programs (Odden and Picus 1992).

While none of these embodiments of equal educational opportunity or opportunity to learn explicitly mentioned student achievement, a reasonable argument was that better achievement was implicitly their objective. Indeed, the original goal of the special-needs programs was to reduce income inequality by raising the educational achievement and thus the earning potential of children from poverty backgrounds (Murphy 1991). While ambitious in its aims, the goal nevertheless was achievement-oriented. School finance reformers often ducked the outcomes issue, but they believed that the quality of the education program and the level of student achievement were determined by spending levels (Coons, Clune, and Sugarman 1969). Further, the consolidation movement was fueled by a desire to ensure that rural children be as well educated as their urban peers.

The explicit transition of equal education opportunity to a results orientation began in the 1970s with the minimum competency movement. During this period, many states enacted laws to ensure that all students learned basic skills and created state testing systems to measure student achievement in reading and mathematics. When some states made passing such a test a requirement for high school graduation, courts ruled (as in Debra P. v. Turlington, 474 F. Supp. 244 [M.D. Fla. 1979]) that the requirements had to be phased in gradually so that students would have an opportunity to learn the material before taking the new high school tests, which held real consequences for them.

As the 1980s unfolded, dissatisfaction with minimum skills grew, and the education excellence movement was launched (Murphy 1990). Although fueled by dissatisfaction with the level of student achievement (National Commission on Excellence in Education 1983), the 1980s state education reforms nevertheless were heavily input-oriented and generally stiffened and strengthened input standards: course content, unit requirements for high school graduation, conditions and knowledge requisites for teacher licensure, and alignment of curriculum, texts, and student tests (Murphy 1990).

However, this movement quickly evolved into an explicit focus on student achievement as the realization dawned on some that results were indeed the primary objective (National Governors' Association 1986) and that student achievement was inadequate (Applebee, Langer, and Mullis 1989; LaPointe, Mead, and Phillips 1989). As the end of the 1980s drew near, the President and the Nation's governors adopted the first national education goals ever to be explicitly results oriented, with Goal 3 requiring proficiency in the complex subjects of mathematics, science, language arts, civics, and geography, and Goal 4 requiring U.S. students to be first in the world in mathematics and science achievement.

Setting student achievement results as the key focus for the education system is an important first step. The challenge, of course, is how to structure policy and program systems to produce results. Moreover, as Elmore and Fuhrman note (1993), even after an 84-year focus on equalizing inputs, fiscal disparities have not been eliminated; indeed, in the early 1990s, more than half of the states were embroiled in intense school finance court suits precisely because large disparities in fiscal capacity and educational expenditures per pupil still existed across school districts (Dively and Hickrod 1993). Further, as the results focus narrowed, equally large--some felt intractable--differences in educational achievement appeared between minority and other students (Mullis et al. 1990), low-income and other students (Mullis et al. 1990), and girls and boys (Mullis et al. 1990), as well as among the 50 states (Mullis et al. 1991; Mullis, Campbell, and Farstrup 1993), including rich and poor states (Odden and Kim 1992).

The dilemma is that simply focusing on and measuring student achievement does not necessarily improve it. The intermediate step of focusing on educational processes, while promising (Porter 1993b) in terms of identifying new variables strongly linked to student achievement, still sounds like more sophisticated input and not a results orientation. Moreover, as Elmore and Fuhrman note (1993), simply abandoning any concern with inputs defies common sense because student achievement equity, particularly the current goal to educate all students to high standards, seems unattainable with the rampant disparities in fiscal resources that currently exist in most states (Hertert, Busch, and Odden 1994; Wykoff 1992).

Opportunity-to-Learn Variables

At this point, decisions about a set of opportunity-to-learn variables that could be measured and collected should take a broad rather than a narrow perspective, somewhat reminiscent of an educational indicators approach (Porter 1991). The notion is to be as parsimonious as possible in deciding what variables to collect but not to limit the scope of variables so narrowly as to prematurely eliminate important factors that might be strongly connected to student learning.

A wide range of categories of variables (Darling-Hammond 1992), as well as of specific variables in each category, could be justified. Two principles guide the selection of both categories and specific variables: parsimony and a research connection to student achievement. The following suggests collecting variables to represent opportunity to learn in three specific categories: fiscal, educational process, and teacher quality.

The first category of opportunity-to-learn variables is fiscal variables, which (still) vary dramatically across states (Barro 1992b), across districts within states (Hertert, Busch, and Odden 1994; Wykoff 1992), and across schools within districts (Hertert 1993). Although traditional research finds weak, if any, connections of dollars to achievement (Hanushek 1989), recent analyses find much stronger links (Ferguson 1991; Monk 1992; Laine, Hedges, and Greenwald 1993).

The second category of opportunity-to-learn variables is educational process. Research documents strong impacts on student learning for such variables as the proportion of classroom time actually spent on instruction; high school course-taking patterns; college entrance requirements; and enacted curriculum, related pedagogy, and instructional resources (Porter 1993a).

The third category of opportunity-to-learn variables describes teacher knowledge, skills, and disposition--another set of factors that determine the extent to which all students can achieve at high levels (Darling-Hammond 1992, 1993).

Attention is given to variables that are connected to student achievement and to variables that are either in the process of being collected or that could be included with modest additional federal or state data collection efforts. Since a long article could be written on the potential of any variable within each of the above three categories, the following is simply a list of some key variables that could be selected. This list should in no way be viewed as exhaustive but as a group of categories and variables that could form a beginning set of opportunity-to-learn variables. Where possible, the variables are identified, then different measures of those variables are described.

Fiscal Variables

Several variables could be identified as fiscal measures of opportunity to learn. Those variables could include revenues and expenditures per pupil. Since the two are strongly linked, expenditures could comprise the variable selected. Within expenditures, there could be several specific variables: total current operating expenditures per pupil, core educational expenditures per pupil (broader than just instructional expenditures but narrower than total current operating expenditures), and instructional expenditures per pupil. If revenues were selected, the largest variable would be total federal, state, and local operating revenues; more restricted figures would include state and local revenues, then state general aid and local revenues, that is, state and local revenues excluding categorical aids. For each variable, three key statistics could be calculated to indicate the degree of inequality: the federal range ratio (used in the Federal Impact Aid program [Odden 1993]), the coefficient of variation, and the McLoone index,1 which provides a measure of dispersion for the bottom 50 percent of all districts (Berne and Stiefel 1984; Odden and Picus 1992). Since data are collected at the district level, these all would be district-level measures; the goal over time would be to collect such measures at the individual school site level.
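
To make these three statistics concrete, the following is a minimal sketch that computes them from invented district data; it follows the standard pupil-weighted definitions (Berne and Stiefel 1984; Odden and Picus 1992) rather than any particular NCES or CPRE program, and all district figures are hypothetical.

```python
# A minimal sketch of the three equity statistics named above, computed
# from invented district data. Percentiles and means are pupil-weighted,
# following the standard definitions (Berne and Stiefel 1984).
import numpy as np

def weighted_percentile(values, weights, pct):
    """Value below which roughly pct percent of pupils fall."""
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    w = np.asarray(weights, float)[order]
    cum = np.cumsum(w) / w.sum()
    return v[np.searchsorted(cum, pct / 100.0)]

def equity_statistics(exp_per_pupil, pupils):
    exp = np.asarray(exp_per_pupil, float)
    wts = np.asarray(pupils, float)

    # Federal range ratio: (95th - 5th percentile) / 5th percentile.
    p5 = weighted_percentile(exp, wts, 5)
    p95 = weighted_percentile(exp, wts, 95)
    frr = (p95 - p5) / p5

    # Coefficient of variation: pupil-weighted std. deviation / mean.
    mean = np.average(exp, weights=wts)
    cv = np.sqrt(np.average((exp - mean) ** 2, weights=wts)) / mean

    # McLoone index: actual spending on pupils below the median divided
    # by what that spending would be if they were all at the median.
    median = weighted_percentile(exp, wts, 50)
    below = exp < median
    mcloone = (exp[below] * wts[below]).sum() / (median * wts[below].sum())
    return frr, cv, mcloone

# Hypothetical five-district state:
frr, cv, mcl = equity_statistics([3800, 4500, 5000, 5600, 7200],
                                 [2000, 5000, 8000, 4000, 1000])
print(f"range ratio {frr:.2f}, CV {cv:.2f}, McLoone {mcl:.2f}")
```

A McLoone index of 1.0 would indicate that every district in the bottom half spends at the median; values below 1.0 measure how far those districts fall short.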

Before statistics are calculated, it would be desirable to adjust the variables for economies of scale, student need, and price differences. The Finance Center of the Consortium for Policy Research in Education (CPRE) is currently developing methodologies for such adjustments. To adjust the fiscal variables for economies of scale, a regression analysis of expenditures per pupil from all districts in the country would be required. To calculate uniform student need adjustments, a universe district fiscal file would need to be augmented with a commonly defined number of low-income (students eligible for Elementary and Secondary Education Act (ESEA) Chapter 1 services or free or reduced-cost lunch), disabled (Public Law (P.L.) 94-142 mandated annual state reports), and LEP children. CPRE is testing the use of a single adjustment across all states, using standard weights of 0.2-0.4 for low-income and 0.2 for disabled children, derived from calculating the average extra costs of providing effective additional services for these students (Kakalik et al. 1981; Moore, Walker, and Holland 1982; Clune 1994). Currently, there are price adjustments for aggregate state data (Barro 1992a; Nelson 1991); CPRE and the National Center for Education Statistics (NCES) are investigating whether a procedure can be developed to use district-level data to create a rough price adjustment at the regional or individual district level.
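
As an illustration of the single cross-state need adjustment described above, the sketch below weights each district's pupil count using the standard weights cited in the text (an extra 0.2-0.4 per low-income pupil and 0.2 per disabled pupil); the 0.2 LEP weight and all district figures are assumptions added for the example.

```python
# A minimal sketch of a need-weighted pupil count. The 0.2-0.4
# low-income and 0.2 disabled weights come from the text; the 0.2 LEP
# weight and the district figures below are illustrative assumptions.
def weighted_pupils(enrollment, low_income, disabled, lep,
                    w_low=0.25, w_dis=0.20, w_lep=0.20):
    """Return a need-adjusted pupil count for one district."""
    return (enrollment
            + w_low * low_income   # Chapter 1 / free-lunch eligible
            + w_dis * disabled     # P.L. 94-142 child counts
            + w_lep * lep)         # limited-English-proficient counts

spending = 25_000_000  # total current operating expenditures (invented)
adj = weighted_pupils(enrollment=5000, low_income=1500,
                      disabled=500, lep=300)
print(spending / adj)  # adjusted $/pupil, vs. spending / 5000 unadjusted
```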

Within traditional school finance equity frameworks, one also would determine the relationship between expenditure variables and variables such as local property wealth per pupil and median family income, as measured by some correlation or elasticity statistic (Berne and Stiefel 1984; Odden and Picus 1992). However, since opportunity to learn entails differences in inputs or processes per se, whether or not they are related to levels of other economic factors not directly associated with schooling, these traditional measures of fiscal neutrality would not be strong candidates as potential measures of opportunity to learn. On the other hand, as is discussed later, if measures of property wealth per pupil or some household income variable were available, these statistics could easily be calculated and thus take their place in a listing of fiscal opportunity-to-learn variables.

Educational Process Variables

A list of educational process variables could be endless, since many curriculum- and instruction-related variables potentially might be linked to student achievement. Adhering to the two principles of parsimony and research supporting a connection between the variable and achievement helps to winnow this category to a manageable list of five variable types (Porter 1993a). The first would be time spent on classroom instruction; several research studies have shown various time variables, such as time on task and academic learning time, to be strongly linked to student achievement (Cohen, M. 1983; Denham and Lieberman 1980; Fisher and Berliner 1985). Second, high school course-taking patterns also have been shown to be strongly linked to secondary student achievement (Gamoran 1992; Gamoran and Berends 1987; Lee and Bryk 1988). Third, college entrance requirements--primarily for public colleges and universities, but also the Carnegie unit per se--comprise another group of variables that research has shown to positively affect secondary student achievement; these requirements help determine what courses students take in high school, which then has an impact on their learning (Lee, Bryk, and Smith 1993; Porter, Smithson, and Osthoff 1992).

Several curriculum and instructional variables have been shown to have an impact on student achievement. A fourth variable type would be measures of the enacted curriculum, that is, the curriculum actually taught in classrooms. Numerous studies have shown that student achievement is strongly determined by what is actually taught in classrooms (McKnight et al. 1987; Schmidt 1983; Sebring 1987). The fifth variable type would be the type of pedagogy and instruction used in classrooms to teach a curriculum. Again, research in several areas has shown that teaching strategies affect achievement, including research on effective teaching (Porter and Brophy 1988; Rosenshine and Stevens 1986), on teaching thinking and problem-solving skills (Palinscar and Brown 1984; Peterson, Fennema, and Carpenter 1991; Resnick 1987), and on teaching problem solving to low-income students (Bryson and Scardamalia 1991; Carpenter et al. 1989; Peterson, Fennema, and Carpenter 1991; Palinscar and Klenk 1991; Resnick et al. 1991; Villasenor 1990).2

Measures of these variables are not so straightforward, although there are several possibilities. The measure for time variables could comprise the actual number of minutes spent on instruction in academic areas, either total time spent or time spent on each curriculum content area. Another measure of time could be time actually used for instruction in core academic subjects as a percent of time available for instruction. Measures of high school course taking could include the total number of academic courses taken in the 4 years of high school, as well as the number of courses taken in each academic content area, such as the number of courses in mathematics, science, language arts, history and social studies, and foreign language.

Measures of public college and university entrance requirements could be obtained the same way, either as total number of academic units required or number of units required in each specific content area.

Measuring curriculum and pedagogical variables poses a somewhat more complex challenge. Porter (1993a) suggests first differentiating among the core curriculum content areas--mathematics, science, language arts, etc. Then, within each content area, he suggests identifying the major subtopics--in mathematics, for example, number and number relations; measurement; probability; statistics; and algebra. Each of these subtopics, moreover, can have various dimensions--in number relations, for example, sets, whole numbers, ratios, percents, and fractions. Measures of the enacted curriculum then would consist of the actual minutes per time period (day, week, or semester) spent on each content area, subtopic, or dimension within each subtopic. The measure could also be presented as a percent of daily or weekly instructional time.
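
One simple way to organize such measures, offered as a sketch rather than as Porter's actual instrument, is a nested taxonomy keyed by content area, subtopic, and dimension, with minutes per week as the leaf values; the minute figures and the assumed 1,500-minute instructional week are illustrative only.

```python
# An illustrative nested record of the enacted curriculum: minutes per
# week by content area, subtopic, and dimension. The taxonomy follows
# Porter's example above; the minute values and the 1,500-minute week
# (5 days x 300 minutes) are assumptions for the example.
enacted = {
    "mathematics": {
        "number and number relations": {
            "whole numbers": 60, "fractions": 45, "percents": 30},
        "measurement": {"length and area": 40},
        "algebra": {"equations": 50},
    },
}
WEEKLY_INSTRUCTION_MINUTES = 1500

def total_minutes(node):
    """Sum minutes over any subtree of the curriculum taxonomy."""
    if isinstance(node, dict):
        return sum(total_minutes(child) for child in node.values())
    return node

share = total_minutes(enacted["mathematics"]) / WEEKLY_INSTRUCTION_MINUTES
print(f"mathematics: {share:.0%} of weekly instructional time")
```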

For instructional strategies, the same procedure could be used, first by identifying various types of instructional strategies: lecture, demonstration, recitation or drill, whole-class discussion, group work, cooperative learning, etc. While some instructional strategies could be generic across content areas, some are more specific to content areas, such as laboratory work in science, multistep problem solving in mathematics, and the writing process in language arts. One issue that would have to be decided is whether information on instruction should be gathered on a general basis or embedded within content areas. Porter, Smithson, and Osthoff (1992) chose the latter route, even connecting instructional strategy to dimensions of content subtopics. At the University of Southern California (USC), under CPRE auspices, Priscilla Wohlstetter is investigating how school-based management (SBM) can be used to restructure curriculum and instruction. Her study includes a cataloging of instructional practices specific to each content domain.

Porter (1993a) also suggests measuring curriculum-embedded instructional resources such as computers, textual materials, and laboratory materials. Indeed, such measures were included in his study (Porter et al. 1993) and are now being collected in the USC-SBM study. These measures could also be cataloged under the rubric of curriculum and instructional practices. In short, several measures of educational processes could represent opportunity to learn, have been collected in various research studies, and have been shown to be positively linked to student achievement.

Teacher Quality Variables

Just a few years ago, teacher variables other than education and experience would have been difficult, if not impossible, to list. And in the immediate future, teacher variables other than these measures might not be readily available. However, since several initiatives related to teacher preparation and certification will likely produce potentially robust variables of teacher quality in the near future, this category of variables should be included on a list of opportunity-to-learn measures. As Darling-Hammond (1993) convincingly argues, student learning--especially achievement in thinking and problem solving--depends on teacher expertise. Put a different way, both the enacted curriculum and the pedagogical practices used to teach it can only be as robust as the professional expertise of the teachers who teach it. Thus, sophisticated measures of what teachers know and what they can do might also become powerful indicators of student opportunity to learn.

In the short term, four developing activities will produce quantitative measures of teacher professional expertise that could be used in an opportunity-to-learn framework. First, in September 1994, the National Board for Professional Teaching Standards (NBPTS) began certifying advanced, expert teachers. Experienced teachers will need to pass a rigorous assessment of their content knowledge, pedagogical expertise, and collegial working skills to be certified. Thus, in the very near future, schools could easily identify the number of NBPTS-certified teachers in the school, or at the secondary level, both at the school and within each department.

Teacher preparation and licensing also are undergoing potentially major changes. One initiative ensures that, over time, all licensed teachers will be trained in a fully accredited program, particularly a program accredited by the National Council for Accreditation of Teacher Education (NCATE), just as in the medical and other professions. The notion is that fully qualified teachers must be trained in universities with accredited programs (Wise and Leibbrand 1993). In the next few years, nearly all programs that seek accreditation will have been reviewed according to the new and upgraded NCATE standards. Thus, another measure of teacher quality could be the number and percent of teachers trained in an NCATE-accredited program, on a total school or content area basis.

Third, many states are developing structures to license teachers not on the basis of their taking an approved program of courses at a college or university, but on the basis of what they actually know and can do. As these programs become operational, states will produce information on beginning teachers, indicating their expertise in content areas and across several instructional practices. Although each state might develop its own mechanism for gathering these data, the Educational Testing Service (ETS) is developing the PRAXIS system to measure the same competencies. The latter could provide a nationally comparable set of measures; but since opportunity to learn is likely to be more of an intrastate issue for the next few years, a set of PRAXIS measures tailored to a specific state, or a state's own set of measures, could provide detailed information on beginning-teacher knowledge and expertise.

If such information is provided on a pass/fail basis, it might not provide data useful for an opportunity-to-learn assessment. If, however, scaled measures of beginning teacher content mastery and instructional expertise are taken (such as those provided by the ETS National Teacher Examination (NTE)), a list of the number or percent of beginning teachers at or above a given score could provide indicators of beginning teacher expertise. It would also be possible to form some combination of beginning teacher and board-certified teacher measures of teacher expertise in schools.

Fourth, even before the above variables become available, it might be possible to gather more sophisticated measures of teacher expertise than just education and experience. Monk, for example, has shown that the number of courses individual teachers have taken in mathematics and science content and methodology, as well as the total number of courses all faculty in a department have taken, can affect student achievement in those subject areas (Monk, forthcoming). Thus, simple counts of the numbers of courses taken in content areas and subject-specific pedagogy could provide more detailed information on teacher expertise than do current measures of unidentified educational units.

Costs of Collecting Opportunity-to-Learn Variables

As might be expected from the above discussion, not all of the suggested opportunity-to-learn variables could be collected immediately. These variables, however, could be available in the near future at modest cost. This section provides approximate cost estimates for most of the variables identified in the previous section.

Costs of Collecting Fiscal Variables

Because of the major changes NCES has made in the collection of school district fiscal data, the additional costs of collecting fiscal variables of opportunity to learn would be minimal. Indeed, this year NCES provided fiscal data on CD-ROM for every school district in the country (the F-33 data system), together with the Common Core of Data (CCD) on pupils, staff, and schools. In the future, NCES plans to collect such data every 5 years as part of the Census of Governments surveys, and perhaps at an additional time corresponding with the decennial census. These data include revenues by source, as well as expenditures by several functional categories. The manner in which expenditure data are provided allows for analysis of the fiscal opportunity-to-learn variables suggested above: current operating expenditures, core educational expenditures, and instructional expenditures.

In 1992, NCES began to provide more detail for both the revenue and expenditure data, including revenues for different categories of state and federal aid (equalization aid versus categorical aid, such as compensatory, disabled, bilingual, and transportation), and more detailed expenditure categories, such as transportation expenditures and expenditures by program. This allows for even more finely tuned revenue or expenditure variables. But the detail already provided allows for straightforward calculation of all fiscal variables and equity statistics discussed in the previous section. Indeed, the CD-ROM data base has built-in programming to calculate some of the equity statistics and allows for easy transformation of data into standard statistical files from which all other equity statistics could be calculated. It would be quite feasible in the future to expand the program to include such equity statistics as the coefficient of variation, the federal range ratio, and the McLoone index. With the current CD-ROM file, however, the appropriate equity statistics could be calculated at low cost, $50,000 to $100,000 per year. Although one consultant already has calculated several equity statistics for each state from the raw data in the current file, with hardly any external support (Toenges 1993), the data need substantial work to provide a file for analysis: eliminating special districts, occasionally adjusting student counts, and other clean-up tasks associated with working with large data sets.

Three advances in fiscal equity statistics would require modest additional resources. First, some additional developmental work might be required for economies of scale, pupil need, and price adjustments. The CPRE Finance Center is confident of developing straightforward adjustments for the first two. As data from the 1990 census mapped by school districts become available, counts of at-risk, limited-language, disabled, and poverty students will be obtainable, allowing for pupil-need adjustments. In the medium term, however, actual Chapter 1-eligible, disabled (P.L. 94-142), and LEP counts would be preferred for making vertical equity adjustments. NCES has already sponsored research to develop district-by-district or regional price adjustments. Walter McMahon is developing cost-of-living adjustments, and Jay Chambers is developing wage adjustments; both will allow corrections to be made for the varying purchasing power of the education dollar. The price adjustments that can be developed are possible only because the census data have been mapped within school district boundaries--one more reason to underscore the need for this NCES-sponsored activity once each decade.

Second, the current F-33 file does not include a measure of property wealth for each district nor a measure of household income--variables required for calculation of any fiscal neutrality statistics. Since property tax administration varies across and within states, gathering either intra- or interstate comparable property valuation data on an annual basis poses something of a challenge. Nevertheless, the measures of fiscal capacity used in the states' equalization formulas could be compiled quite easily through the same procedure with which the F-33 data are now collected. While not fully comparable across school districts in all states, such data would allow a rough calculation of intrastate fiscal neutrality statistics. Further, median family income (and numerous other variables) from the 1990 census soon will be available for each school district to use for fiscal neutrality equity calculations. But since the census is conducted only once every 10 years, another strategy would be needed to provide household income data for each district annually or biennially.

Third, recent research suggests that district fiscal equity does not necessarily produce school-level fiscal equity (Hertert 1993). Thus, for the long term, it would be desirable to have fiscal variables available on a school basis. This more detailed type of fiscal data, however, would require substantially more resources, not only in federal data collection, but also in redesigning state fiscal accounting structures. While the issue would technically entail adding a school code to the current revenue and expenditure accounting system--a code now included in many state/district systems--few states currently collect school-level data, and several technical issues would need to be resolved to collect valid, reliable, and comparable fiscal data at the school level. The long-term goal for both education fiscal data in general and for opportunity-to-learn fiscal data in particular should be to collect data on both a school and district basis.

NCES would be wise to put the issue of collecting school-level fiscal data on a fast-track feasibility study agenda. The need for these data is rapidly rising to the forefront, given the policy attention focused on and actual policy funding of schools (as contrasted with districts). It would be ironic for NCES to have finally produced detailed and accessible district-level data just at the time when the demand and need for school-level data took center stage.

Costs of Collecting Educational Process Variables of Opportunity to Learn

This section discusses the feasibility and costs of collecting information on educational processes on national, state, and district levels. The goal would be to collect the information on a universe district level to match it with the fiscal data; a longer-term goal would be to collect the data at the universe school level. This section suggests using current national data collection efforts, such as those used in the National Assessment of Educational Progress (NAEP) and the Schools and Staffing Survey (SASS), to collect educational process data for national and state comparisons, and using annual state survey collection efforts involving all teachers to collect data on these variables at the district and school levels.

Data on Time, Courses, and College Requirements

The NCES SASS survey, administered every 2 years, asks elementary teachers a series of questions on the percentage of time allocated to instruction in core content areas and asks secondary teachers questions about the content classes they teach. The particular questions on time or courses taught (see the discussion above) could be included in this established instrument with only modest adjustments, changes that could be made when the survey is revised between administrations. The results would allow a portrayal of time allocated to instruction in elementary school core courses and of courses offered in secondary schools on a national basis. The SASS survey currently collects percents of time spent on general elementary and special education and other topics at the elementary level, and on courses taught in mathematics, science, English, and social studies, as well as other subjects, at the secondary level. Since the SASS sample can be arranged by various categories of variables, such as region of the country and district or school size, a portrayal of this dimension of opportunity to learn by these subnational categories is possible. Further, the current SASS sample of 65,000 teachers provides valid data for each of the 50 states as well, which allows for interstate comparisons.

Expanding SASS to a size that would provide valid data for each district in the country, as was done for federal fiscal data collection, might be possible in the distant future but is probably not realistic in the near or intermediate term. Many states today, however, administer an annual survey of teachers, by which they collect data on various characteristics of teachers, such as education units and years of experience, courses taught, and numbers of students in each course. While teachers generally are not paid extra to complete this survey, a common practice is to dismiss them for half a day to do so. Since completion of the survey takes less than half a day, the teacher benefits by having a few extra free hours. The cost to the system is half a day of release time, but this cost tends to be built into teacher contracts. Thus, states could consider expanding teacher questionnaires with a survey on curriculum and instructional practices; the price could remain the half-day release time. The curriculum survey would simply require more teacher time to complete. Compiling the results into a usable data file would require more resources, but the actual cost of having the teacher provide the data might only be the duplication and physical collection costs for the new curriculum surveys.

State teacher questionnaire data could easily be aggregated to indicate the nature of courses offered in a school and the number of students taking such courses. If desired, the survey could be modified to include information on instruction time spent by using the questions on the current SASS questionnaire. The instrument could also include questions, such as those used in the Longitudinal Study of American Youth, on content courses taken. The results could be aggregated to both the school and district levels to show the percent of time spent on instruction in core academic subjects in elementary schools, what content courses are actually provided in secondary schools, and the numbers of content courses taken by teachers in elementary and secondary schools. If student achievement data at the school level were also available from other state sources, which is increasingly the case, analysis could be made of the interactions among the different categories of fiscal, educational process, teacher-related, and student achievement measures, and a rich picture of student achievement and opportunity to learn could be created.
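
As a sketch of the aggregation just described, assuming hypothetical column names and invented records, the following shows how universe teacher survey data might be rolled up to the school level, where they could then be merged with school-level achievement data.

```python
# A sketch (hypothetical column names, invented records) of aggregating
# a universe teacher survey to the school level, yielding simple
# opportunity-to-learn indicators such as minutes of mathematics
# instruction and counts of academic courses taught.
import pandas as pd

survey = pd.DataFrame({
    "district": ["D1", "D1", "D1", "D2"],
    "school":   ["S1", "S1", "S2", "S3"],
    "math_minutes_per_week": [250, 300, 200, 275],
    "academic_courses_taught": [3, 4, 2, 5],
})

school_level = survey.groupby(["district", "school"]).agg(
    mean_math_minutes=("math_minutes_per_week", "mean"),
    courses_taught=("academic_courses_taught", "sum"),
)
print(school_level)  # could be merged with school achievement records
```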

Most public college and university entrance requirements are readily available at the state level and could easily be entered into a 50-state data base.

Data on the Enacted Curriculum and Related Instruction

Just a few years ago, collecting measures of enacted curriculum and related instruction might have been viewed as impossible. Conceptualizing the types of data needed for such an exercise was a major challenge. The assumption was that such measures would have to be collected via direct observation of classrooms or through extensive teacher-generated logs of classroom behavior. But in the past few years, several research efforts have shown that reliable data on the enacted curriculum can be collected through questionnaires (see Guiton and Burstein 1993 for a brief discussion of technical issues that must be addressed in collecting measures of educational processes and practices).

Porter (1993a) and Porter et al. (1993) studied the content of new courses offered in the 1980s in response to new state requirements that students take additional mathematics and science courses. They collected data on the enacted curriculum and related instructional practices, as well as on curriculum-embedded resources, using three methods: direct observations of teachers, extensive teacher logs, and questionnaires. They concluded that, while observations provided the most robust indicators of these variables, the questionnaire data correlated surprisingly well with both the observation and the log data and provided sound indicators of the enacted curriculum. While reliability among the various means of data collection varied by content area, subtopic, and dimension within each content area, the research team nevertheless concluded that measures of the enacted curriculum and related pedagogy could be collected with a sufficient degree of confidence through the use of teacher questionnaires.3

Guiton and Burstein (1993) came to a similar conclusion about the potential use of surveys to collect data on curriculum and instructional practices based on their analysis of data from international assessments of student achievement. In developmental work, they found high degrees of agreement between survey data and more detailed information collected directly from classroom practice (Burstein et al. 1991; Guiton 1992). Their conclusions were somewhat more cautious than Porter's, but they suggested that collecting enacted-curriculum data via teacher questionnaires--especially when the information is divorced from any individual accountability--offered promising potential.

This paper assumes that relatively valid and reliable data on curriculum and instruction can be collected through detailed teacher questionnaires. The issue then becomes one of determining strategy for and costs of such data collection. Several potential strategies are outlined below in order of their cost.

The first strategy would be to revise similar data now being collected in the NAEP program. As part of each NAEP survey, teachers are asked a series of questions about the curriculum content they teach and their related instructional practices. These questions could be replaced with questions developed by Guiton, Burstein, and Porter, and from other more focused work on collecting curriculum and instruction information. For example, the questionnaires developed for use in the USC-SBM study are based on instruments created by Porter et al. and are quite similar in size to the current NAEP questionnaire. The additional cost of such an approach could be minimal, but there would be some additional developmental costs. Depending on the size of the questionnaire, the additional costs for actual collection could be zero if the new questions merely replaced current NAEP questions. This paper assumes a simple replacement of new questions for old questions, with a negligible net cost increase.

The state NAEP assessment sampling procedure could be used to produce valid information for each state. When NAEP administers an assessment to produce comparable data for each state, the sample size is increased (a different amount for each state, depending on the size of its student population). This requires increasing the number of teachers that must complete questionnaires. Again, if current NAEP questions on curriculum and instruction were simply replaced with new questions, the costs of gathering data comparable for each state would be negligible.

Use of NAEP might not be the appropriate strategy for collecting curriculum and instruction data if the more detailed information described in the Porter et al. (1993) study is desired. Since the major purpose of NAEP is to collect student achievement data, adding a lengthy and extensive survey on detailed curriculum and instructional practices could overload the NAEP program.

A more feasible way to collect national data on curriculum and instruction practices would be to expand the NCES-administered SASS. This nationally representative sample of teachers provides information that can be arranged by several factors, including state characteristics. It is, therefore, a data collection mechanism that could be used to collect detailed national data on the curriculum and instruction actually delivered in classrooms (by content area, subtopic within content, dimension within subtopic, related pedagogy, and curriculum-embedded resources).

There could be several strategies for using the SASS teacher questionnaire to collect detailed curriculum and instruction data. One strategy would be to simply expand the current SASS teacher questionnaire. But this questionnaire requires 45 minutes to an hour to complete, and expanding it would nearly double the time, since the more detailed enacted curriculum questions also require 45 minutes to an hour to complete. In addition, while expansion is technically possible, more data collection resources would be required to keep the response rate at the current 85 percent or higher; indeed, a large portion of current cost is follow-through work to get the current teacher-questionnaire response rate up to 85 percent.

It is difficult to predict how much more effort would be required if the questionnaire were to double in length, but it could require considerable resources. Another strategy would be to take a parallel sample of 65,000 teachers from the same districts and schools as the current SASS sample and ask them to complete only the enacted curriculum materials; this would allow setting of the enacted curriculum data in the appropriate teacher, school, and district contexts. Another strategy could be to ask a smaller sample of teachers to provide just the enacted curriculum and instruction data, but it is unclear whether this procedure would save much in the form of collection costs, since the sample would need to provide valid data for each state. Since the current SASS teacher questionnaire requires about $2 million to administer, an upper limit for collecting this type of detailed curriculum and instruction information could be $2 million (for the second strategy), high but perhaps worth the price given the important role such new and rich data could play.

Of course, the most desirable information would be on curriculum and instruction within states, that is, data for each district and school. There would be at least two possible strategies for gathering this type of information: one focused on getting the data on a reliable basis for each district and another for getting the data on a reliable basis for each school. For the former, the information then could be matched with other district-level variables, and an analysis of interactions among fiscal, curriculum, teacher, and achievement variables could be conducted. To provide district-level indicators, a representative sample of teachers could be drawn from each district, and the survey document on curriculum and instruction practices could be administered solely to this sample. Assuming the SASS cost of about $30 per teacher, the cost would vary by state, but the aggregate national cost would be substantially higher than the $2 million required for the current SASS. In other words, this approach would entail a new, separate, and costly data collection effort.
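
A back-of-the-envelope sketch shows why a district-level collection would cost substantially more than SASS: the $30-per-teacher figure comes from the text, but the number of districts (roughly 15,000) and the per-district sample size are assumptions.

```python
# Rough cost of a separate district-level teacher survey. The $30 per
# responding teacher is the SASS figure cited above; the district count
# and per-district sample size are hypothetical assumptions.
COST_PER_TEACHER = 30

def national_cost(n_districts, teachers_sampled_per_district):
    return n_districts * teachers_sampled_per_district * COST_PER_TEACHER

# With roughly 15,000 districts and 30 sampled teachers per district:
print(f"${national_cost(15_000, 30):,}")  # $13,500,000 -- well above $2M
```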

A potentially more powerful and undoubtedly less costly approach would be to combine the elements of a survey of curriculum and instruction practices with the teacher survey many states already administer on a yearly basis. These latter surveys are often used for pension purposes and provide detailed information on teacher load, courses taught, actual class size, and teacher characteristics. For example, a large portion of the information included in the California Basic Education Data System is derived from universe teacher surveys administered annually. Again, many states also collect information from teachers through this type of universe survey. Thus, states could consider expanding these teacher questionnaires with a detailed survey on curriculum and instructional practices, with the price remaining the half-day release time. The curriculum survey would simply require more teacher time (a maximum of 1 hour) to complete.

In short, NAEP teacher questionnaires could be modified to include a more limited set of curriculum and instructional practice data for both national and state comparisons. The cost would be negligible. For more specific and comprehensive curriculum and instructional data, the SASS would need to be enhanced, perhaps even expanded to include valid data for each state. The costs would be greater, perhaps adding $2 million to current SASS costs. To get more focused or more comprehensive curriculum and instruction data for each district and school, expansion of current annual state surveys of teachers would be the most likely route. Costs would be higher for development (data entry) than for collection, since teachers are already relieved of duty for half a day to complete a questionnaire.

Costs of Collecting Teacher Quality Variables

The teacher quality variables identified in the previous section, such as the number of board-certified teachers and scores on teacher licensure examinations, could easily be included on the annual state teacher survey form or the NAEP/SASS questionnaires. Currently, such forms collect information on years of experience and educational units, the current basis for teacher compensation and the generally used indicators of teacher quality. Some states even include scores on the NTE when it is required as part of state licensing procedures. However, as national board certification becomes more standard practice and as individuals take results-oriented assessments for licensure instead of just completing an approved set of university courses, these more robust indicators of teacher expertise could be added to the state, NAEP, and SASS teacher surveys. Further, all surveys could collect information on the number of content-oriented courses teachers take, to obtain additional information on teacher preparation.

Such a strategy would entail simply adding a few relatively straightforward questions to the data collection efforts now conducted. The extra costs would be negligible. Moreover, the universal teacher data from the state surveys could be aggregated to the school level, thus allowing creation of professional expertise descriptors on a school-by-school basis and providing an additional set of potentially powerful opportunity-to-learn indicators.

Costs of Implementing Opportunity to Learn

The final step in discussing the costs associated with opportunity to learn is to provide some estimates of the implementation costs. This section of the paper is necessarily tentative. Since the concept of opportunity to learn has not yet been fully clarified in the literature, trying to cost out what it would take to provide opportunity to learn is a hazardous task. This section does provide some suggestions on how this task might be conceptualized, with the understanding that conclusions and cost figures must be viewed as preliminary at best. With that in mind, this section identifies some potential implementation costs for the four categories of variables: fiscal, educational process, teacher quality, and student performance.

Costs of Implementing Fiscal Opportunity to Learn

Fairly precise cost figures can be calculated for various measures of fiscal opportunity to learn. The dilemma, of course, is that fiscal variables may be the least precise indicators of opportunity to learn. Thus, the specificity of dollar estimates of providing fiscal opportunity to learn should be viewed with caution, as the costs could be much higher or lower if the issue were to accomplish the goal of having all students achieve at high levels.

Several estimated costs can be provided. Toenges (1993) estimated the costs first of raising the expenditure per pupil in each district to equal the expenditure per pupil at the 75th percentile within each state, then of raising it to a $5,000 minimum nationwide (just slightly below the national average expenditure per pupil in 1990-91). The total cost of reaching the former goal was about $24 billion, an increase of about 11.5 percent relative to total revenues from all sources (local, state, and federal) and an increase of 25 percent in state revenues. The cost of raising each district to a minimum expenditure of $5,000 per pupil was $17.4 billion. Toenges also estimated the cost of accomplishing both goals; that is, increasing each district's expenditure to that of the district at the 75th percentile within a state, then adding a further increase to $5,000 where applicable. The cost of this improvement in fiscal opportunity to learn was estimated at $31.1 billion, an increase of about 15 percent over the total revenues of $208 billion in his sample.
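
The leveling-up arithmetic behind estimates of this kind is straightforward: the cost of raising every district to a target expenditure per pupil is the pupil-weighted sum of the shortfalls below that target. The sketch below illustrates the calculation with invented district data; it is not Toenges's actual computation.

```python
# Cost of bringing every district up to a target per-pupil level:
# the pupil-weighted sum of shortfalls. All figures are invented.
import numpy as np

def leveling_up_cost(exp_per_pupil, pupils, target):
    shortfall = np.maximum(target - np.asarray(exp_per_pupil, float), 0.0)
    return float((shortfall * np.asarray(pupils, float)).sum())

exp = [3800, 4500, 5000, 5600, 7200]     # $/pupil by district
pupils = [2000, 5000, 8000, 4000, 1000]  # enrollment by district

# Raising all districts to a $5,000 per-pupil minimum:
print(f"${leveling_up_cost(exp, pupils, 5000):,.0f}")  # $4,900,000
```

Raising every district to the pupil-weighted median in this way is also what drives the McLoone index to 1.0 in the estimates discussed below.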

The Toenges estimates are all somewhat understated because the data set he used excluded about 20 percent of the districts in seven states, and his sample excluded Hawaii and Washington, DC. Further, he did not adjust the figures for differences in the price of education across states, which would affect the cost of raising all districts to a national $5,000 minimum. His cost estimates also would need to be inflated somewhat to state them in 1994 dollars. Nevertheless, he showed that even substantial increases in per-pupil expenditure equity could be accomplished for less than the revenue increase each decade for the past 40 years (Odden 1992).

Using data from the NCES F-33 universe data file for all districts in all states for the 1989-90 fiscal year allows more complete cost estimates for achieving various levels of fiscal opportunity to learn. In a recent CPRE study of potential federal roles in school finance equalization, Hertert, Busch, and Odden (1994) provided various estimates for raising the McLoone index to 1.0 and for raising expenditures per pupil across the 50 states to a national average level.

Table 1 provides the projected 1989-90 costs of raising expenditures (grades K-12) per pupil to various median and average levels (in 1990 dollars). The cost of raising the per-pupil amount from state and local revenue sources to the median for each state would have been $8.7 billion, an overall increase of 4.2 percent of total operating revenues for education. This would have produced a McLoone index of 1.0 for all states and substantially reduced the coefficient of variation. The cost of raising per-pupil revenues to the regional median would have been higher: $13.6 billion, or 6.5 percent of total operating revenues.

The last two rows of the table show the equalization issue from a national perspective, considering all districts in the country without regard to state boundaries; from this vantage, the McLoone index is 0.81 and the coefficient of variation is 0.33. The cost of raising each district to the national median in 1989-90 would have been $17 billion--an 8.1 percent increase--and would have produced a McLoone index of 1.0. The cost would have risen to $23 billion for raising each district to the national average; this also would have produced a McLoone index of 1.0 and would have reduced the coefficient of variation to 0.22, still far above the 0.10 standard for equity (Odden and Picus 1992).4
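
To make these statistics concrete, the following minimal sketch (Python, with invented district data) computes the McLoone index as defined in footnote 1, a pupil-weighted coefficient of variation, and the cost of raising every below-median district to the median. For simplicity it uses an unweighted median; the published estimates were computed from the full F-33 universe file.

```python
# A minimal sketch of the equity statistics. The McLoone index (footnote 1)
# is actual spending on pupils in below-median districts divided by what
# that spending would be if those districts spent at the median. District
# data are invented; the median here is unweighted for simplicity.

import statistics

# (expenditure per pupil, pupils)
districts = [(3500, 1200), (4200, 900), (5000, 1500), (6800, 700), (8100, 400)]

median = statistics.median([e for e, _ in districts])

below = [(e, p) for e, p in districts if e < median]
actual = sum(e * p for e, p in below)
at_median = median * sum(p for _, p in below)
mcloone = actual / at_median

# Pupil-weighted coefficient of variation.
pupils = sum(p for _, p in districts)
mean = sum(e * p for e, p in districts) / pupils
variance = sum(p * (e - mean) ** 2 for e, p in districts) / pupils
cv = variance ** 0.5 / mean

print(f"McLoone index: {mcloone:.2f}")
print(f"Coefficient of variation: {cv:.2f}")
print(f"Cost of raising below-median districts to the median: ${at_median - actual:,.0f}")
```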

Although the overall costs of these equity advances are in the billions of dollars, they are relatively modest when considered as a percent of total operating school revenues. They are well within the range of revenue increases provided to schools on a periodic basis, which for the three decades from 1960 to 1990 averaged just over 2 percent per year in real terms (Odden, forthcoming).

Costs of Implementing Curriculum-Related Opportunity to Learn

The methodology for estimating the costs of meeting the various curriculum-based definitions of opportunity to learn is unclear. Increasing the percent of time spent on instruction in core content areas in elementary schools or increasing the number of academic courses offered in secondary schools could be viewed as add-on costs, or simply as replacement of current time or courses with time or courses focused on core academic subjects.

The argument for the latter approach is threefold. First, research in elementary schools shows that only a small portion of time is spent on instruction in academic content areas (Karweit 1989), but that with a clear focus on academic learning and training in effective teaching and classroom management, substantially more time within the current school day could be used for academic instruction (Fisher and Berliner 1985). Second, a result of the early 1980s education reforms was the replacement of watered-down courses with those offering more academic content (Porter, forthcoming). Third, a major reform in vocational education, which in the past provided a curriculum very different in academic content, is to use vocational courses to teach the higher level academics required in the core curriculum (Raizen 1989); thus, vocational education becomes an alternative route for teaching content at a high standard rather than a separate and less rigorous curriculum.

Table 1. Projected costs of raising public K-12 expenditures per pupil to various levels in the United States, 1989-90 (in 1990 dollars)

------------------------------------------------------------------------
Level of expenditure          Cost (in billions)        Percent increase
------------------------------------------------------------------------
State median                        $ 8.7                      4.2
Regional median                     $13.6                      6.5
National median                     $17.0                      8.1
National average                    $23.0                     11.0
------------------------------------------------------------------------

SOURCE: Hertert, Busch, and Odden (1994), with additional calculations from the same data base.


All three examples suggest that improvements in the enacted curriculum can be made, at no increase in operational costs, by using existing class time, teachers, and courses more effectively. However, since there is not yet wide agreement on what a national core set of high curriculum standards would be, it is not possible to conclude that full opportunity to learn under such curriculum standards could be provided simply by using current time and courses differently. It is possible to conclude that substantial progress could be made toward this goal by using current time and courses more effectively.

There is ample evidence that both restructured preservice training and substantial inservice training will be required to enable all teachers to teach a new, thinking-oriented curriculum; that is, to provide the 1990s curriculum version of opportunity to learn.5 Research on the implementation of the California curriculum frameworks suggests that while teachers are willing to work hard to change their classroom curriculum and instructional practices, more professional development is needed to accomplish a complete transformation of the school curriculum (Cohen and Peterson 1990; Marsh and Odden 1991). Further, Little (1993) argues that the professional development required to accomplish current education reforms--which include completely restructuring curriculum and instruction--should be more substantial, more intense, and longer lasting than what typically has been provided in the past.

Putting a price tag on such robust professional development is not easy. In the corporate sector, however, organizations engaging in successful restructuring--similar in intensity to what is needed in education--often spend 2 to 4 percent of their budget on ongoing training. There are no comparable figures for education. One study of statewide expenditures for professional development in education concluded that slightly less than 1 percent of total expenditures went for all types of training (Little et al. 1987). For purposes of rough calculation, let us assume that this figure can be generalized to the nation. Let us also assume that the corporate-sector figure for the needed level of ongoing training can apply to education. The professional development needed to implement a thinking-oriented curriculum in all schools, and thus provide full curriculum-related opportunity to learn, would then require 2 to 4 percent of school expenditures, less the approximately 1 percent already spent (assuming such funds could be reallocated for these new curriculum and instructional purposes). Using the $300 billion being spent for public elementary and secondary schools in 1993-94 as a base, professional development costs would total $6 to $12 billion, less the $3 billion now spent--a net of between $3 and $9 billion more.
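
The arithmetic behind these figures is simple enough to lay out explicitly. The sketch below (Python) reproduces it under the stated assumptions: a training target of 2 to 4 percent of spending, roughly 1 percent already spent, and a $300 billion base.

```python
# The professional development arithmetic from the text, under its stated
# assumptions: a training target of 2 to 4 percent of spending, roughly
# 1 percent already spent, and a $300 billion base for 1993-94.

base = 300e9                                  # total public K-12 spending
needed_low, needed_high = 0.02 * base, 0.04 * base
already_spent = 0.01 * base

print(f"Needed: ${needed_low / 1e9:.0f} to ${needed_high / 1e9:.0f} billion")
print(f"Already spent: ${already_spent / 1e9:.0f} billion")
print(f"Net new cost: ${(needed_low - already_spent) / 1e9:.0f} "
      f"to ${(needed_high - already_spent) / 1e9:.0f} billion")
```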

In short, providing the opportunity for all students to be exposed to a thinking-oriented core curriculum, such as that being implemented in California, would cost about $3 billion to $9 billion more in ongoing professional development. This amount is considerably less than that required to provide fiscal opportunity to learn.

Even if fiscal opportunity to learn were not provided, it clearly should be possible to fold the above professional development costs into education system budgets over a short period, since they represent an increase of only 1 to 3 percent plus a refocusing of current professional development funds. If fiscal opportunity to learn were provided, moreover, the curriculum-related opportunity-to-learn costs could be subsumed under those of implementing fiscal opportunity to learn, with the simple requirement that the first 3 percent real increase in educational revenues be spent on ongoing professional development.

Costs of Implementing Teacher-Related Opportunity to Learn

Teacher-related opportunity to learn overlaps considerably with curriculum-related opportunity to learn. Apart from training new teachers, the issue centers on the cost of producing teachers who could be certified by the NBPTS and the cost of increasing the number of curriculum-relevant content courses that teachers would be motivated to take.

For the former, there is no obvious methodology for determining cost, since NBPTS certification has yet to begin and there is no empirical data base from which to estimate costs. A reasonable argument would be that preparation for NBPTS certification could entail the same process as preparation for teaching under the new curriculum standards, since both are targeted on similar evolving national curriculum content standards. Under this argument, the costs would be the same as those for ongoing professional development, or between $3 billion and $9 billion above current costs for such activities. This argument would also mean that two definitions of opportunity to learn could be realized simultaneously: preparation of teachers to teach a thinking-oriented curriculum and preparation to obtain NBPTS certification.6

An additional cost could be the price of taking the NTE, now estimated at about $1,500 per teacher. Assuming that 10 percent of the nation's 2.2 million public school teachers would take the test each year, the total cost of taking the examination would be $330 million, which potentially could be covered by the funds set aside for ongoing training.

The cost of taking additional content courses is also very difficult to calculate. Thus, this author will make a suggestion: that districts reimburse teachers for taking courses, rather than reward them with a permanently higher salary as a result. If this procedure meant reimbursing each teacher for one course per year at a cost of $500 per course (a rough average for courses offered at both public and private postsecondary institutions), the cost would be $1.1 billion ($500 x 2.2 million teachers). While this is a high price, it is considerably less than what teachers are now paid on an ongoing basis for taking courses that may or may not be related to what they teach. The net cost of the proposal could be lower still, since many districts today already pay the expenses of additional higher education courses.
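
For completeness, the sketch below (Python) reproduces the two teacher-related calculations: examination fees for 10 percent of the teaching force each year, and reimbursement of one $500 course per teacher per year.

```python
# The two teacher-related calculations from the text: examination fees
# for 10 percent of teachers each year, and one reimbursed $500 course
# per teacher per year.

teachers = 2.2e6                         # public school teachers nationwide
exam_fee = 1500                          # estimated fee per examination
exam_cost = 0.10 * teachers * exam_fee   # 10 percent tested per year

course_cost = 500 * teachers             # one $500 course per teacher

print(f"Annual examination cost: ${exam_cost / 1e6:.0f} million")        # $330 million
print(f"Annual course reimbursement: ${course_cost / 1e9:.1f} billion")  # $1.1 billion
```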

Further, paying for continuing postsecondary education could also be a mechanism for directly including the higher education system in the ongoing professional development and training of teachers. Higher education faculty and the education system together would decide which courses are sufficiently targeted to the professional development needs of teachers and thus would determine which courses qualify for reimbursement; the costs could potentially be subsumed within the overall professional development budget of an extra $3 billion to $9 billion. In this way, higher education could remain a central provider of professional development; there would be procedures to determine which courses count; teachers would be relieved of paying for postsecondary credits; and the costs would be included in the school or district professional development budget.

Finally, this approach of providing substantial ongoing professional development, including paying for approved higher education courses, could be combined with a gradual shift to a knowledge- and skills-based pay system (Odden and Conley 1992; Mohrman, Mohrman, and Odden 1993), both as an incentive for teachers to develop the expertise needed for this decade's education goals and curriculum standards and as a way of rewarding them for doing so.

Concluding Comments

Identifying the costs of measuring and implementing opportunity to learn is difficult, if not impossible, since the definition of opportunity to learn has not yet been solidified. Thus, the points in this paper must be taken as only beginning steps on the trek toward more firmly identifying such costs. The author hopes that the structure of this paper contributes to conceptualizing that task. Perhaps its claims also can be used to urge both federal and state governments to proceed in collecting new types of information, such as data on the enacted curriculum, that could become part of an opportunity-to-learn indicator system. Once only a dream, the collection of these variables now appears achievable through surveys and questionnaires; given their potentially powerful connection to student achievement, every effort should be made by all governments to provide the education system with this information at the school, district, state, and national levels.

It also appears that the opportunity-to-learn variables identified in this paper could be collected through current data collection efforts with only modest increases in resources, although the costs of collecting enacted curriculum information through the federal teacher questionnaire of the SASS could approach an extra $2 million. But given the potentially important uses for this type of information, the cost might well be worth the effort. Interestingly, because of advances already made in fiscal data collection efforts, many of the fiscal opportunity-to-learn variables are already being collected.

While the projected costs of implementing opportunity to learn must be viewed with extreme caution, two conclusions may be drawn. First, the national cost of providing fiscal opportunity to learn would seem to be far less than the amount the nation typically adds to the school system each decade; although distributing new dollars across districts in a way that provided fiscal opportunity to learn would require new political will, the overall cost of doing so would be well within traditional bounds. Second, the costs of providing curriculum- and teacher-related opportunity to learn could be subsumed within the costs of providing fiscal opportunity to learn. This suggests, once again, that the ways in which new education dollars are distributed, allocated, and spent--not just the total amount of money--are the critical issues.

References

Applebee, A. N., Langer, J. A., & Mullis, I. V. S. 1989. Crossroads in American education. Princeton, NJ: Educational Testing Service.

Barro, S. 1992a. Cost-of-education differentials across the states. Washington, DC: SMB Economic Research.

Barro, S. 1992b. What does the education dollar buy? Relationships of staffing, staff characteristics, and staff salaries to state per-pupil spending. Madison, WI: University of Wisconsin, Wisconsin Center for Education
Research, Finance Center of the Consortium for Policy Research in Education.

Berne, R., & Stiefel, L. 1984. The measurement of equity in school finance. Baltimore: Johns Hopkins University Press.

Bryson, M., & Scardamalia, M. 1991. "Teaching writing to students at risk for academic failure." In Teaching Advanced Skills to Educationally Disadvantaged Students. Eds. B. Means & M. S. Knapp. Washington, DC: U.S. Department of Education, Office of Planning, Budget, and Evaluation.

Burstein, L., et al. 1991. Compilation of items measuring mathematics and science learning opportunities and classroom processes from large-scale educational surveys. Paper prepared for the Survey of Mathematics and Science Opportunities Projects. East Lansing, MI: Michigan State University.

Carpenter, T. P., et al. 1989. "Using knowledge of children's mathematics thinking in classroom teaching: An experimental Study." American Educational Research Journal 26(4): 499-531.

Clune, W. 1994. "The shift from equity to adequacy in school finance." Educational Policy 8(4): 376-394.

Cohen, D., & Peterson, P. L., eds. 1990. Educational Evaluation and Policy Analysis 12(3) (entire issue).

Cohen, M. 1983. "Instructional management and social conditions in effective schools." In School Finance and School Improvement: Linkages for the 1980s. Eds. A. R. Odden & L. D. Webb. Cambridge, MA: Ballinger.

Coons, J., Clune, W., & Sugarman, S. 1969. Private wealth and public education. Cambridge, MA: Belknap Press of Harvard University Press.

Darling-Hammond, L. 1992. "Creating standards of practice and delivery for learner-centered schools." Stanford Law and Policy Review 4: 37-52.

Darling-Hammond, L. 1993. "Reframing the school reform agenda." Phi Delta Kappan 74(10): 753-761.

Denham, C., & Lieberman, A., eds. 1980. A time to learn. Washington, DC: National Institute of Education.

Dively, J. A., & Hickrod, G. A. 1993. Status of school finance constitutional litigation. Normal, IL: Illinois State
University, College of Education, Center for the Study of Educational Finance.

Elmore, R., & Fuhrman, S. 1993. Opportunity to learn and the state role in education. Paper prepared for the National Governors' Association.

Ferguson, R. 1991. "Paying for public education: New evidence on how and why money matters." Harvard Journal on Legislation 28(2): 465-98.

Fisher, C. W., & Berliner, D. C., eds. 1985. Perspectives on instructional time. New York: Longman.

Gamoran, A. 1992. "The variable effects of high school tracking." American Sociological Review 57: 812-28.

Gamoran, A., & Berends, M. 1987. "The effects of stratification in secondary schools: Synthesis of survey and
ethnographic research." Review of Educational Research 57: 415-35.

Guiton, G. 1992. Developing indicators of educational equality: Conceptual and methodological dilemmas. Los Angeles: University of California, Los Angeles, Center for Research on Educational Standards and Student Testing.

Guiton, G., & Burstein, L. 1993. Indicators of curriculum and instruction. Paper presented at the annual meeting of the American Educational Research Association, Atlanta.

Hanushek, E. 1989. "The impact of differential expenditures on school performance." Educational Researcher 18(4): 45-65.

Hertert, L. 1993. "School finance equity: An analysis of school level equity in California." Diss. University of Southern California.

Hertert, L., Busch, C., & Odden, A. R. 1994. "School financing inequities among the states: The problem from a national perspective." Journal of Education Finance 19: 231-255.

Kakalik, J., et al. 1981. The cost of special education. Santa Monica, CA: The RAND Corporation.

Karweit, N. 1989. "Time and learning: A review." In School and Classroom Organization. Ed. R. E. Slavin. Hillsdale, NJ: Lawrence Erlbaum Associates.

Laine, R. D., Hedges, L. V., & Greenwald, R. 1993. Does money matter? A meta-analysis of studies of the effects of differential school inputs on student outcomes. Paper presented at the annual meeting of the American Education Finance Association, Albuquerque.

LaPointe, A. E., Mead, N. A., & Phillips, G. W. 1989. A world of differences. Princeton, NJ: Educational Testing Service.

Lee, V. E., & Bryk, A. 1988. "Curriculum tracking as mediating the social distribution of high school achievement." Sociology of Education 62: 78-94.

Lee, V. E., Bryk, A., & Smith, J. B. 1993. "The organization of effective secondary schools." Review of Research in Education 19: 171-268. Washington, DC: American Educational Research Association.

Little, J. W., et al. 1987. Staff development in California. Berkeley, CA: University of California, Policy Analysis for California Education.

Little, J. W. 1993. "Teachers' professional development in a climate of educational reform." Educational Evaluation and Policy Analysis 15(2): 129-52.

Marsh, D. D., & Odden, A. R. 1991. "Implementing the California mathematics and science curriculum frameworks." In Education Policy Implementation. Ed. A. R. Odden. Albany, NY: State University of New York Press.

McKnight, C. C., et al. 1987. The underachieving curriculum: Assessing U.S. school mathematics from an international perspective. Champaign, IL: Stipes.

Mohrman, A., Mohrman, S. A., & Odden, A. R. 1993. The linkages between systemic reform and teacher compensation. Madison, WI: University of Wisconsin, Wisconsin Center for Education Research, Finance Center of the Consortium for Policy Research in Education.

Monk, D. 1992. "Educational productivity research: An update and assessment of its role in education finance reform." Educational Evaluation and Policy Analysis 14(4): 307-32.

Monk, D. 1994. "Subject matter preparation of secondary mathematics and science teachers and student achievement." Economics of Education Review 13(2): 579-600.

Moore, M., Walker, L., & Holland, R. P. 1982. Finetuning special education finance: A guide for policymakers.
Princeton, NJ: Educational Testing Service.

Mullis, I. V. S., et al. 1990. Accelerating academic achievement. Princeton, NJ: Educational Testing Service.

Mullis, I. V. S. 1991. The state of mathematics achievement: NAEP's 1990 assessment and the trial assessment of the states. Princeton, NJ: Educational Testing Service.

Mullis, I. V. S., Campbell, J. R., & Farstrup, A. E. 1993. The NAEP 1992 reading report card for the nation and the states. Washington, DC: U.S. Department of Education.

Murphy, J. 1991. "Title I of ESEA: The politics of implementing federal education reform." In Education Policy Implementation. Ed. A. R. Odden. Albany, NY: State University of New York Press.

Murphy, J., & Hallinger, P. 1989. "Equity as access to learning: Curricular and instructional treatment differences." Journal of Curriculum Studies 19: 341-60.

Murphy, J., ed. 1990. The education reform movement of the 1980s: Perspectives and cases. Berkeley, CA: McCutchan.

National Commission on Excellence in Education. 1983. A nation at risk. Washington, DC: U.S. Department of Education.

National Governors' Association. 1986. Time for results. Washington, DC.

Nelson, H. 1991. "An interstate cost-of-living index." Educational Evaluation and Policy Analysis 13(1): 103-12.

Odden, A. R. 1992. Rethinking school finance: An agenda for the 1990s. San Francisco: Jossey-Bass.

Odden, A. R. 1993. "Broadening impact aid's view of school finance equalization." Journal of Education Finance 18(1): 63-88.

Odden, A. R., & Conley, S. 1992. "Restructuring teacher compensation systems." In Rethinking School Finance: An Agenda for the 1990s. Ed. A. R. Odden. San Francisco: Jossey-Bass.

Odden, A. R., & Conley, S. 1994. "Decentralized management and school finance." Theory Into Practice 33(2): 104-111.

Odden, A. R., & Kim, L. 1992. "Reducing disparities across the states: A new federal role in school finance." In Rethinking School Finance: An Agenda for the 1990s. Ed. A. R. Odden. San Francisco: Jossey-Bass.

Odden, A. R., & Picus, L. 1992. School finance: A policy perspective. New York: McGraw Hill.

Palincsar, A. S., & Brown, A. L. 1984. "Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities." Cognition and Instruction 1(2): 117-75.

Palincsar, A. S., & Klenk, L. J. 1991. "Learning dialogues to promote text comprehension." In Teaching Advanced Skills to Educationally Disadvantaged Students. Eds. B. Means & M. S. Knapp. Washington, DC: U.S. Department of Education, Office of Planning, Budget, and Evaluation.

Peterson, P. L., Fennema, E., & Carpenter, T. P. 1991. "Using children's mathematical knowledge." In Teaching Advanced Skills to Educationally Disadvantaged Students. Eds. B. Means & M.S. Knapp, pp. 103-128. Washington, DC: U.S. Department of Education, Office of Planning, Budget, and Evaluation.

Porter, A. C. 1991. "Creating a system of school process indicators." Educational Evaluation and Policy Analysis 13(1): 13-29.

Porter, A. C. 1993a. Defining and measuring opportunity to learn. Paper prepared for the National Governors' Association. Washington, DC.

Porter, A. C. 1993b. "Opportunity to learn." Educational Researcher 22(5): 24-30.

Porter, A. C. 1994. "National standards and school improvement in the 1990s: Issues and promise." American Journal of Education 102: 421-449.

Porter, A. C., & Brophy, J. 1988. "Good teaching: Insights from the work of the institute for research on teaching." Educational Leadership 45(8): 75-84.

Porter, A. C., Smithson, J., & Osthoff, E. 1992. Standard setting as a strategy for upgrading high school mathematics and science. Madison, WI: University of Wisconsin, Wisconsin Center for Education Research, Consortium for Policy Research in Education.

Porter, A. C., et al. 1993. Reform up close: A classroom analysis. Madison, WI: University of Wisconsin, Wisconsin Center for Education Research, Consortium for Policy Research in Education.

Raizen, S. 1989. Reforming education for work: A cognitive science perspective. Berkeley, CA: National Center for Research in Vocational Education.

Resnick, L. 1987. Education and learning to think. Washington, DC: National Academy of Education.

Resnick, L., et al. 1991. "Thinking in arithmetic class." In Teaching Advanced Skills to Educationally Disadvantaged Students. Eds. B. Means & M. S. Knapp. Washington, DC: U.S. Department of Education, Office of Planning, Budget, and Evaluation.

Rosenshine, B., & Stevens, R. 1986. "Teaching functions." In Handbook of Research on Teaching. Ed. M. Wittrock. New York: Macmillan.

Schmidt, W. 1983. "High school course taking: Its relationship to achievement." Journal of Curriculum Studies 15(3): 311-32.

Sebring, P. A. 1987. "Consequences of differential amounts of high school coursework: Will the new graduation requirements help?" Educational Evaluation and Policy Analysis 9(3): 257-73.

Toenges, L. 1993. Interstate revenue disparities and equalization costs: Exploratory estimates based on the NCES' Common Core of Data (unpublished copy).

Villasenor, A. 1990. Teaching the first grade mathematics curriculum from a problem-solving perspective. Diss. University of Wisconsin-Milwaukee.

Wise, A., & Leibbrand, J. 1993. "Accreditation and the creation of a profession of teaching." Phi Delta Kappan 75(2): 133-36.

Wyckoff, J. 1992. "The intrastate equality of public primary and secondary education resources in the U.S., 1980-1987." Economics of Education Review 11(1): 19-30.

FOOTNOTES:

  1. The McLoone Index is the sum of the expenditures per pupil for each district spending below the median divided by the sum as if each district were spending at the median. Usually each district's expenditure per pupil is also multiplied by the number of pupils, so the McLoone Index indicates the ratio of actual spending on students in districts below the median to spending if all districts were at (or raised to) the median.
  2. Darling-Hammond (1992) and others, such as Lee, Bryk, and Smith (1993), as well as Porter (1991), would argue for collecting variables related to school organization, structure, and culture. Clearly, there is research that shows these factors can and do affect achievement. For the purpose of this paper, however, the emphasis is placed on collecting and measuring curriculum and instruction variables, because these variables are potentially the most powerful factors affecting student achievement and because, while the power of these variables has been documented, there has not yet been a concerted effort, either at the national or the state level, to collect them. Thus, by highlighting these variables to the exclusion of other process variables, this paper seeks to underscore the importance of actually allocating resources to create a data base that includes valid and reliable measures of the curriculum and instruction actually provided in classrooms. Further, opportunity to learn is conceptualized in this paper as a narrower issue than school delivery standards; by this definition, opportunity to learn does not include measures of school organization such as structure and culture (Porter 1993b).
  3. Porter (1993a) asserts that questionnaires work reliably well under the condition that the information be used only for analytic, not accountability, purposes. No study has yet validated the use of surveys in a context in which teachers would be held accountable for the enacted curriculum as indicated by the survey results.
  4. Technically, the goal should be to increase funding in a way that brings the equity statistics of the federal range ratio, coefficient of variation, or McLoone index within some normative target. Bringing all districts up to the median would produce a McLoone index of 1.0, indicating perfect equity. Bringing expenditures up to either the median or some average expenditure level would likely also reduce the coefficient of variation; the goal would be to reduce that statistic to below 0.10, an equity standard some have suggested for the coefficient of variation (Odden and Picus 1992).
  5. Because of limited space and data, this paper does not discuss the costs of changes in teacher preparation, nor does it discuss possible changes in instructional materials costs.
  6. This argument assumes that through intensive professional development, all teachers in the education system could be trained to teach according to the new curriculum standards, regardless of preservice training. While this assumption might be somewhat optimistic, it is reasonable until empirical evidence emerges to show that it is not.


