Developments in School Finance, 1997: Does Money Matter?

Introduction and Overview

William J. Fowler, Jr.


The education finance scholars who assembled for the 1997 National Center for Education Statistics (NCES) Summer Conference brought with them diverse views of the effects of money on elementary and secondary student outcomes. Some were sanguine because they had proposed new or alternative statistical or research designs, departing from traditional "production function" studies, that yielded empirical evidence of the positive effects of resources on student achievement. Others had examined in detail how a few schools with good student outcomes reallocated resources, in an effort to understand how changes in resources are linked to effects on students. Still others presented what they believed to be a consistent body of evidence that disadvantaged students who received more resources demonstrated higher gain scores. At least one scholar, however, found a large urban school district where, as a result of alleged mismanagement, high per-pupil spending was not reaching the classroom and student outcomes seemed below those of comparable school districts. Another academic sought to design a school finance system that would provide an "adequate" level of education, where adequacy is defined in terms of minimum standards of student performance.

Not all of the presentations involved finance and student outcomes. A few researchers were working at what might be considered the "cutting edge" of education finance research, rather than reexamining the much-debated, albeit extremely relevant, question of the nexus of financing and student outcomes. One group of researchers aspired to revolutionize the traditional manner of measuring fiscal equity: their approach adjusts equity measures for differences in geographic costs and in the educational needs of a school district's students. Another set of enterprising scholars endeavored to devise a mechanism for collecting school-level financial data with the next administration of the NCES Schools and Staffing Survey (SASS), which is scheduled to be administered for the fourth time in 1999-2000. Should they succeed, the SASS finance survey would represent the first collection of traditional finance data from a nationally representative sample of private schools in 20 years, and the first school-level collection ever for public schools.

The first three papers presented in this collection of the proceedings reflect "real world" examinations of finances and student outcomes, rather than traditional education finance research designs that employ sophisticated statistical analyses of large-scale data bases that merge thousands of school districts' finances with thousands of students' achievements. While many in the statistical and research community would desire statistical support and replication for the "real world" findings, other researchers point out the value of qualitative and case studies in gaining insights that "black box" statistical analyses cannot address. David Grissmer, Ann Flanagan, and Stephanie Williamson, from RAND, pose the intriguing question of whether money matters for minority and disadvantaged students. They argue that evidence is accumulating that may displace the "Money Doesn't Matter" hypothesis. The new hypothesis asserts that additional money matters for students from less advantaged backgrounds and minority students, but may not matter for students from more highly advantaged backgrounds. They first explain the evidence they see in the NCES National Assessment of Educational Progress (NAEP) concerning resource growth and targeting. They then discuss the large score gains of black students and changes in class size. Finally, they look to new, experimental studies, rather than the quasi-experimental research that has been the cornerstone of traditional education finance findings.

Grissmer, Flanagan, and Williamson argue that the widely accepted evidence that real per-pupil resources in education doubled from the late 1960s to the early 1990s while NAEP scores stagnated is incorrect for a variety of reasons. First, disaggregating the national average scores shows that scores for all racial/ethnic groups rose in reading and mathematics at all age levels. Second, using a deflator other than the Consumer Price Index (CPI) suggests the real increase (that is, after inflation) in educational expenditures was much lower than a doubling of resources. Third, most of the increase went toward disabled students, who are not included in NAEP. Fourth, of the resources not directed toward disabled students, a disproportionate share was directed at minority students and students in poverty.

Since Grissmer, Flanagan, and Williamson assert that family changes can explain only one-third of the NAEP gains of black students, they examine what happened in the educational system. During these years, preschools and kindergartens flourished, desegregation occurred in the South, class sizes decreased, and teachers' age, experience, and education increased. Grissmer, Flanagan, and Williamson reject the expansion of preschools and kindergartens as a source of the kind of sustained gains in the test scores of black students that they observe. Desegregation explains some of the gains, but not all. Instead, they believe that the school changes are the better candidates for explaining substantial parts of the NAEP gains, and of those, the decrease in class size seems the strongest.

Grissmer, Flanagan, and Williamson contend that class size effects have the virtue of experimental evidence supporting the relationship between class size and achievement, just as quasi-experimental research techniques are being called into serious question. The large, multi-district study in Tennessee in which students were randomly assigned to smaller classes found significant positive effects on achievement, and larger effects for black students. Unfortunately, in the Tennessee experiment, students were returned to large classes after third grade, so we do not know what would have happened if students had remained in small classes through the end of their schooling. In addition, class sizes fell in the 1960s as well as the 1970s. If smaller classes had conferred long-term benefits, 17-year-olds who entered school in 1968 should have outscored those who entered in 1960, but this did not occur.

Karen Hawley Miles, an independent education consultant, and Linda Darling-Hammond, from Teachers College, Columbia University, find that little attention has been given to rethinking the use of existing instructional resources, especially teachers, who are schools' most important and most expensive resource. Miles and Darling-Hammond examine five schools that demonstrate that it is possible to support student achievement at extraordinarily high levels by reallocating instructional resources to maximize individual attention for students and learning time for teachers. They assert that schools are unlikely to find ways to create more individual time for students or more shared planning time for teachers without prohibitively raising costs, unless they rethink the existing organization of resources.

Miles and Darling-Hammond explain that they focus primarily on the assignment and use of teaching staff because it is the most sizable and the most underexplored area for potential resource reallocation. They cite studies demonstrating that few of the new teaching staff were deployed to reduce class sizes for regular education students; most went to provide small classes for the growing number of special education students or to provide teacher release time. Only 43 percent of school staff in the United States are regularly engaged in classroom teaching, compared with 60 percent or more in European countries, a staffing pattern that the authors argue leaves teachers abroad more time for collaborative planning and professional development.

Miles and Darling-Hammond describe six practices widely found in schools, and portray the impact of each on the use of teaching resources:

  • Specialized programs conducted as add-ons;

  • Isolated instruction-free time for teachers;

  • Formula-driven student assignments;

  • Fragmented high school schedules and curriculum;

  • Large high schools; and

  • Inflexible teacher work day and job definition.

The schools attempting to change these conditions used several strategies of resource reallocation. Reallocation involved reduction of specialized programs, more flexible student groupings, longer and varied blocks of instructional time to create more personalized environments, expanded common planning time for staff, and creative work schedules and staffing roles.

Only two of the five schools Miles and Darling-Hammond studied actually reallocated and restructured existing programs and staff; the others were brand-new schools that did not suffer from the six standard practices found in schools. Three were elementary schools, and two were secondary schools. Traditional elementary schools served regular education students in age-graded, self-contained classrooms. Three-quarters of a traditional school's teaching staff worked with regular education students, the remainder with Title I and special education students who were pulled out of their regular classes for such special instruction. Class composition and class size stayed the same all day, for all subjects, except for special instruction. The elementary classroom teacher instructed all subjects except specialties like art, music, and gym, which were taught by specialists during the classroom teacher's free period. Teachers had 45 minutes of instruction-free planning time three to five times a week, uncoordinated with other teachers' free time.

The high-performing schools changed this organization, increasing the percentage of teachers who worked with heterogeneous groups of students to 90 percent. Teachers in these elementary schools adapted instructional grouping to student needs, and the schools kept teachers with the same students for 3 years, usually with the same homeroom class; some teachers received as few as nine new students a year. All of the elementary schools created more common planning time, although only one made dramatic changes. They also created master-teacher roles and placed other instructional adults in the classroom. Similar changes occurred in the two high schools studied.

To accomplish these changes, these schools directly challenged policies, regulations, and collective bargaining agreements. Changing school organization to better fit an instructional vision does require schools to confront a host of obstacles. However, the biggest constraint may be a lack of vision. The sample schools described by Miles and Darling-Hammond are intended to assist those who lack a vision of what can be done.

Joyce Ladner, a member of the District of Columbia Control Board, describes the condition of the District of Columbia public schools. The Control Board concluded that the D.C. public schools were in crisis by every important educational and management measure. As seen by the Control Board, the District of Columbia Public Schools (DCPS) were simply failing in their mission to educate the children of the District of Columbia, providing neither a quality education nor a safe environment in which to learn.

The Financial Authority was created by the U.S. Congress in 1995 to repair the District of Columbia's failing financial condition and to improve the management effectiveness of government agencies. In November 1997, the Authority removed the superintendent and stripped the Board of Education of most of its power to control the schools. In their place, the Authority appointed a new Chief Executive Officer and an Emergency Board of Trustees.

Ladner's description of DCPS draws on a report entitled Children in Crisis: A Report on the Failure of D.C.'s Public Schools.

DCPS students score significantly lower on standardized academic achievement tests than their peers in comparable districts around the nation. In 1994, only 22 percent of DCPS fourth-grade students scored at or above the basic NAEP level, a decrease of 6 percent from 1992. Many students (40 percent) are dropping out or leaving DCPS for neighboring districts and private schools. DCPS teachers responding to the NCES Schools and Staffing Survey (SASS) believed that a variety of serious problems affected their schools, more so than teachers elsewhere in the nation. These problems included student unpreparedness, disrespect for teachers, absenteeism, and apathy. Compared with the national average, more DCPS teachers and students report being threatened with violence. The infrastructure of the District's public schools is collapsing: boilers break down, roofs leak, fire doors stick, bathrooms crumble, and poor security permits intruders. The system does not know how many students it has, with estimates varying from 65,000 to 81,000.

Comparisons of the District's school expenditures with other jurisdictions are difficult because of the managerial irregularities noted above, which extend into the fiscal area. However, the District's per-pupil expenditure exceeds the national average and is substantially higher than that of many comparable urban school districts and neighboring districts (one exception is Newark, New Jersey, which spent $2,512 more per student than the $7,655 DCPS spent in 1994-95). DCPS employs 16 teachers for every central administrator, compared with peer districts that employ 42 teachers for every central administrator. In 1996, DCPS allocated more toward its Office of the Superintendent than the Fairfax County, Montgomery County, and Baltimore City public school systems combined. It also spent more than twice as much on the Office of the Board of Education as peer and neighboring district averages.

Ladner concludes with some of the steps taken by the new management team, such as imposing a hiring freeze, closing 11 schools and replacing 50 roofs, increasing security with security guards and new metal detectors, establishing a teacher evaluation program, ending social promotions, terminating the contract of a large school maintenance contractor, and initiating new contracted services for school breakfasts and lunches.

While the above studies described "real world" evidence of how school districts deploy staff resources, three educational researchers conducted more traditional production-function studies using large databases. These studies use econometric techniques to find relationships between educational outcomes and school resources, while statistically controlling for student background characteristics. In the first of these, Corrine Taylor, University of Wisconsin, argues that none of the previous studies adequately accounted for geographic cost variations or for the costs created by the proportion of students with special needs. She hypothesizes that a stronger relationship between student achievement and school expenditures will emerge once these costs are taken into account.

Taylor employs three NCES data sources: the National Education Longitudinal Study of 1988 (NELS); the Common Core of Data (CCD); and a district-level teacher cost index (TCI). NELS contains a nationally representative sample of students who were followed at the 8th, 10th, and 12th grade levels in 1988, 1990, and 1992, who took cognitive tests, and who completed questionnaires about a wide variety of education issues and about their own interest and effort in school. Their parents, teachers, and school administrators also completed questionnaires regarding SES and school conditions. The TCI is a geographic cost adjustment: an index that reflects the cost of employing teachers in particular regions of the country, based upon job and location amenities. The SASS provided the data for a regression analysis of the factors that influence teacher salaries, including those that are under the control of school districts, such as teacher experience and degree status, and those that are not, such as cost of living and quality of life. The estimated effects of these characteristics on teacher salaries yield an index number, with 100 as the national norm, that can be used to estimate teacher costs while holding discretionary factors constant. TCI index numbers are available at the state, county, and school district levels (Chambers and Fowler, 1995).
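
To make the index construction concrete, the following sketch illustrates the general approach with synthetic data; the variable names, coefficients, and two-way split of factors are illustrative assumptions, not the actual Chambers and Fowler (1995) specification.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic district-level data: salaries depend on discretionary factors
    # (experience, degrees) and on factors outside district control (cost of living).
    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "experience":     rng.normal(13, 3, n),       # discretionary
        "pct_masters":    rng.uniform(0.2, 0.7, n),   # discretionary
        "cost_of_living": rng.normal(1.0, 0.1, n),    # outside district control
    })
    df["salary"] = (20000 + 900 * df["experience"] + 8000 * df["pct_masters"]
                    + 15000 * df["cost_of_living"] + rng.normal(0, 1500, n))

    # Hedonic salary regression.
    X = sm.add_constant(df[["experience", "pct_masters", "cost_of_living"]])
    fit = sm.OLS(df["salary"], X).fit()

    # Predict what each district must pay, holding the discretionary factors at
    # their national means so that only uncontrollable cost differences remain.
    X_cost = X.copy()
    X_cost["experience"] = df["experience"].mean()
    X_cost["pct_masters"] = df["pct_masters"].mean()
    predicted = fit.predict(X_cost)

    tci = 100 * predicted / predicted.mean()   # index number, 100 = national norm
    print(np.round(tci[:5], 1))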

The paper does not mention the extensive work that is required to combine the NELS, CCD, and TCI data sets. That integrating the data sets is problematic may be deduced from Taylor's description of her sample of students: she includes only public school students who participated in all three waves of NELS (11,598 students), who never dropped out of school (11,503), and who attended the same high school in both 1990 and 1992 (11,167). These restrictions are necessary if one wishes to consider only those students who were consistently exposed to the resources of particular schools.

Taylor is very comprehensive. She examines three per-pupil expenditure measures: total current expenditures, core expenditures, and instructional salaries. She then cost-adjusts each for both geographic differences and differences in student need. The geographic cost adjustment is accomplished by dividing each nominal per-pupil expenditure by the TCI and multiplying by 100 (that is, dividing by TCI/100). Student educational need is measured by including separate control variables for the proportions of students in special education, with limited English proficiency, and in compensatory education, based on information from the Common Core of Data (CCD) school district data.
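
As a worked example of the geographic adjustment (with invented figures), a district spending $6,000 per pupil where the TCI is 110 is credited with about $5,455 in cost-adjusted dollars:

    # Illustrative geographic cost adjustment; all figures are invented.
    nominal_ppe = 6000.0            # nominal per-pupil expenditure
    tci = 110.0                     # teacher cost index, national norm = 100
    adjusted_ppe = nominal_ppe / (tci / 100.0)
    print(round(adjusted_ppe, 2))   # 5454.55: what $6,000 buys in a district
                                    # where teaching resources cost 10% more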

Using the 1992 mathematics score as the dependent variable, she includes prior achievement, student and family characteristics, student interest and effort, the student's view of the school environment, peer characteristics, special-needs students, community characteristics, and school characteristics as explanatory variables. Although she finds consistently positive and statistically significant effects of per-pupil expenditures on high school students' academic achievement, the effects do not increase appreciably when per-pupil expenditures are adjusted for geographic costs or student need. She concludes that these results demonstrate that the lack of a strong relationship between student achievement and school expenditures cannot simply be attributed to mismeasurement of schools' fiscal resources.
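
In schematic form (the notation below is mine, not Taylor's exact specification), the model resembles a value-added education production function in which the cost-adjusted expenditure enters alongside prior achievement and the control variables:

    A_{i,1992} = \beta_0 + \beta_1 A_{i,1990} + \beta_2' X_i + \beta_3' S_j + \beta_4 \left[ PPE_j / (TCI_j / 100) \right] + \varepsilon_{ij}

where A denotes the mathematics score of student i, X_i collects the student, family, peer, interest, and effort measures, S_j collects the school, community, and student-need characteristics for school or district j, and PPE_j is the nominal per-pupil expenditure deflated by the TCI.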

In yet another production-function study, Harold Wenglinsky, of ETS, uses a different NCES national sample of student achievement, the National Assessment of Educational Progress (NAEP). He applies structural equation modeling and hierarchical linear modeling (HLM) in an attempt to address what he believes to be the shortcomings of standard production-function studies. Reviewing some 30 years of production-function studies in education (approximately 400 studies), he restates the imperfections in most of them: the data were not nationally representative, but instead came from a particular state or school district; the usual measure was total per-pupil expenditure, rather than more discrete measures such as administrative overhead or per-pupil instructional expenditure; the process of schooling, which may mediate the relationship between expenditures and student outcomes, was not considered; measures of student background were inadequate; differences in costs caused by geography were not considered; and, especially in the early studies, measures of outcomes were unsophisticated, such as graduation rates.

Wenglinsky attempts to remedy these problems through the nature of the data base he employs and the nature of the analyses undertaken. He uses an NCES data base, NAEP, that contains a nationally representative sample of student and school information from 4th-, 8th-, and 12th-graders, along with information from their teachers and principals. The subject areas tested vary, but have included at various times mathematics, reading, history, geography, and science. Wenglinsky uses the 1992 mathematics assessment of students attending fourth grade, which contains measures of mathematics achievement, school environment, teacher education levels, teacher-student ratios, and student and school-level SES. He combines this data set with the CCD fiscal data for school districts. Note that this yields the same expenditure for every child in a school district, regardless of the school attended. Wenglinsky then adjusts these expenditures using the state-level TCI, rather than the school-district-level TCI adjustment that Taylor uses. For states with large within-state cost differences, such as New York or Illinois, a state-level geographic cost adjustment will be less precise than a county- or school-district-level measure.

Wenglinsky links these data bases to yield both a district-level and a student-level file. The district-level file was produced by aggregating NAEP data to the district level and linking it to the district-level CCD. Since NAEP is a sample of students, only the 203 school districts that could be matched were included in the analysis. The district-level database was used for all analyses except the multi-level approach. Let us explore for a moment why a multi-level approach may be desirable.

A common dilemma for education researchers is that the data often come from hierarchical data structures. For example, expenditure data are reported at the school district level, teachers are employed and conduct their classes at the school level, and students' achievement is measured at the student level. Since traditional statistical techniques for modeling hierarchy have been inadequate, the usual choice has been to ignore these differences and to combine all characteristics at one level of aggregation. Assume this is done in work similar to the Wenglinsky paper. The resulting data set might be at the student level, with the same expenditure assigned to every child in the same school district and the same school environment measure assigned to every child in the same school (or, aggregated in the opposite direction, with only an average student achievement score at the school district level). In reality, we know that every child in a school district receives a different allocation of resources, so per-pupil expenditure should vary for every student, and the environmental measures should vary for every classroom (if not for every student). Ignoring these limitations has resulted in a variety of statistical problems, which make such studies vulnerable to legitimate criticisms by other education researchers.

Recent methodological advances, such as hierarchical linear modeling (HLM), have produced several approaches to analyzing hierarchical data sets, in which the researcher may retain data at the appropriate level and then run analyses that relate these attributes properly (Bryk and Raudenbush, 1992). As will be discussed later, NCES is exploring the possibility of collecting a student-level resource measure with other student-level data in its sample surveys.
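
A minimal sketch of the multilevel idea follows: random intercepts for districts, a district-level expenditure predictor, and a student-level SES predictor. The data are synthetic and the specification is illustrative, not a reproduction of Wenglinsky's models.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_districts, n_per = 40, 25
    district = np.repeat(np.arange(n_districts), n_per)
    ppe = np.repeat(rng.normal(6000, 800, n_districts), n_per)  # varies by district only
    ses = rng.normal(0, 1, n_districts * n_per)                 # varies by student
    u = np.repeat(rng.normal(0, 5, n_districts), n_per)         # district random effect
    score = 200 + 0.002 * ppe + 8 * ses + u + rng.normal(0, 10, n_districts * n_per)

    df = pd.DataFrame({"score": score, "ses": ses, "ppe": ppe, "district": district})

    # Level 1: students within districts; level 2: random intercepts by district.
    model = sm.MixedLM.from_formula("score ~ ses + ppe", groups=df["district"], data=df)
    print(model.fit().summary())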

Returning to the Wenglinsky paper, he uses two statistical analyses: LISREL, in which the researcher specifies how he believes each variable affects the others, and HLM. Wenglinsky hypothesizes that a student's academic achievement is modified by the school environment, the teacher's highest degree, and the student-teacher ratio, acting as intervening variables between the school district's resource choices and student achievement. He also examines more discrete expenditures than total per-pupil expenditure, utilizing the school district's instructional per-pupil expenditure, central administration per-pupil expenditure, and school administration per-pupil expenditure. The HLM analysis used student achievement as the dependent variable and two resources (teachers' highest degrees and teacher-student ratios) as independent variables. Wenglinsky finds that expenditures on instruction and central office administration affect teacher-student ratios, which, in turn, affect student achievement. The relationships persisted when subjected to multilevel analysis using HLM. Interestingly, unlike Taylor, Wenglinsky finds that the relationships were affected when expenditures were adjusted for geographic cost differences.

Andrew Reschovsky and Jennifer Imazeki of the University of Wisconsin-Madison explore the quandary of developing a school finance formula that guarantees the provision of an adequate education to low-income students. Imazeki and Reschovsky recognize that the cost of education can be defined as the minimum amount of money that a school district must spend in order to achieve a given educational outcome. Comparing two districts with equal spending per pupil, educational performance may be lower in one of the districts if the costs of providing any given level of education are higher in that district, or if that district uses its resources less efficiently.

They stress that costs matter in any discussion of equity in the financing of schools because the achievement of equity in outcomes will require higher spending in districts facing high costs. The courts are moving from a focus on equity in spending to one of educational adequacy, where adequacy is defined in terms of minimum standards of student performance. Imazeki and Reschovsky believe a prerequisite for designing an outcome-equitable school finance system is knowledge of how much it will cost each school district to provide an adequate education for its students. In their paper, they review traditional school aid distribution formulas, as well as other cost measures, and then go on to develop their own cost index for school districts in Wisconsin, which takes into account student educational needs. They then develop a simulation of a school aid formula designed to achieve educational adequacy.

The traditional way that states finance the education of students with special educational needs is by "weighting" them; that is, if a school district receives $1,000 for a regular student, a handicapped student might generate $2,300 in state aid for the school district, or 2.3 times as much. These weights typically have been derived from episodic studies of a few school districts or states where information exists regarding what some school districts spend on the education of such children. Other geographic cost indexes, such as those of McMahon or Chambers, do not consider student educational need in an explicit way. As such, they understate the costs of some school districts, since some districts must hire more teachers (perhaps at a premium) and spend more on non-teacher resources (social workers, drug counselors) in order to achieve any specific educational goal. Indeed, even more sophisticated efforts that include student need (such as Duncombe, Ruggiero, and Yinger, 1996) typically measure the cost of purchasing a given set of inputs used in providing the education of students.
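
To illustrate the weighting arithmetic (the weights and enrollment counts below are invented for the example), a district's aid is typically the foundation amount times its weighted pupil count:

    # Illustrative pupil weighting; weights and counts are invented.
    foundation_per_pupil = 1000.0
    weights = {"regular": 1.0, "disability": 2.3, "at_risk": 1.2}
    enrollment = {"regular": 900, "disability": 60, "at_risk": 140}

    weighted_pupils = sum(weights[k] * enrollment[k] for k in enrollment)
    state_aid = foundation_per_pupil * weighted_pupils
    print(weighted_pupils, state_aid)   # 1206.0 weighted pupils -> $1,206,000 in aid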

Imazeki and Reschovsky specify a regression equation in which student outcomes are a function of school resources, the characteristics of students, and the family and neighborhood. They consider such student need variables as the percentage of students with disabilities (and severe disabilities) and the percentage of students eligible for free and reduced-price lunch. They also use a "value-added" measure of student achievement; that is, the change in test scores over time. Because of the complexities involved, Imazeki and Reschovsky decided not to include a measure of efficiency. As has been found previously, there is a "U-shaped" relationship between spending per pupil and school district size, and, as expected, higher proportions of students from poor families and students with disabilities are associated with higher costs.

Setting the tenth-grade score at the average for all Wisconsin districts as the adequacy standard, Imazeki and Reschovsky construct a cost index by using the results of the regression to predict hypothetical spending for each district. These predictions are then compared to actual spending in an average district with average costs and average levels of educational outcomes. They then go on to develop a state-aid formula to fund the "adequate" level. Surprisingly, while per-pupil aid remains substantially higher in low-property-wealth districts than in high-property-wealth districts, the largest percentage increases in aid go to high-wealth school districts.
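
Stated schematically (the notation is mine, not Imazeki and Reschovsky's), the cost index for district d compares the spending the estimated cost function predicts district d needs in order to reach the adequacy standard with the spending predicted for the average-cost district:

    CostIndex_d = 100 \times \hat{E}_d(\text{adequate outcome}) / \hat{E}_{avg}(\text{adequate outcome})

A district with an index of 120, for example, would need to spend 20 percent more than the average-cost district to reach the same outcome standard, and an adequacy-based aid formula would scale its foundation amount accordingly.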

Let us now turn to two presentations that did not involve finance and its relationship to student outcomes. These researchers were working at what might be considered the "cutting edge" of education finance research by examining the effect on traditional fiscal equity measures of applying geographic cost adjustments and student need adjustments. Lauri Peternick and Becky Smerdon, American Institutes for Research, William Fowler, NCES, and David H. Monk, Cornell University, were struck by the dramatic differences in the coefficient of variation (CV) when geographic cost adjustments were applied to nominal per-pupil expenditures. Using New York State school districts' expenditures per pupil from the CCD, they examined financial equity within the state by conducting two sets of analyses, including and excluding New York City. One set of per-pupil expenditures was nominal; another was adjusted for student needs; a third used the geographic cost adjustment of the TCI; and a fourth used both the student needs and geographic cost adjustments. The student needs adjustment used a weight of 2.3 for students with an "individual educational plan" (IEP) and 1.2 for at-risk students (those in poverty) and limited-English-proficient (LEP) students. Four equity measures were examined: the CV, the Gini coefficient, the McLoone Index, and the slope.
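
A brief sketch of how the three dispersion measures respond to a cost adjustment is shown below, using toy expenditure and index figures rather than the New York data:

    import numpy as np

    ppe = np.array([5200.0, 6100.0, 6800.0, 7400.0, 9500.0])   # toy per-pupil spending
    tci = np.array([95.0, 100.0, 105.0, 110.0, 125.0])         # toy cost indexes
    adjusted = ppe / (tci / 100.0)                              # cost-adjusted spending

    def cv(x):
        return x.std() / x.mean()                # coefficient of variation

    def gini(x):
        x = np.sort(x)
        n = len(x)
        shares = np.cumsum(x) / x.sum()
        return (n + 1 - 2 * shares.sum()) / n    # 0 = perfect equality

    def mcloone(x):
        # actual spending on pupils at or below the median, divided by the
        # spending needed to bring each of them up to the median
        med = np.median(x)
        below = x[x <= med]
        return below.sum() / (med * len(below))

    for label, x in (("nominal", ppe), ("cost-adjusted", adjusted)):
        print(label, round(cv(x), 3), round(gini(x), 3), round(mcloone(x), 3))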

Peternick, Smerdon, Fowler, and Monk find that the CV is greatest when measuring nominal per-pupil expenditures. Employing a geographic cost adjustment reduces the CV, as does the needs adjustment; applying both adjustments almost halves the CV. The Gini coefficient is similarly affected. The McLoone Index, which measures equity for the lower half of the distribution, however, demonstrates the largest inequity under a geographic cost adjustment. When New York City is included in the analysis, the nominal data show increased equity; the opposite occurs when the per-pupil expenditures are adjusted for geographic cost differences or needs.

The slope captures the relationship between median household income and per-pupil expenditures. Cost adjustments reduce the explanatory power of median household income (or housing value), while student needs adjustments increase income's explanatory value. Median household income has a larger effect when New York City is included.

Peternick, Smerdon, Fowler, and Monk conclude that the results presented demonstrate the varying impact different adjustments may have on equity measures. In addition, the inclusion of a single large urban school district may have dramatic effects upon the results of equity measures.

The final article is from researchers seeking to address the demand for school-level resource data. Aside from the interest of parents and taxpayers in the resource allocation and productivity of their own school, questions of accountability and management, as well as equity and adequacy, are feeding the thirst for financial information at the school level. Julia Isaacs and Michael Garet of the American Institutes for Research, and Stephen Broughman, NCES, are attempting to design a method to collect school-level financial data during the next administration of the NCES Schools and Staffing Survey (SASS), which is scheduled to be administered for the fourth time in 1999-2000. SASS provides nationally representative and state-representative data about schools, and any financial data would permit baseline estimates of spending in the nation's schools. Presently, only a few states have accounting systems that extend to the school level, and the existence of more than 84,000 public schools in the country makes it unlikely that any uniform reporting system would be quickly adopted. Most financial reporting still occurs at the school district level.

Only two types of systems are currently in use for reporting school-level fiscal resources. The more commonly used is a simple extension of the existing accounting system to the school level. Coopers & Lybrand developed a software package of this type that a lay person can use to recode the school district budget to the school level. It contains algorithms to allocate expenditures from the school district level to the school for some functions (such as student transportation), using information pertinent to the activity (such as the number of students transported). Every state that has implemented school-level financial reporting has used the traditional accounting system extended to the school level.
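
The allocation logic such a package applies can be sketched as follows; the allocation base and figures are hypothetical, not the Coopers & Lybrand algorithms themselves.

    # Hypothetical top-down allocation of a district-level expenditure to schools.
    transport_expenditure = 900000.0    # district-wide transportation spending
    students_transported = {"School A": 300, "School B": 450, "School C": 150}

    total = sum(students_transported.values())
    school_share = {school: transport_expenditure * n / total
                    for school, n in students_transported.items()}
    print(school_share)   # {'School A': 300000.0, 'School B': 450000.0, 'School C': 150000.0}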

However, Isaacs, Garet, and Broughman describe an alternative approach developed by their AIR colleagues, called the Resource Cost Model (RCM). The RCM is a "bottom-up" approach to school resources, aggregating from the school level the number of staff in particular assignments and the time they spend in particular activities. Prices are then assigned to each person for each assignment. In this way, the "service delivery" system can be described, as can its cost. For example, two schools may give compensatory education students additional instruction: one through a "pull-out" service delivery system, the other by having an aide assist the student in class. As one can imagine, the "pull-out" delivery system, in which a student is sent to another class with another teacher, will be much more expensive than the simple assistance of an aide.
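
A minimal sketch of the bottom-up RCM logic follows; the staffing assignments, prices, and caseloads are invented to show how pricing the service delivery model makes its cost visible.

    # Illustrative Resource Cost Model-style costing; all figures are invented.
    # Two service-delivery models for compensatory reading instruction:
    pull_out = {"staff": "certified teacher (pull-out)", "fte": 1.0, "price": 48000, "students": 15}
    in_class = {"staff": "instructional aide (in-class)", "fte": 1.0, "price": 22000, "students": 25}

    for model in (pull_out, in_class):
        per_pupil = model["fte"] * model["price"] / model["students"]
        print(f'{model["staff"]}: ${per_pupil:,.0f} per pupil served')
    # The pull-out model (about $3,200 per pupil served) costs far more than the
    # in-class aide model (about $880 per pupil) for the same nominal service.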

Isaacs, Garet, and Broughman have developed a proposal for collecting school-level financial data via a questionnaire sent to the school business official (who usually resides at the school district level). The business official would report expenditures at the school level, where available, and expenditures that cannot be assigned to a specific school. These unassigned expenditures would then be prorated to schools, using additional information collected for that purpose.

A group of education finance experts convened by Isaacs, Garet, and Broughman in January 1998 suggested that a synthesis of the two approaches be attempted. Work on refining the public school expenditure instrument is still underway.

One prospective note relates to a comment made earlier. Much of this volume revolves around the connection between per-pupil expenditures and student achievement, and the difficulties researchers encounter because financial data are at a higher level of aggregation than the rich student-level information that NCES obtains through its student-level surveys. The Education Finance Statistical Center (EFSC) within NCES is conducting work to see if a student-level resource measure can be developed in time to accompany a longitudinal study of students that will begin in 1999 for kindergartners, the Early Childhood Longitudinal Study (ECLS). For the most up-to-date information on the work of the EFSC, NCES finance publications, finance graphics, and finance data sets, including those containing geographic cost adjustments, readers are urged to visit the web site http://nces.ed.gov/edfin, where they may also e-mail finance questions if these are not already answered in the "frequently asked questions."


