Effects of Energy Needs and Expenditures on U.S. Public Schools
NCES: 2003018
June 2003

Appendix A: Survey Methodology

Fast Response Survey System

The Fast Response Survey System (FRSS) was established in 1975 by the National Center for Education Statistics (NCES), U.S. Department of Education. FRSS collects data from state education agencies, local education agencies, public and private elementary and secondary schools, public school teachers, and public libraries. It is designed to collect small amounts of issue-oriented data with minimal burden on respondents and within a relatively short timeframe. Surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Sample sizes are relatively small (usually about 1,000 to 1,500 respondents per survey) so that data collection can be completed quickly. Data are weighted to produce national estimates of the sampled education sector. The sample size permits limited breakouts by classification variables. However, as the number of categories within the classification variables increases, the sample size within categories decreases, which results in larger sampling errors for the breakouts by classification variables.

Sample Selection

The sample for the FRSS survey on the effects of energy needs and expenditures on U.S. public schools consisted of 1,000 regular public school districts in the 50 states and the District of Columbia. The sample was selected from the 1999–2000 NCES Common Core of Data (CCD) Local Education Agency Universe (LEA) file. The initial sampling frame consisted of almost 17,000 district records. This was reduced to include only those districts that met all of the following conditions:

  • The district was either a local school district that was not a component of a supervisory union, or a local school district component of a supervisory union that shares a superintendent and administrative services with other local school districts (NCES and the CCD call these "regular" school districts).

  • The district had not closed since the 1998–99 CCD report.

  • The district had at least one student enrolled according to the 1999–2000 CCD report.

  • The district was located within the United States (all districts in outlying territories were excluded).

The district sampling frame was stratified by district size (less than 1,000; 1,000 to 2,499; 2,500 to 9,999; 10,000 to 99,999; and 100,000 or more), metropolitan status (urban, suburban, rural), region (Northeast, Southeast, Central, West), and poverty concentration10 (less than 10 percent, 10 to 19 percent, 20 to 29 percent, and 30 percent or more). After the stratum sample sizes were determined, a sample of 1,000 districts was selected systematically from the sorted file using independent random starts.
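For illustration, the selection step can be sketched as follows. This is a minimal, hypothetical rendering of systematic sampling with an independent random start in each stratum; the stratum names, frame sizes, and allocations below are invented, and the actual FRSS selection program is not published.

    import random

    def systematic_sample(stratum, n):
        # Fractional-interval systematic selection from a sorted stratum,
        # using an independent uniform random start.
        interval = len(stratum) / n            # sampling interval
        start = random.uniform(0, interval)    # independent random start
        return [stratum[int(start + k * interval)] for k in range(n)]

    # Toy frame: two strata of sorted district IDs with hypothetical allocations.
    strata = {"small_urban": list(range(100)), "small_rural": list(range(100, 400))}
    allocations = {"small_urban": 5, "small_rural": 10}

    sample = []
    for name, frame in strata.items():
        sample.extend(systematic_sample(frame, allocations[name]))
    print(len(sample))   # 15 districts in the toy sample

Because the frame is sorted before selection, a systematic draw spreads the sample across the sort variables, which is why the stratification and sorting are described together above.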

Respondents and Response Rates

Questionnaires and cover letters were mailed in early November 2001. The cover letter indicated that the survey was designed to be completed by the district chief financial officer (CFO) or other person in the district who was most knowledgeable about the requested information on energy needs and expenditures. The respondent section on the front of the questionnaire indicated that 55 percent of the questionnaires were completed by CFOs, 28 percent were completed by district superintendents or assistant superintendents, 12 percent were completed by district facilities managers, and 5 percent were completed by others.

Telephone followup was conducted from late November 2001 through February 2002 with districts that did not respond to the initial questionnaire mailing. Of the 1,000 districts selected for the sample, 4 were found to be out of the scope of the survey. This left a total of 996 eligible districts in the sample. Completed questionnaires were received for 851 districts, or 85 percent of the eligible districts (Table A-1). The weighted response rate was 84 percent. Weighted item nonresponse rates for individual questionnaire items ranged from 0 to 2 percent.11 Imputation for item nonresponse was not implemented.

Sampling and Nonsampling Errors

The responses were weighted to produce national estimates (Table A-2). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability.
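For illustration, the weighting described above can be sketched as follows. This is a minimal, hypothetical example; the function name and the weighting-class form of the nonresponse adjustment are assumptions, since the report does not publish the exact adjustment procedure.

    # A district's analysis weight is its base (design) weight, i.e., the
    # inverse of its selection probability, times a nonresponse adjustment
    # computed within a weighting class (hypothetical form).
    def analysis_weight(selection_prob, class_base_weight_total,
                        class_respondent_weight_total):
        base_weight = 1.0 / selection_prob
        nonresponse_adjustment = class_base_weight_total / class_respondent_weight_total
        return base_weight * nonresponse_adjustment

    # A district sampled at 1 in 20, in a class where respondents carry
    # 80 percent of the class's total base weight:
    print(analysis_weight(0.05, 1000.0, 800.0))   # 20 * 1.25 = 25.0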

The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in data collection. These errors can sometimes bias the data. Nonsampling errors may include such problems as misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection process or that data external to the study be used.

To minimize the potential for nonsampling errors, the questionnaire was pretested with respondents similar to those who completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics. Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Data were keyed with 100 percent verification.

Variances

The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of districts that locked in rates with one or more energy providers during fiscal year 2001 is 39.1 percent, and the estimated standard error is 2.3 percent. The 95 percent confidence interval for the statistic extends from 39.1 − (1.96 × 2.3) to 39.1 + (1.96 × 2.3), or from 34.6 to 43.6 percent. Tables of standard errors for each table and figure in the report are provided in appendix B.
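The interval arithmetic in this example can be reproduced directly (a minimal check using only the figures quoted above):

    # 95 percent confidence interval for the example statistic in the text.
    estimate, se, z = 39.1, 2.3, 1.96
    lower, upper = estimate - z * se, estimate + z * se
    print(round(lower, 1), round(upper, 1))   # 34.6 43.6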

Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variances of the statistics. To construct the replications, 50 stratified subsamples of the full sample were created and then dropped individually to define 50 jackknife replicates. A computer program (WesVarPC) was used to calculate the estimates of standard errors.
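A generic delete-one-group jackknife can be sketched as follows. This is a minimal illustration with toy data and an arbitrary group assignment, not WesVarPC's exact procedure; the (n_groups − 1)/n_groups multiplier shown is the standard JK1 form, which the report does not specify.

    import numpy as np

    def jackknife_se(values, weights, n_groups=50):
        # Delete-one-group jackknife standard error for a weighted mean:
        # drop each replicate group in turn, inflate the remaining weights
        # by n_groups/(n_groups - 1), and measure the spread of the
        # replicate estimates around the full-sample estimate. (The weight
        # inflation matters for totals; it cancels in a weighted mean.)
        values = np.asarray(values, dtype=float)
        weights = np.asarray(weights, dtype=float)
        groups = np.arange(len(values)) % n_groups   # toy group assignment
        full_estimate = np.average(values, weights=weights)
        replicates = []
        for g in range(n_groups):
            keep = groups != g
            rep_weights = weights[keep] * n_groups / (n_groups - 1)
            replicates.append(np.average(values[keep], weights=rep_weights))
        replicates = np.array(replicates)
        variance = (n_groups - 1) / n_groups * np.sum((replicates - full_estimate) ** 2)
        return float(np.sqrt(variance))

    rng = np.random.default_rng(1)
    y = rng.normal(50, 10, size=851)   # toy district-level values
    w = rng.uniform(1, 30, size=851)   # toy analysis weights
    print(jackknife_se(y, w))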

The test statistics used in the analysis were calculated using the jackknife variances and thus appropriately reflected the complex nature of the sample design. In particular, an adjusted chi-square test using Satterthwaite's approximation to the design effect was used in the analysis of the two-way tables. Finally, Bonferroni adjustments were made to control for multiple comparisons where appropriate. For example, for an "experiment-wise" comparison involving g pairwise comparisons, each difference was tested at the 0.05/g significance level to control for the fact that g differences were simultaneously tested. The Bonferroni adjustment results in a more conservative critical value being used when judging statistical significance. This means that comparisons that would have been significant with a critical value of 1.96 may not be significant with the more conservative critical value. For example, the critical value for comparisons between any two of the four regions (g = 6 pairwise comparisons) is 2.64, rather than 1.96. This means that there must be a larger difference between the estimates being compared to detect a statistically significant difference.
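The regional critical value cited above can be reproduced as follows (a minimal sketch using SciPy's normal quantile function):

    from scipy.stats import norm

    # Bonferroni-adjusted two-sided critical value for g simultaneous
    # tests at an experiment-wise significance level of 0.05.
    def bonferroni_critical_value(g, alpha=0.05):
        return norm.ppf(1 - alpha / (2 * g))

    # Four regions yield g = (4 * 3) / 2 = 6 pairwise comparisons.
    print(round(bonferroni_critical_value(6), 2))   # 2.64, as cited above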

However, the information presented in Table 2 is complicated by the presence of a small amount of missing data. For example, the mean energy expenditures per pupil for FY 2000 are based on the 841 cases for which both total expenditure and enrollment figures were available for FY 2000. Similarly, the mean energy expenditures for FY 2001 are based on the 847 cases for which both figures were available for FY 2001. The same procedures were used for mean energy budgets per pupil for FY 2001 and FY 2002.

Although the amount of missing data for each year was relatively small, when two years were paired for difference calculations, the resulting number of cases was smaller than for either year separately. If the differences were calculated only on data from districts that provided complete information, the differences in some instances would not be identical to the arithmetic differences calculated from the ratios in the table. This discrepancy, though trivial, might be confusing.

Definitions of Analysis Variables

District enrollment in 1999–2000 — Total number of students enrolled during the 1999–2000 school year, as indicated in the 1999–2000 CCD file:

1 to 2,499
2,500 to 9,999
10,000 or more

Metropolitan status — As defined in the 1999–2000 Common Core of Data (CCD):

Urban – a large or midsized central city of a Metropolitan Statistical Area (MSA)
Suburban – serves a noncentral city of an MSA
Rural – serves a non-MSA

Geographic region — One of four regions used by the Bureau of Economic Analysis of the U.S. Department of Commerce, the National Assessment of Educational Progress, and the National Education Association. Obtained from the 1999–2000 CCD.

Northeast – Connecticut, District of Columbia, Delaware, Massachusetts, Maryland, Maine, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont
Southeast – Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, and West Virginia
Central – Iowa, Illinois, Indiana, Kansas, Michigan, Minnesota, Missouri, North Dakota, Nebraska, Ohio, South Dakota, and Wisconsin
West – Alaska, Arizona, California, Colorado, Hawaii, Idaho, Montana, New Mexico, Nevada, Oklahoma, Oregon, Texas, Utah, Washington, and Wyoming

Poverty concentration — Poverty estimates for school districts were based on Title I data provided to the U.S. Department of Education by the Bureau of the Census and contained in "U.S. Department of Commerce, Bureau of the Census, Current Population Survey (CPS) Small Area Income and Poverty Estimates, Title I Eligibility Database, 1999." The No Child Left Behind Act of 2001 directs the Department of Education to distribute Title I basic and concentration grants directly to school districts on the basis of the most recent estimates of children in poverty. For income year 1999, estimates were derived for districts according to their 2001–02 boundaries based on Census 2000 data and model-based estimates of poverty for all counties. For detailed information on the methodology used to create these estimates, please refer to www.census.gov/hhes/www/saipe.html. This item served as a measurement of the concentration of poverty in the district. Data were missing for 11 cases in the sample.

Less than 10 percent
10 to 19 percent
20 percent or more

Overall fiscal year 2001 budget per pupil — This was based on responses to question 1b (overall budget for fiscal year 2001) and question 8a (district enrollment as of October 1, 2000). Data were missing for three cases in the sample. The questionnaire defined overall budget as including amounts for all programs and activities conducted by the district such as the general operating funds, physical plant and equipment repair, construction, capital outlay, student activities, cafeteria and food service, transportation, federal programs such as Title I, and insurance/liability.

Low – Less than $6,500
Mid-level – $6,500 to $8,999
High – $9,000 or more

Fiscal year 2001 energy budget sufficiency status — This was based on responses to question 2d, part 1 (fiscal year 2001 budgeted energy expenditures) and part 2 (fiscal year 2001 actual energy expenditures). Data were missing for eight cases in the sample.

Sufficient – FY 01 budget for energy was equal to or greater than FY 01 energy expenditures
Insufficient – FY 01 budget for energy was less than FY 01 energy expenditures

Percent of budget allocated for energy — This was based on responses to question 1b (overall budget for FY 01) and 2d, part 1 (FY 01 budgeted energy expenditures). Data were missing for 10 cases in the sample.

1 percent or less – includes districts that allocated less than 1.5 percent for energy
2 percent – includes those that allocated from 1.5 percent to less than 2.5 percent for energy
3 percent or more – includes those that allocated 2.5 percent or more for energy
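The category boundaries above amount to rounding the energy share of the overall budget to the nearest whole percent, as in the following minimal sketch (the function name and arguments are hypothetical):

    # Assign a district to a budget-share category by rounding the
    # energy share of the overall budget to the nearest whole percent.
    def energy_share_category(energy_budget, overall_budget):
        share = energy_budget / overall_budget * 100
        if share < 1.5:
            return "1 percent or less"
        elif share < 2.5:
            return "2 percent"
        return "3 percent or more"

    print(energy_share_category(200_000, 10_000_000))   # 2.0 -> "2 percent"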

It is important to note that many of the district characteristics used for independent analyses are related to each other. For example, district enrollment in 1999–2000 and region are related, with districts in the Southeast typically being larger than districts in other regions. Relationships also exist between other analysis variables, such as metropolitan status and poverty concentration, and per pupil expenditure and percent of budget allocated for energy. Because of the relatively small sample size used in this study, it is difficult to separate the independent effects of these variables. Their existence, however, should be considered in the interpretation of the data presented in this report.

Definitions of Other Created Variables Used in the Analysis

Mean energy expenditure per pupil — The mean energy expenditures per pupil in FY 00 and FY 01 were calculated using the mean energy expenditures in FY 00 and FY 01 and district enrollment during the 2000–2001 school year. Districts were asked to report enrollment for the 2000–2001 school year, but not for the 1999–2000 school year (the timeframe corresponding to FY 00). Therefore, enrollment during the 2000–2001 school year was used to estimate the mean energy expenditure per pupil in FY 00.

Change in mean energy expenditure per pupil — The percentage change in mean energy expenditure per pupil from FY 00 to FY 01 was calculated using the mean energy expenditure per pupil calculated in each year, and is based on cases for which data from both years were available.
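As a minimal illustration of the calculation (the figures are toy values, not report estimates):

    # Percentage change in mean energy expenditure per pupil, computed
    # only for cases with data in both fiscal years. Toy values.
    def percent_change(fy00_per_pupil, fy01_per_pupil):
        return (fy01_per_pupil - fy00_per_pupil) / fy00_per_pupil * 100

    print(percent_change(100.0, 120.0))   # a 20.0 percent increase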

Mean energy budget per pupil — The mean energy budgets per pupil for FY 01 and FY 02 were calculated using the mean energy budgets for FY 01 and FY 02 and district enrollment during the 2000–2001 and 2001–02 school years, respectively.

Change in mean energy budget per pupil — The percentage change in mean energy budget per pupil from FY 01 to FY 02 was calculated using the mean energy budget per pupil calculated for each year, and is based on cases for which data from both years were available.

Small/large surplus — A small surplus was defined as an energy budget surplus below the median surplus ($7 per student) among districts that had sufficient funds allocated for energy in FY 01. A large surplus was defined as an energy budget surplus at or above the median surplus.

Small/large shortfall — A small shortfall was defined as an energy budget shortfall below the median shortfall ($18 per student) among districts that had insufficient funds allocated for energy in FY 01. A large shortfall was defined as an energy budget shortfall at or above the median shortfall.

Survey Sponsorship and Acknowledgments

The survey was performed under contract with Westat. Bernie Greene was the NCES Project Officer; the data were requested by William Fowler (NCES) of the U.S. Department of Education. Westat's Project Director was Elizabeth Farris, and the survey manager was Tim Smith. Debbie Alexander directed the data collection efforts, assisted by Ratna Basavaraju and Anjali Pandit. Rachel Jiang was the programmer, and Rebecca Porch was the analyst. Carol Litman was the editor, and Catherine Marshall and Sylvie Warren were responsible for formatting the report.

This report was reviewed by the following individuals:

Outside NCES

  • Alicia R. Williams, Director of Survey Research, Educational Research Service

Inside NCES

  • Katie Freeman, Early Childhood, International, and Crosscutting Studies Division

  • Lee Hoffman, Elementary/Secondary and Libraries Studies Division

  • Karen O'Conor, Office of the Deputy Commissioner

  • William Hussar, Early Childhood, International, and Crosscutting Studies Division

For more information about the FRSS or the Survey on the Effects of Energy Needs and Expenditures on U.S. Public Schools, contact Bernie Greene, Early Childhood, International, and Crosscutting Studies Division, National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, 1990 K Street, NW, Washington, DC 20006, e-mail: frss@ed.gov


10 Poverty estimates for school districts were based on Title I data provided to the U.S. Department of Education by the Bureau of the Census and contained in "U.S. Department of Commerce, Bureau of the Census, Current Population Survey (CPS) Small Area Income and Poverty Estimates, Title I Eligibility Database, 1999." The No Child Left Behind Act of 2001 directs the Department of Education to distribute Title I basic and concentration grants directly to school districts on the basis of the most recent estimates of children in poverty. For income year 1999, estimates were derived for districts according to their 2001–02 boundaries based on Census 2000 data and model-based estimates of poverty for all counties. For detailed information on the methodology used to create these estimates, please refer to www.census.gov/hhes/www/saipe.html.

11 The base weight was used to determine the weighted item nonresponse rates.
