Condition of America's Public School Facilities: 1999
NCES: 2000032
June 2000

Appendix A—Survey Methodology

Fast Response Survey System

The Fast Response Survey System (FRSS) was established in 1975 by the National Center for Education Statistics (NCES), U.S. Department of Education. FRSS is designed to collect small amounts of issue-oriented data with minimal burden on respondents and within a relatively short timeframe. Surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Sample sizes are relatively small (usually about 1,000 to 1,500 respondents per survey) so that data collection can be completed quickly. Data are weighted to produce national estimates of the sampled education sector. The sample size permits limited breakouts by classification variables. However, as the number of categories within the classification variables increases, the sample size within categories decreases, which results in larger sampling errors for the breakouts by classification variables. FRSS collects data from state education agencies, local education agencies, public and private elementary and secondary schools, public school teachers, and public libraries.

Sample Selection

The sample for the FRSS survey on the condition of public school facilities consisted of 1,004 regular public elementary, middle, and high schools in the 50 states and the District of Columbia. The sample was selected from the 1996-97 NCES Common Core of Data (CCD) School Universe File. The sampling frame consisted of 80,238 regular public schools. Excluded from the sampling frame were special education, vocational, and alternative/other schools, schools in the territories, schools with a highest grade lower than grade 1, and ungraded schools. The frame contained 49,266 regular elementary schools, 14,808 regular middle schools, and 16,164 regular high/combined schools. A school was defined as an elementary school if its lowest grade was less than or equal to grade 3 and its highest grade was less than or equal to grade 8. A middle school was defined as having a lowest grade greater than or equal to grade 4 and a highest grade less than or equal to grade 9. A school was considered a high school if its lowest grade was greater than or equal to grade 9 and its highest grade was greater than or equal to grade 10. Combined schools were defined as having a lowest grade less than or equal to grade 3 and a highest grade greater than or equal to grade 9, or a lowest grade in grades 4 through 8 and a highest grade in grades 10 through 12. High schools and combined schools were combined into one category for sampling.

The public school sampling frame was stratified by instructional level (elementary, middle, and high school/combined), locale (city, urban fringe, town, rural), and school size (less than 300, 300 to 499, 500 to 999, and 1,000 or more). Within the primary strata, schools were also sorted by geographic region and by percent minority enrollment in the school to produce additional implicit stratification. Within each primary stratum, the specified sample size was allocated to size classes in rough proportion to the aggregate square root of the enrollment of the schools in the class. After the stratum sample sizes were determined, a sample of 1,004 schools was selected systematically from the sorted file using independent random starts. The sample contained 401 elementary schools, 301 middle schools, and 302 high/combined schools. The 1,004 schools were located in 838 school districts.
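
The allocation and selection steps can be sketched in a few lines. The sketch below is an illustrative simplification, not the actual FRSS selection program; the stratum labels and enrollment figures are hypothetical.

    import math
    import random

    def sqrt_allocation(strata, total_sample):
        """Allocate a total sample across strata in proportion to the
        aggregate square root of school enrollment in each stratum.
        (Rounding may require small adjustments to hit the exact total.)"""
        measure = {name: sum(math.sqrt(e) for e in enrollments)
                   for name, enrollments in strata.items()}
        overall = sum(measure.values())
        return {name: round(total_sample * m / overall)
                for name, m in measure.items()}

    def systematic_sample(frame, n):
        """Equal-probability systematic selection from a sorted frame,
        using a random start and a fixed skip interval."""
        interval = len(frame) / n
        start = random.uniform(0, interval)
        return [frame[int(start + k * interval)] for k in range(n)]

    # Hypothetical strata keyed by level/locale/size class; values are
    # school enrollments, sorted as the frame would be (by region and
    # percent minority within stratum).
    strata = {
        "elementary/city/300-499": [310, 350, 420, 480],
        "high/rural/1000+": [1100, 1400, 2000],
    }
    print(sqrt_allocation(strata, total_sample=5))
    print(systematic_sample(list(range(100)), n=5))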

Respondent and Response Rates

Questionnaires and cover letters were mailed in early July 1999. While individual elementary, middle, and high schools were sampled, the questionnaires were mailed to the districts with which the schools were associated. A separate questionnaire was enclosed for each sampled school. This is the same approach used by the U.S. General Accounting Office (GAO) to conduct its study of school facilities in 1994.

The cover letter indicated that the survey was designed to be completed by district-level personnel who were very familiar with the school facilities in the district. Often this was a district facilities coordinator (although the title of the position varied). The letter indicated that the respondent might want to consult with other district-level personnel or with school-level personnel, such as the principal of the selected school, in answering some of the questions. The respondent section on the front of the questionnaire indicated that while most questionnaires were completed by district-level respondents, some were completed by school-level respondents (usually the school principal). To maintain the focus on schools, which were the sampled units, the report refers to schools indicating or reporting various findings, even though respondents were primarily district-level personnel reporting about the sampled school. Telephone follow-up was conducted from late July through September 1999 with districts that did not respond to the initial questionnaire mailing.

Of the 1,004 schools selected for the sample, 14 were found to be out of the scope of the survey, usually because the school was no longer in existence. This left a total of 990 eligible schools in the sample. Completed questionnaires were received for 903 schools, or 91 percent of the eligible schools. The weighted response rate was also 91 percent. Weighted item nonresponse rates for individual questionnaire items ranged from 0 percent to 0.7 percent. Because the item nonresponse rate was so low, imputation for item nonresponse was not implemented.
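
The unweighted response rate follows directly from these counts (a trivial check; the weighted rate additionally uses the survey weights, which are not shown here):

    # Unweighted response rate from the counts reported above.
    sampled, out_of_scope, completed = 1004, 14, 903
    eligible = sampled - out_of_scope          # 990 eligible schools
    print(round(100 * completed / eligible))   # 91 (percent)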

Sampling and Nonsampling Errors

The responses were weighted to produce national estimates (see table A-1). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability.
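
As an illustration of this kind of weighting, the sketch below builds base weights from selection probabilities and inflates them within nonresponse adjustment cells. It is a generic illustration under assumed cell definitions, not the actual FRSS weighting specification.

    from collections import defaultdict

    def nonresponse_adjusted_weights(schools):
        # Base weight: inverse of the school's probability of selection.
        base = {s["id"]: 1.0 / s["p"] for s in schools}
        # Sum base weights by adjustment cell (here, the sampling stratum),
        # separately for all sampled schools and for respondents only.
        cell_all, cell_resp = defaultdict(float), defaultdict(float)
        for s in schools:
            cell_all[s["stratum"]] += base[s["id"]]
            if s["responded"]:
                cell_resp[s["stratum"]] += base[s["id"]]
        # Respondents absorb the weight of nonrespondents in their cell.
        return {s["id"]: base[s["id"]] * cell_all[s["stratum"]] / cell_resp[s["stratum"]]
                for s in schools if s["responded"]}

    # Hypothetical sampled schools: selection probability, stratum, response status.
    schools = [
        {"id": 1, "p": 0.02, "stratum": "A", "responded": True},
        {"id": 2, "p": 0.02, "stratum": "A", "responded": False},
        {"id": 3, "p": 0.05, "stratum": "B", "responded": True},
    ]
    print(nonresponse_adjusted_weights(schools))   # {1: 100.0, 3: 20.0}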

The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in data collection. These errors can sometimes bias the data. Nonsampling errors may include such problems as misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.

To minimize the potential for nonsampling errors, the questionnaire was pretested with respondents like those who completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics and the Office of the Under Secretary, U.S. Department of Education. Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Data were keyed with 100 percent verification.

Variances

The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of schools with all building types in adequate or better condition is 76.1 percent, and the estimated standard error is 1.8 percent. The 95 percent confidence interval for the statistic extends from [76.1 - (1.8 times 1.96)] to [76.1 + (1.8 times 1.96)], or from 72.6 to 79.6 percent. Tables of standard errors for each table and figure in the report are provided in appendix B.
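
The arithmetic in this example can be reproduced directly:

    # Reproduce the 95 percent confidence interval from the example above.
    estimate = 76.1    # percent of schools with all building types in
                       # adequate or better condition
    std_error = 1.8    # estimated standard error, in percentage points
    z = 1.96           # critical value for a 95 percent confidence interval
    lower, upper = estimate - z * std_error, estimate + z * std_error
    print(f"95% CI: {lower:.1f} to {upper:.1f} percent")   # 72.6 to 79.6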

Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variances of the statistics. To construct the replications, 50 stratified subsamples of the full sample were created and then dropped one at a time to define 50 jackknife replicates. A computer program (WesVarPC) was used to calculate the estimates of standard errors. WesVarPC is a stand-alone Windows application that computes sampling errors for a wide variety of statistics (totals, percents, ratios, log-odds ratios, general functions of estimates in tables, linear regression parameters, and logistic regression parameters).
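
A stripped-down version of this drop-one-group jackknife calculation is sketched below. The assignment of schools to 50 replicate groups is simplified here (the actual replicates were formed from stratified subsamples), and the data are hypothetical.

    import numpy as np

    def jackknife_se(values, weights, n_groups=50):
        """Delete-one-group jackknife standard error for a weighted mean."""
        groups = np.arange(len(values)) % n_groups   # simplified group assignment
        full = np.average(values, weights=weights)
        replicates = []
        for g in range(n_groups):
            keep = groups != g
            # Reweight retained units to compensate for the dropped group
            # (this matters for totals; it cancels for means and percentages).
            w_rep = weights[keep] * n_groups / (n_groups - 1)
            replicates.append(np.average(values[keep], weights=w_rep))
        replicates = np.asarray(replicates)
        # Mean square error of the replicates around the full-sample estimate.
        return np.sqrt((n_groups - 1) / n_groups * np.sum((replicates - full) ** 2))

    # Hypothetical data: "adequate or better" indicator (percent scale) and weights.
    rng = np.random.default_rng(0)
    y = 100 * rng.binomial(1, 0.76, size=900).astype(float)
    w = rng.uniform(50.0, 150.0, size=900)
    print(jackknife_se(y, w))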

The test statistics used in the analysis were calculated using the jackknife variances and thus appropriately reflected the complex nature of the sample design. In particular, an adjusted chi-square test using Satterthwaite's approximation to the design effect was used in the analysis of the two-way tables. Finally, Bonferroni adjustments were made to control for multiple comparisons where appropriate. For example, for an "experiment-wise" comparison involving g pairwise comparisons, each difference was tested at the 0.05/g significance level to control for the fact that g differences were simultaneously tested.

The Bonferroni adjustment results in a more conservative critical value being used when judging statistical significance. This means that comparisons that would have been significant with a critical value of 1.96 may not be significant with the more conservative critical value. For example, the critical value for comparisons between any two of the four categories of poverty concentration is 2.64, rather than 1.96. This means that there must be a larger difference between the estimates being compared for there to be a statistically significant difference.
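
The 2.64 critical value can be verified directly: four poverty categories yield g = 6 pairwise comparisons, and the two-sided normal critical value at the 0.05/6 level is about 2.64. A quick check using SciPy:

    from math import comb
    from scipy.stats import norm

    categories = 4
    g = comb(categories, 2)             # 6 pairwise comparisons
    alpha = 0.05 / g                    # Bonferroni-adjusted significance level
    critical = norm.ppf(1 - alpha / 2)  # two-sided critical value
    print(g, round(critical, 2))        # 6 2.64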


Definitions of Analysis Variables

Categories of the analysis variables are those used by GAO for its 1994 study.

School instructional level - Schools were classified according to their grade span in the 1996-97 Common Core of Data (CCD) School Universe File.

Elementary school - had grade 6 or lower and no grade higher than grade 8.

Secondary school - had no grade lower than grade 7 and had grade 7 or higher.

Combined school - had grades both lower than grade 7 and higher than grade 8.

School enrollment size - total number of students enrolled on October 1, 1998, based on responses to question 17 on the survey questionnaire.

Less than 300 students

300 to 599 students

600 or more students

Locale - as defined in the 1996-97 Common Core of Data (CCD).

Central city - a large or mid-size central city of a Metropolitan Statistical Area (MSA).

Urban fringe/large town - urban fringe is a place within an MSA of a central city, but not primarily its central city; large town is an incorporated place not within an MSA, with a population greater than or equal to 25,000.

Small town/rural - small town is an incorporated place not within an MSA, with a population less than 25,000 and greater than or equal to 2,500; rural is a place with a population less than 2,500 and/or a population density of less than 1,000 per square mile, and defined as rural by the U.S. Bureau of the Census.

Geographic region -

Northeast - Maine, New Hampshire, Vermont, Massachusetts, Rhode Island, Connecticut, New York, New Jersey, Pennsylvania

Midwest - Ohio, Indiana, Illinois, Michigan, Wisconsin, Minnesota, Iowa, Missouri, North Dakota, South Dakota, Nebraska, Kansas

South - Delaware, Maryland, District of Columbia, Virginia, West Virginia, North Carolina, South Carolina, Georgia, Florida, Kentucky, Tennessee, Alabama, Mississippi, Arkansas, Louisiana, Oklahoma, Texas

West - Montana, Idaho, Wyoming, Colorado, New Mexico, Arizona, Utah, Nevada, Washington, Oregon, California, Alaska, Hawaii

Percent minority enrollment in the school -

The percent of students enrolled in the school whose race or ethnicity is classified as one of the following: American Indian or Alaskan Native, Asian or Pacific Islander, black, or Hispanic, based on data in the 1996-97 CCD file.

5 percent or less

6 to 20 percent

21 to 50 percent

More than 50 percent

Percent of students at the school eligible for free or reduced-price lunch - This was based on responses to question 20 on the survey questionnaire; if it was missing from the questionnaire, it was obtained from the 1996-97 CCD file. This item served as the measurement of the concentration of poverty at the school.

Less than 20 percent

20 to 39 percent

40 to 69 percent

70 percent or more

It is important to note that many of the school characteristics used for independent analyses may also be related to each other. For example, enrollment size and instructional level of schools are related, with secondary schools typically being larger than elementary schools. Similarly, poverty concentration and minority enrollment are related, with schools with a high minority enrollment also more likely to have a high concentration of poverty. Other relationships between analysis variables may exist. Because of the relatively small sample size used in this study, it is difficult to separate the independent effects of these variables. These relationships, however, should be considered in the interpretation of the data presented in this report.

Comparisons to the 1994 U.S. General Accounting Office Study

The U.S. General Accounting Office (GAO) conducted a study in 1994 on the condition of public school facilities. The sample for the GAO survey was the public school sample from the NCES 1993-94 Schools and Staffing Survey (SASS). In May 1994, GAO mailed questionnaires for 9,956 sampled schools to the 5,459 districts in which these schools were located. While individual schools were sampled, the questionnaires were mailed to the districts with which the schools were associated. A separate questionnaire was enclosed for each sampled school. Completed questionnaires were accepted through early January 1995. Of the 9,956 schools in the sample, 393 were found to be ineligible, resulting in an adjusted sample of 9,563 schools. There were 7,478 completed, usable questionnaires returned to GAO, for a school response rate of 78 percent. The responses were weighted to adjust for nonresponse and produce national estimates.

Many of the items on the FRSS questionnaire were taken directly from the questionnaire used by GAO in 1994. The same questionnaire items and analysis variables were used with the intention of providing information about change in the condition of public school facilities between 1994 when GAO conducted its survey and 1999 when NCES conducted its survey.

However, the GAO information included in this report is provided as contextual information only. Statistical comparisons are not provided because GAO does not report standard errors for the data in its reports, and exact point estimates are missing for some of the comparative statements in the GAO reports.

In addition, in some cases the data are not completely comparable between the two studies. In particular, the way in which the cost estimates were obtained differed in the two studies. Both studies used the same wording for the cost question, which asked what would probably be the total cost of all repairs, renovations, and modernizations required to put the school's onsite buildings into good overall condition. In the FRSS study, schools that reported in the first question on the survey that the condition of any type of onsite building (original building, permanent addition, or temporary building) or any building feature (e.g., roofs, plumbing, electric power) was less than good (i.e., was given a rating of adequate, fair, poor, or replace) provided information about the cost of the needed repairs, renovations, and modernizations. The GAO study, however, asked about the condition of the types of onsite buildings, followed by the question about the cost to bring the onsite buildings into good overall condition. The question about the condition of various building features was asked several pages later in the GAO study. Thus, even though the wording of the cost question was the same in the FRSS and GAO studies, the two studies may include costs for different things, since respondents to the GAO study were not explicitly prompted to include costs associated with building features. However, since building features (e.g., roofs and plumbing) are important aspects of the condition of buildings, respondents to the GAO study may have included costs associated with these features in their cost estimates. Because of the methodological differences between the two studies, the cost estimates from them should not be directly compared.

When the FRSS data are reanalyzed to include only those schools that reported on the questionnaire that the condition of any type of onsite building was less than good, the percentage of schools that reported needing to spend money to bring the onsite buildings into good overall condition drops from 76 percent to 52 percent.

The total amount needed for the repairs, renovations, and modernizations for this group of schools was estimated to be approximately $111 billion, down from the approximately $127 billion needed by the schools with any type of building or any building feature in less than good condition.47 However, the data are still not completely comparable to the GAO study due to the different ordering of the questions.

GAO also presented two cost estimates based on data from their 1994 study: $112 billion and $101 billion (U.S. GAO 1995a). GAO derived the estimate of $112 billion by summing the amount reported for the cost of all repairs, renovations, and modernizations to put schools into good overall condition (estimated at $101 billion), and the amounts reported that would need to be spent in the next 3 years to comply with various federal mandates, such as asbestos removal and accessibility for students with disabilities (estimated at $11 billion). The $101 billion and the $11 billion were collected in two separate questions at different points in the survey. It is possible that the $112 billion includes some duplication of money needed, since the $11 billion needed to comply with federal mandates may or may not have been included by respondents in the $101 billion needed to put schools into good overall condition. The FRSS survey did not collect information about spending on federal mandates.

When the 1994 GAO estimate of $101 billion needed for repairs, renovations, and modernizations is adjusted for inflation to 1999 dollars, the inflation-adjusted estimate is $112 billion needed for repairs, renovations, and modernizations.48 However, since GAO does not provide either a confidence interval or a standard error for the estimate of $101 billion, it is not possible to do a statistical test of differences between the FRSS and GAO estimates.
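
The adjustment itself is a simple rescaling by the ratio of price levels. The report does not say which price index was used, so the ratio below is simply the one implied by the published figures:

    # Inflation adjustment by a price-level ratio. The ~1.109 ratio is the
    # value implied by the published $101 billion -> $112 billion figures;
    # the actual index used for the adjustment is not stated in the report.
    estimate_1994 = 101.0                        # $ billions, 1994 dollars
    implied_ratio = 112.0 / 101.0                # about 1.109 over 1994-1999
    print(round(estimate_1994 * implied_ratio))  # 112 ($ billions, 1999 dollars)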

Other cost estimates provided in the GAO reports include a combined dollar estimate for both the amount needed for repairs, renovations, and modernizations, and the amount needed to comply with federal mandates. For example, GAO reports that 84 percent of schools needed to spend money, and of those schools needing to spend money, the average amount needed per school was $1.7 million (U.S. GAO 1996). However, the percentage of schools needing to spend money and the estimate of $1.7 million include money needed to comply with federal mandates. GAO does not report the percentage of schools needing to spend money or an average amount needed per school for just repairs, renovations, and modernizations, which is what is reported for the FRSS survey.

GAO also reports more differences by school characteristics than are found in the FRSS study. For example, according to GAO, "…on every measure - proportion of schools reporting inadequate buildings, inadequate building features, and unsatisfactory environmental conditions; proportion of schools reporting needing to spend above the national average; and number of students attending these schools - the same subgroups consistently emerged as those with the most problems. These subgroups included central cities, the western region of the country, large schools, secondary schools, schools reporting student populations of at least 50.5 percent minority students, and schools reporting student populations of 70 percent or more poor students. The differences between subgroups, however, were often relatively small." (U.S. GAO 1996, p. 2). However, GAO provides no information about whether statistical testing was done, and if so, what critical values were used to indicate statistically significant differences. In addition, the sample size in the GAO study was much larger than in the FRSS study (7,478 versus 903 respondents). Estimates from larger samples typically have smaller standard errors than estimates from smaller samples; consequently, smaller differences tend to be statistically significant in surveys with larger samples than in surveys with smaller samples. Thus, the "relatively small" differences that GAO refers to would be more likely to be significant in the GAO study than in the FRSS study in any statistical testing that was done. That is, the FRSS study may have identified fewer differences as significant, and the GAO study more, simply as a function of the difference in sample size.
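
To make the sample-size point concrete: under simple random sampling, the standard error of a percentage shrinks roughly as the inverse square root of the sample size. The sketch below ignores the design effects present in both surveys, so the numbers are only indicative:

    import math

    def approx_se(p, n):
        # Approximate SE of a percentage under simple random sampling.
        return math.sqrt(p * (100 - p) / n)

    for n in (7478, 903):   # GAO and FRSS respondent counts
        se = approx_se(50, n)
        # Roughly the smallest detectable gap between two independent
        # percentages at the 1.96 critical value: 1.96 * sqrt(2) * SE.
        print(n, round(se, 2), round(1.96 * math.sqrt(2) * se, 1))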

Comparisons to the 2000 National Education Association Study

The National Education Association (NEA) recently published a report that provided a cost estimate of $322 billion needed for school modernization (NEA 2000). The study on which this report is based differs in many ways from the FRSS study. The major difference is what is included in this estimate. The NEA estimate has two components: funds for school infrastructure needs (estimated at $268 billion), and funds for education technology needs (estimated at $54 billion). School infrastructure needs consisted of new school construction (including the buildings, grounds [purchase, landscaping, and paving], and the fixtures, major equipment, and furniture necessary to furnish it); additions to existing facilities (including the fixtures, major equipment, and furniture necessary to furnish them); renovation of an existing facility; retrofitting of an existing facility (including for technology readiness, such as phone lines and fiber optic cable); deferred maintenance (maintenance necessary to bring a school facility up to good condition, or to replace a facility if it is in such poor condition that it cannot be brought up to good condition); and major improvements to grounds, such as landscaping and paving.

Education technology needs consisted of computers and peripherals; software; connectivity (including Internet access); networks; technology infrastructure (including electrical upgrades, and wiring and cables to, within, and between schools); distance education; maintenance and repair of technology equipment; and technology-related professional development and support for teachers. In contrast, the FRSS study asked for an estimate of the total cost of all repairs, renovations, and modernizations required to put the school's onsite buildings into good overall condition. Thus, the cost estimate from the FRSS study encompasses only a small part of what is included in the cost estimate from the NEA study.

In addition, the two studies obtained information in very different ways. The FRSS study was designed to be completed about sampled schools by district-level personnel who were very familiar with the school facilities in the district, in consultation with school-level personnel if necessary. These data were then weighted to produce national estimates that represent all regular public schools in the United States. In contrast, the NEA report combined state-level data from numerous sources to arrive at its cost estimate. These sources included policy and research literature, policy and research databases, the NEA annual Survey of State School Finance Legislation, and the NEA Modernization Needs Assessment Questionnaire, conducted in 1999. It should also be noted that while NEA describes its study as a 50-state report of school modernization needs, the study received usable responses about school infrastructure from only 24 states, and about education technology from only 2 states. The remaining data were derived by various estimation techniques. The FRSS study, on the other hand, had a 91 percent response rate and used a weighting process designed to adjust for variable probabilities of selection and differential nonresponse.

Background Information

The survey was performed under contract with Westat, using the Fast Response Survey System (FRSS). Westat's Project Director was Elizabeth Farris, and the Survey Managers were Laurie Lewis and Kyle Snow. Bernie Greene was the NCES Project Officer. The data were requested by the Office of the Under Secretary, U.S. Department of Education. Within the Office of the Under Secretary, input was provided by Thomas Corwin, James Houser, and Stephanie Stullich.

This report was reviewed by the following individuals:

Outside NCES

  • William Brenner, National Clearinghouse for Educational Facilities


  • Richard DiCola, Office of Vocational and Adult Education, U.S. Department of Education


  • Mary Filardo, 21st Century School Fund


  • Kimberly Jenkins, Office of the General Counsel, U.S. Department of Education


  • Judy Marks, National Clearinghouse for Educational Facilities


  • Eileen O'Brien, Office of Educational Research and Improvement, U.S. Department of Education


  • Mary Schifferli, Office for Civil Rights, U.S. Department of Education

Inside NCES

  • Ellen Bradburn, Early Childhood, International, and Crosscutting Studies Division


  • Kerry Gruber, Elementary/Secondary and Libraries Studies Division


  • Lee Hoffman, Elementary/Secondary and Libraries Studies Division


  • Marilyn McMillen, Chief Statistician


  • Valena Plisko, Associate Commissioner, Early Childhood, International, and Crosscutting Studies Division


  • John Ralph, Early Childhood, International, and Crosscutting Studies Division

For more information about the Fast Response Survey System (FRSS) or the survey on the condition of public school facilities, contact Bernie Greene, Early Childhood, International, and Crosscutting Studies Division, National Center for Education Statistics, Office of Educational Research and Improvement, U.S. Department of Education, 1990 K Street, NW, Washington, DC 20006, e-mail: Bernard_Greene@ed.gov, telephone (202) 502-7348.

Photo Credits

All photographs on the cover are from Photodisc, Inc., except the photograph in the third row, middle position, which is from Contact Press Images, © Alon Reininger, and the photograph in the bottom row, left position, which is from Stock, Boston, © Richard Pasley.

Footnotes

47 The standard error for the 52 percent of schools that needed to spend money is 1.7. The standard error for the $111 billion is $7.1 billion.

48 This inflation-adjusted estimate of $112 billion should not be confused with the estimate of $112 billion that GAO derived in 1994 by combining the estimate of $101 billion needed for repairs, renovations, and modernizations with the estimate of $11 billion needed to comply with federal mandates.
