The Fast Response Survey System (FRSS) was established in 1975 by the National Center for Education Statistics (NCES), U.S. Department of Education. FRSS is designed to collect small amounts of issue-oriented data with minimal burden on respondents and within a relatively short timeframe. Surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Sample sizes are relatively small (usually about 1,000 to 1,500 respondents per survey) so that data collection can be completed quickly. Data are weighted to produce national estimates of the sampled education sector. The sample size permits limited breakouts by classification variables. However, as the number of categories within the classification variables increases, the sample size within categories decreases, which results in larger sampling errors for the breakouts by classification variables. FRSS collects data from state education agencies, local education agencies, public and private elementary and secondary schools, public school teachers, and public libraries.
The sample for the FRSS survey on programs for adults in public library outlets consisted of 1,011 public library outlets in the 50 states and the District of Columbia. The sample was selected from the NCES Fiscal Year 1997 Public Libraries Survey (PLS) Public Library Outlet File. The sampling frame consisted of 16,918 public library outlets, of which 8,954 were central/main library outlets, 7,120 were branch outlets, and 844 were bookmobiles or books-by-mail only services. The public library outlet sampling frame was stratified by type of outlet (central/main, branch, bookmobile/books-by-mail), metropolitan status (urban, suburban, rural), and size of the library outlet based on estimated size of the population served by the outlet (less than 5,000, 5,000 to 9,999, 10,000 to 24,999, 25,000 to 99,999, 100,000 to 249,999, 250,000 or more), for a total of 54 primary strata. Within the primary strata, outlets were also sorted by geographic region (Northeast, Southeast, Central, West) to induce implicit geographic stratification. The allocation of the total sample to a particular stratum was made in proportion to the aggregate square root of the estimated size of the population served within the stratum. Libraries were then selected systematically and with equal probabilities at rates that depended on the allocation indicated above. In effect, with the given sample allocation, libraries were selected with probabilities approximately proportionate to the square root of the population size. After the stratum sample sizes were determined, a sample of 1,011 outlets was selected systematically from the sorted file using independent random starts within each stratum. The sample contained 461 central/main libraries, 485 branch libraries, and 65 bookmobiles.
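The square-root allocation described above can be sketched in a few lines of code. The stratum population figures below are hypothetical, and the function name is illustrative rather than taken from the survey documentation; the sketch shows only the allocation step, not the subsequent systematic selection with random starts.

```python
import math

def sqrt_allocate(total_sample, stratum_pops):
    """Allocate a total sample across strata in proportion to the
    aggregate square root of the population served in each stratum."""
    # Aggregate square-root size per stratum (sum of sqrt(pop) over outlets).
    weights = [sum(math.sqrt(p) for p in pops) for pops in stratum_pops]
    total_weight = sum(weights)
    # Proportional allocation, rounded to whole outlets.
    return [round(total_sample * w / total_weight) for w in weights]

# Hypothetical strata: each is a list of outlet population-served values.
strata = [
    [2000, 3500, 4800],    # small outlets
    [12000, 18000],        # mid-size outlets
    [150000, 300000],      # large outlets
]
print(sqrt_allocate(100, strata))  # -> [13, 18, 69]
```

Allocating by the square root of size, rather than by size itself, tempers the dominance of large strata so that small outlets are still represented while larger outlets retain higher selection probabilities.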
Questionnaires and cover letters were mailed to the library directors in the sampled library outlets in mid-October 2000. Library outlets rather than administrative entities were sampled because the survey was seeking information about what individual outlets were doing to provide adult programming in selected areas. In addition, past experience with library surveys indicated that outlets are better sources for questions about services to library users, whereas administrative entities are better sources for policy questions. The cover letter indicated that the survey should be completed by the person who was most knowledgeable about programs for adults in that individual library outlet. The respondent information section on the front of the questionnaire indicated that the library director completed the questionnaire at 75 percent of the outlets, the assistant director completed it at 15 percent of the outlets, other personnel completed it at 8 percent of the outlets, and the title of the respondent was not known at 2 percent of the outlets. Telephone follow-up was conducted from November 2000 through mid-January 2001 with outlets that did not respond to the initial questionnaire mailing. Of the 1,011 outlets selected for the sample, 27 were found to be out of the scope of the survey, primarily because the outlet was no longer in existence. This left a total of 984 eligible outlets in the sample. Completed questionnaires were received for 954 outlets, or 97 percent of the eligible outlets. The weighted response rate was also 97 percent. Weighted item nonresponse rates for individual questionnaire items ranged from 0 percent to 1 percent. Imputation for item nonresponse was not implemented.
The responses were weighted to produce national estimates (see table A-1). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability.
The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in data collection. These errors can sometimes bias the data. Nonsampling errors may include such problems as misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.
To minimize the potential for nonsampling errors, the questionnaire was pretested several times with respondents like those who completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. In addition, NCES convened a meeting with practitioners in the field for advice on the questionnaire design and the appropriate respondent. The questionnaire and instructions were also extensively reviewed by NCES. The survey pretests and the various reviews during survey development indicated that the outlet was the appropriate sampling and data collection level for this survey, and that outlet staff were knowledgeable respondents about adult programming within their individual library outlet. Manual and machine editing of the questionnaire responses was conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Data were keyed with 100 percent verification.
The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of outlets offering adult literacy programs is 17.0 percent, and the estimated standard error is 1.3 percent. The 95 percent confidence interval for the statistic extends from [17.0 – (1.3 times 1.96)] to [17.0 + (1.3 times 1.96)], or from 14.5 to 19.5 percent. Tables of standard errors for each table and figure in the report are provided in appendix B.
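The confidence-interval arithmetic above can be reproduced directly; the estimate (17.0 percent) and standard error (1.3 percent) are the values from the adult literacy example in the text.

```python
# 95 percent confidence interval for the estimated percentage of outlets
# offering adult literacy programs (estimate and standard error from the text).
estimate = 17.0   # percent
std_error = 1.3   # percent
z = 1.96          # two-sided 95 percent critical value (normal approximation)

lower = estimate - z * std_error
upper = estimate + z * std_error
print(f"95% CI: {lower:.1f} to {upper:.1f} percent")  # -> 14.5 to 19.5 percent
```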
Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variances of the statistics. To construct the replications, 50 stratified subsamples of the full sample were created and then dropped one at a time to define 50 jackknife replicates. A computer program (WesVar) was used to calculate the estimates of standard errors. WesVar is a stand-alone Windows application that computes sampling errors for a wide variety of statistics (totals, percents, ratios, log-odds ratios, general functions of estimates in tables, linear regression parameters, and logistic regression parameters).
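A common form of the delete-one-group jackknife variance formula used by replication software such as WesVar can be illustrated on a toy example. WesVar itself is not reproduced here; the example below drops one observation at a time from a simple mean, for which the jackknife variance matches the textbook estimate s²/n exactly.

```python
def jackknife_variance(full_estimate, replicate_estimates):
    """Delete-one-group jackknife variance estimate:
    v = ((G - 1) / G) * sum((theta_g - theta_full)^2) over the G replicates."""
    g = len(replicate_estimates)
    return (g - 1) / g * sum((r - full_estimate) ** 2 for r in replicate_estimates)

# Toy example: the mean of four observations, with each replicate formed
# by dropping one observation (here each observation is its own "group").
data = [2, 4, 6, 8]
full = sum(data) / len(data)  # 5.0
replicates = [
    sum(data[:i] + data[i + 1:]) / (len(data) - 1)
    for i in range(len(data))
]
var = jackknife_variance(full, replicates)
print(f"jackknife variance = {var:.4f}, standard error = {var ** 0.5:.4f}")
```

In the survey itself, each of the 50 replicates was the full sample with one stratified subsample dropped, and the statistic of interest (a percentage or total) was recomputed for each replicate before applying the same formula.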
The test statistics used in the analysis were calculated using the jackknife variances and thus appropriately reflected the complex nature of the sample design. In addition, Bonferroni adjustments were made to control for multiple comparisons where appropriate (see Miller 1966, pp. 67-70). Bonferroni adjustments correct for the fact that a number of comparisons (g) are being made simultaneously. The adjustment is made by dividing the 0.05 significance level by the g comparisons, effectively increasing the critical value necessary for a difference to be statistically significant. This means that comparisons that would have been significant with an unadjusted critical t value of 1.96 may not be significant with the Bonferroni-adjusted critical t value. For example, the Bonferroni-adjusted critical t value for comparisons between any two of the three categories of metropolitan status is 2.40, rather than 1.96. This means that there must be a larger difference between the estimates being compared for there to be a statistically significant difference when the Bonferroni adjustment is applied than when it is not used.
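The adjusted critical value quoted above can be reproduced under a normal approximation. The sketch below uses the Python standard library's NormalDist; three metropolitan-status categories yield g = 3 pairwise comparisons, and the computed value of about 2.39 is in line with the 2.40 quoted in the text (the report's own software may differ slightly in the tail value).

```python
from statistics import NormalDist

def bonferroni_critical_value(alpha, g):
    """Two-sided critical value after dividing the significance level
    alpha across g simultaneous comparisons (normal approximation)."""
    return NormalDist().inv_cdf(1 - alpha / (2 * g))

# Three categories of metropolitan status -> 3 pairwise comparisons.
g = 3
t_adj = bonferroni_critical_value(0.05, g)
print(f"Bonferroni-adjusted critical value for g={g}: {t_adj:.2f}")
```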
Number of library visits per week—number of persons who entered the library outlet in a typical week during spring 2000, based on responses to question 1 on the survey questionnaire. This provides one measure of outlet size.
Metropolitan status—from the metropolitan status variable (C_MSA) on the NCES Fiscal Year 1997 Public Libraries Survey (PLS) Public Library Outlet File.
The survey was performed under contract with Westat, using the Fast Response Survey System. Westat's Project Director was Elizabeth Farris, and the Survey Manager was Laurie Lewis. Bernie Greene was the NCES Project Officer. The data were requested by three groups within the U.S. Department of Education: the Elementary, Secondary, and Library Studies Division at NCES, represented by Adrienne Chute; the National Institute on Postsecondary Education, Libraries, and Lifelong Learning of the Office of Educational Research and Improvement, represented by Barbara Humes; and the National Library of Education, represented by Christina Dunn.
This report was reviewed by the following individuals:
For more information about the Fast Response Survey System or the FRSS survey on programs for adults in public library outlets, contact Bernie Greene, Early Childhood, International, and Crosscutting Studies Division, National Center for Education Statistics, Office of Educational Research and Improvement, U.S. Department of Education, 1990 K Street, NW, Washington, DC 20006, telephone (202) 502-7348, e-mail: Frss@ed.gov
More information and publications about public libraries in the United States, based on information collected by the NCES Library Statistics Program, are available on the NCES World Wide Web page: http://nces.ed.gov/