Service-Learning and Community Service in K-12 Public Schools
NCES: 1999043
September 1999

Technical Notes

The sample of public schools for the Fast Response Survey System (FRSS) survey on service-learning and community service was selected from the 1996-1997 Common Core of Data (CCD) public school universe file, the most up-to-date file available at the time the sample was drawn. Over 79,000 regular schools were included in the CCD universe file, of which 49,000 were elementary schools, 15,000 were middle schools, and 16,000 were high schools or schools with combined elementary/secondary grades. Elementary, middle, and high schools (including combined schools) were eligible for this survey. Special education, vocational education, and alternative schools were excluded, along with schools whose highest grade was lower than first grade and schools outside the 50 states and the District of Columbia.

FRSS surveys generally have relatively small sample sizes of no more than 2,000 schools. A stratified sample of 2,000 schools was selected for the survey on service-learning and community service. The sample was allocated to three instructional-level categories as follows: 200 elementary schools, 500 middle schools, and 1,300 secondary/combined schools. This sample design was developed on the basis of feasibility calls and a survey pretest, which indicated that few elementary schools had service-learning. The distribution of schools by instructional level was designed primarily to enable a relatively detailed analysis of secondary/combined schools, where most service-learning was expected to occur. The much smaller samples of elementary and middle schools were intended to provide some limited information on the prevalence of service-learning and community service among these types of schools.

Within each instructional level, the specified sample sizes were allocated to "substrata" defined by type of locale (city, urban fringe, town, and rural) and size class in rough proportion to the aggregate square root of the enrollment of the schools in the substratum. The use of the square root of enrollment to determine the sample allocation gave greater selection probabilities to the larger schools within a given instructional level and thus was expected to provide reasonably good sampling precision for estimates that are correlated with enrollment (e.g., the number of students in the school who are involved with service-learning or community service).
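
A minimal sketch of this allocation rule may help. The substratum labels and enrollment figures below are illustrative, not survey data; in practice a final controlled-rounding step would reconcile the rounded allocations with the total sample size.

```python
import math

# Hypothetical enrollments for three illustrative substrata (not survey data).
substrata = {
    "city, under 300": [250, 280, 150, 120],
    "urban fringe, 500-999": [600, 750, 980],
    "rural, 300-499": [320, 410, 470, 350],
}

sample_size = 30  # total sample to allocate across the substrata

# Aggregate square root of enrollment in each substratum.
sqrt_totals = {name: sum(math.sqrt(e) for e in schools)
               for name, schools in substrata.items()}
grand_total = sum(sqrt_totals.values())

# Allocate in rough proportion to the aggregate square roots; larger schools
# thus carry more weight than under allocation proportional to school counts.
allocation = {name: round(sample_size * total / grand_total)
              for name, total in sqrt_totals.items()}
print(allocation)
```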

Prior to sample selection, schools in the FRSS frame were sorted by region (Northeast, Southeast, Central, West) within primary strata defined by instructional level (elementary, middle, secondary), type of locale, and enrollment size class (under 300, 300-499, 500-999, 1,000-1,499, 1,500 or more). The specified number of schools was selected from each primary stratum with equal probabilities. Although the school sample was self-weighting within each primary stratum, the overall probabilities varied by instructional level and by size class within level.
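
The report does not state the exact selection method used within a stratum, but one common way to draw an equal-probability sample from a frame sorted by region is a systematic draw, which gives implicit regional stratification. The sketch below assumes systematic selection and uses a hypothetical stratum frame.

```python
import random

REGIONS = ["Northeast", "Southeast", "Central", "West"]

# Hypothetical frame for one primary stratum; real strata are defined by
# instructional level, type of locale, and enrollment size class.
stratum_frame = [{"id": i, "region": random.choice(REGIONS)}
                 for i in range(137)]

def systematic_sample(frame, n):
    """Sort the stratum by region, then take every (N/n)-th record from a
    random start, so each school has the same selection probability n/N."""
    ordered = sorted(frame, key=lambda rec: REGIONS.index(rec["region"]))
    interval = len(ordered) / n
    start = random.uniform(0, interval)
    return [ordered[int(start + k * interval)] for k in range(n)]

sample = systematic_sample(stratum_frame, 12)
```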

The 3-page survey instrument was designed by Westat and NCES in collaboration with the Corporation for National Service and Alan Melchior of the Center for Human Resources, Brandeis University. The questions included on the survey addressed the policies and support for community service and service-learning in K-12 public schools. The survey began with a brief section on community service, including questions on whether students participated in community service activities, whether participation in these activities was required, and whether the school arranged community service opportunities. The majority of the survey, however, focused on service-learning. Specifically, the survey results provide reliable national data on:

  • The percentage of public schools with service-learning activities,
  • The percentage of students participating in service-learning activities,
  • The percentage of school districts and schools with policies encouraging or requiring the integration of service-learning in the course curriculum,
  • The ways in which schools are implementing service-learning and the specific academic subjects in which it is occurring,
  • Support for teachers interested in integrating service-learning into their course curriculum,
  • Public schools' main reasons for encouraging student involvement in service-learning activities, and
  • Student participation in organizing and evaluating activities for service-learning.

The survey findings also provide reliable national estimates on sources of funding and volunteer participation in service-learning and community service activities taking place in K-12 public schools.

In March 1999, questionnaires were mailed to the principals in the 2,000 sampled schools. The principal was asked to forward the questionnaire to the person most knowledgeable about community service activities and service-learning at the school. Telephone followup of nonrespondents was initiated in mid-March, and data collection was completed in May. A total of 1,832 schools completed the survey, and 15 other schools were found to be outside the scope of the survey. Thus, the unweighted response rate was 92 percent (1,832 of the eligible 1,985 schools). The weighted response rate was 93 percent.

Survey responses were weighted to produce national estimates. For estimation purposes, sampling weights were attached to each school data record. The sampling weights reflect the schools' overall probabilities of selection and include upward adjustments to compensate for differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability. The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is known as a 95 percent confidence interval. For example, the estimated percentage of public schools with service-learning is 32 percent, and the standard error is 2.0 percent. The 95 percent confidence interval for the statistic extends from 32 - (2.0 x 1.96) to 32 + (2.0 x 1.96), or from 28.1 percent to 35.9 percent.
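
The arithmetic in the worked example above can be expressed directly:

```python
def confidence_interval_95(estimate, standard_error):
    """95 percent confidence interval: estimate plus or minus 1.96
    standard errors."""
    margin = 1.96 * standard_error
    return estimate - margin, estimate + margin

# The example from the text: a 32 percent estimate with a 2.0 percent
# standard error.
low, high = confidence_interval_95(32.0, 2.0)
print(f"{low:.1f} to {high:.1f}")  # 28.1 to 35.9
```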

To properly reflect the complex features of the sample design, standard errors of the survey-based estimates were calculated using jackknife replication. Under the jackknife replication approach, 50 subsamples or "replicates" were formed in a way that preserved the basic features of the full sample design. A set of estimation weights (referred to as "replicate weights") was then generated for each jackknife replicate. Using the full sample weights and the replicate weights, estimates of survey statistics were calculated for the full sample and for each of the 50 jackknife replicates. The sum of the squared deviations of the replicate estimates from the full-sample estimate then provided a measure of the variance of the survey statistic; the standard error is the square root of this variance. The relative standard errors (i.e., coefficients of variation) of estimates from this study ranged from 3 percent to 12 percent for most national estimates. These measures express the standard errors as a percentage of the estimates.
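
A minimal sketch of the replication idea follows. The replicate estimates are hypothetical, and the scaling applied to the sum of squared deviations varies across jackknife variants; since the text does not specify one, none is applied here.

```python
import math

def jackknife_se(full_estimate, replicate_estimates):
    """Standard error via jackknife replication: the variance is the sum of
    squared deviations of the replicate estimates from the full-sample
    estimate (some jackknife variants scale this sum, e.g., by (R - 1) / R),
    and the standard error is the square root of that variance."""
    variance = sum((rep - full_estimate) ** 2 for rep in replicate_estimates)
    return math.sqrt(variance)

# Hypothetical replicate estimates around a full-sample estimate of 32.0.
replicates = [31.6, 32.3, 31.9, 32.4, 31.8]
se = jackknife_se(32.0, replicates)
print(se, 100 * se / 32.0)  # standard error and relative standard error (CV)
```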

Standard errors for all of the estimates are presented in the tables. The standard errors for figures 1-3 follow the references. All specific comparative statements made in this report have been tested for statistical significance through chi-squared tests or t-tests adjusted for multiple comparisons using the Bonferroni adjustment and are significant at the .05 level or better.
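
A short sketch of the Bonferroni adjustment referred to above: the familywise .05 level is divided by the number of comparisons tested, and each individual test is held to that stricter level.

```python
def bonferroni_alpha(familywise_alpha, num_comparisons):
    """Per-comparison significance level under the Bonferroni adjustment."""
    return familywise_alpha / num_comparisons

# E.g., a family of 10 pairwise t-tests held to a familywise .05 level means
# each individual test must be significant at the .005 level.
print(bonferroni_alpha(0.05, 10))  # 0.005
```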

The standard errors reported for some statistics in this report reflect design effects ranging from 1 to 5 or more. The design effect is the ratio of the variance of an estimate under the actual sample design to the variance that would have been obtained from a simple random sample of the same size; design effects greater than 1 thus inflate the simple random sample standard error. For example, a design effect of 1.5 means that the variance of an estimate is 1.5 times the corresponding simple random sample variance. Design effects vary by statistic and domain of analysis, as illustrated in the sketch below.
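
The 1.5 example can be made concrete; the simple random sample variance formula for a proportion, p(1 - p)/n, is standard, and the figures below are illustrative only.

```python
def design_effect(complex_variance, srs_variance):
    """Ratio of the variance under the complex design to the variance a
    simple random sample of the same size would yield."""
    return complex_variance / srs_variance

def srs_proportion_variance(p, n):
    """Variance of an estimated proportion under simple random sampling."""
    return p * (1 - p) / n

# Illustrative: if the jackknife variance is 1.5 times the SRS variance for
# a 32 percent estimate from 1,832 respondents, the design effect is 1.5.
srs_var = srs_proportion_variance(0.32, 1832)
print(design_effect(1.5 * srs_var, srs_var))  # 1.5
```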

The large design effects of 5 or more generally applied to estimates for all levels combined and arose primarily from the disproportionate allocation of the total sample to the three instructional levels. This allocation rested on the assumption, which proved erroneous, that service-learning was virtually nonexistent in elementary and middle schools; it was intended to provide excellent representation of high schools, where most service-learning was expected to occur, but only limited representation of elementary and middle schools. Variable sampling fractions within and across the three instructional levels also contributed to the total design effects.

The survey estimates are also subject to nonsampling errors, which can arise from nonobservation (nonresponse or noncoverage), errors of reporting, and errors made in the collection of the data. These errors can sometimes bias the data. Nonsampling errors may include such problems as differences in the respondents' interpretation of the meaning of the questions; memory effects; misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time and place the survey was conducted; and errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are difficult to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.

To minimize the potential for nonsampling errors, the survey was pretested with public school service-learning coordinators and other individuals knowledgeable about service activities. During the survey design process and the survey pretest, an effort was made to check for consistent interpretation of the questions and to eliminate ambiguous terms. As previously mentioned, however, there may have been some problems in the way schools interpreted the definitions of community service and service-learning. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics (NCES). Manual and machine editing of the questionnaire responses was conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone to resolve problems. Data were keyed with 100 percent verification.
