Internet Access in U.S. Public Schools and Classrooms: 1994–2002
NCES 2004-011
October 2003

Appendix A: Methodology and Technical Notes

The Fast Response Survey System (FRSS) was established in 1975 by the National Center for Education Statistics (NCES), U.S. Department of Education. FRSS is designed to collect small amounts of issue-oriented data with minimal burden on respondents and with a quick turnaround from data collection to reporting.

Sample Selection

The sample of elementary and secondary schools for the FRSS survey on Internet access in public schools was selected from the 2000–2001 NCES Common Core of Data (CCD) Public School Universe File, the most up-to-date file available at the time the sample was drawn. The 2000–2001 CCD Public School Universe File contains over 96,600 schools. For this survey, regular elementary and secondary/combined schools were selected. Special education, vocational education, and alternative schools were excluded from the sampling frame, along with schools with a highest grade below first grade and those outside the 50 states and the District of Columbia. With these exclusions, the final sampling frame consisted of about 83,500 schools, of which about 62,500 were classified as elementary schools and about 21,000 as secondary/combined schools.15

A sample of 1,206 schools was selected from the public school frame. To select the sample, the frame of schools was stratified by instructional level (elementary, secondary/combined schools), enrollment size (less than 300 students, 300 to 999, 1,000 to 1,499, 1,500 or more), and percentage of students eligible for free or reduced-price lunch (less than 35 percent, 35 to 49 percent, 50 to 74 percent, 75 percent or more). Schools in the highest poverty category (schools with 75 percent or more students eligible for free or reduced-price lunch) were oversampled to permit analyses for that category.
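
As an illustration of this design, the sketch below (in Python) draws a stratified sample using the stratum boundaries described above. It is illustrative only: the record fields, sampling rates, and oversampling factor are hypothetical and do not reproduce the actual FRSS allocation of the 1,206 schools.

    import random

    def stratum_key(school):
        """Assign a school to a stratum using the boundaries described in the text.
        The field names ("level", "enrollment", "pct_free_lunch") are hypothetical."""
        size = school["enrollment"]
        lunch = school["pct_free_lunch"]
        size_class = ("<300" if size < 300 else
                      "300-999" if size < 1000 else
                      "1,000-1,499" if size < 1500 else "1,500+")
        poverty_class = ("<35" if lunch < 35 else
                         "35-49" if lunch < 50 else
                         "50-74" if lunch < 75 else "75+")
        return (school["level"], size_class, poverty_class)

    def draw_sample(frame, base_rate=0.012, oversample_factor=2.0, seed=2002):
        """Simple random sampling within each stratum; the highest poverty
        stratum ("75+") is sampled at a higher rate (oversampled)."""
        rng = random.Random(seed)
        strata = {}
        for school in frame:
            strata.setdefault(stratum_key(school), []).append(school)
        sample = []
        for key, schools in strata.items():
            rate = base_rate * (oversample_factor if key[2] == "75+" else 1.0)
            n = max(1, round(rate * len(schools)))
            sample.extend(rng.sample(schools, min(n, len(schools))))
        return sample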


Respondents and Response Rates

The three-page survey instrument was designed by Westat and NCES. The questions included on the survey addressed access to the Internet in public schools and classrooms, the types of Internet connections used, student access to the Internet outside of regular school hours, laptop loans, hand-held computers for students and teachers, school web sites, teacher professional development on how to integrate the use of the Internet into the curriculum, and technologies and procedures used to prevent student access to inappropriate material on the Internet.

In early October 2002, questionnaires were mailed to the principals of the 1,206 sampled schools. The principal was asked to forward the questionnaire to the technology coordinator or person most knowledgeable about Internet access at the school. Telephone follow-up of nonrespondents was initiated later in October, and data collection was completed in December. The respondent information section on the front of the questionnaire indicated that the technology coordinator completed the questionnaire at 34 percent of the schools, the principal completed it at 31 percent of the schools, and other personnel completed it at 35 percent of the schools. Seventeen schools were outside the scope of the survey, and 1,095 schools completed the survey. Thus, the final response rate was 92 percent (1,095 of 1,189 eligible schools). The weighted response rate was 93 percent. With the exception of the question on the number of hand-held computers provided to teachers and students for instructional purposes (which had an item nonresponse rate of 9.4 percent), weighted item nonresponse rates ranged from 0 percent to 3.1 percent.


Imputation for Nonresponse

Although item nonresponse for key items was very low, missing data were imputed for the 14 items listed in Table A-1. The missing items included both numerical data, such as counts of instructional rooms and computers, and categorical data, such as the provision of hand-held computers to students and teachers. The missing data were imputed using a "hot deck" approach to obtain a "donor" school from which the imputed values were derived. Under the hot deck approach, a donor school that matched selected characteristics of the school with missing data was identified. The matching characteristics included level, enrollment size class, type of locale, and total number of computers in the school. Once a donor was found, it was used to derive the imputed values for the school with missing data. For categorical items, the imputed value was simply the corresponding value from the donor school. For numerical items, an appropriate ratio (e.g., the proportion of instructional rooms with Internet access) was calculated for the donor school, and this ratio was applied to available data (e.g., reported number of instructional rooms) for the recipient school to obtain the corresponding imputed value. All missing items for a given school were imputed from the same donor.
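
The ratio form of the hot-deck procedure can be summarized in a short sketch (in Python), assuming a donor school has already been matched on level, enrollment size class, type of locale, and total number of computers. The record structure and item names below are hypothetical.

    def impute_from_donor(recipient, donor, categorical_items, ratio_items):
        """Hot-deck imputation sketch. Categorical items are copied directly from
        the donor. Numerical items are imputed by applying a donor-derived ratio
        (e.g., the donor's proportion of instructional rooms with Internet access)
        to the recipient's reported base quantity (e.g., its reported number of
        instructional rooms)."""
        imputed = dict(recipient)
        for item in categorical_items:
            if imputed.get(item) is None:
                imputed[item] = donor[item]
        # ratio_items maps each numerical item to its base quantity,
        # e.g., {"rooms_with_internet": "instructional_rooms"}.
        for item, base in ratio_items.items():
            if imputed.get(item) is None:
                ratio = donor[item] / donor[base]
                imputed[item] = round(ratio * recipient[base])
        return imputed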


Sampling and Nonsampling Errors

The survey responses were weighted to produce national estimates (Table A-2). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are based on the sample selected and, consequently, are subject to sampling variability. The standard error is the measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of public schools with a web site in 2002 is 86 percent, and the estimated standard error is 1.1 percent. The 95 percent confidence interval for the statistic extends from 86 – (1.1 times 1.96) to 86 + (1.1 times 1.96), or from 84 to 88 percent. The coefficient of variation ("c.v.," also referred to as the "relative standard error") expresses the standard error as a percentage of the quantity being estimated. The c.v. of an estimate (y) is defined as c.v. = (s.e./y) x 100. Throughout this report, for any coefficient of variation higher than 50 percent, the data are flagged with the note that they should be interpreted with caution, as the value of the estimate is very unstable.
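
The confidence interval and coefficient of variation described above can be reproduced with a few lines of code; the example values are those given in the text for the percentage of public schools with a web site.

    def confidence_interval_95(estimate, standard_error):
        """95 percent confidence interval: the estimate plus or minus
        1.96 standard errors."""
        return estimate - 1.96 * standard_error, estimate + 1.96 * standard_error

    def coefficient_of_variation(estimate, standard_error):
        """c.v. = (s.e. / y) x 100: the standard error expressed as a
        percentage of the quantity being estimated."""
        return standard_error / estimate * 100

    low, high = confidence_interval_95(86, 1.1)    # about 83.8 to 88.2, i.e., 84 to 88 percent
    cv = coefficient_of_variation(86, 1.1)         # about 1.3 percent, well below the 50 percent caution flag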

Because the data from this survey were collected using a complex sampling design, the sampling errors of the estimates from this survey (e.g., estimates of proportions) are typically larger than would be expected based on a simple random sample. Not taking the complex sample design into account can lead to an underestimation of the standard errors associated with such estimates. To generate accurate standard errors for the estimates in this report, standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variance of the statistic. To construct the replications, 50 stratified subsamples of the full sample were created and then dropped one at a time to define 50 jackknife replicates. A computer program (WesVar) was used to calculate the estimates of standard errors. WesVar is a stand-alone Windows application that computes sampling errors from complex samples for a wide variety of statistics (totals, percents, ratios, log-odds ratios, general functions of estimates in tables, linear regression parameters, and logistic regression parameters).
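
The general form of the jackknife variance computation is sketched below. This is not the WesVar implementation: it assumes the delete-one-group (JK1) form with equal-sized replicate groups and omits the replicate weight adjustments made within variance strata.

    def jackknife_variance(replicate_estimates, full_sample_estimate):
        """Scaled mean square error of the replicate estimates around the
        full-sample estimate. For a delete-one-group jackknife with R
        replicates, the JK1 scale factor is (R - 1) / R."""
        r = len(replicate_estimates)
        return (r - 1) / r * sum((est - full_sample_estimate) ** 2
                                 for est in replicate_estimates)

    def jackknife_standard_error(replicate_estimates, full_sample_estimate):
        return jackknife_variance(replicate_estimates, full_sample_estimate) ** 0.5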

The test statistics used in the analysis were calculated using the jackknife variances and thus appropriately reflect the complex nature of the sample design. In particular, Bonferroni adjustments were made to control for multiple comparisons where appropriate. For example, for an "experiment-wise" comparison involving g pairwise comparisons, each difference was tested at the 0.05/g significance level to control for the fact that g differences were simultaneously tested. The Bonferroni adjustment was also used for previous FRSS Internet reports. The Bonferroni adjustment is appropriate to test for statistical significance when the analyses are mainly exploratory (as in this report) because it results in a more conservative critical value for judging statistical significance. This means that comparisons that would have been significant with a critical value of 1.96 may not be significant with the more conservative critical value. For example, the critical value for comparisons between any two of the four categories of poverty concentration is 2.64 rather than 1.96.
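
The Bonferroni-adjusted critical value can be computed directly, as sketched below; with the four categories of poverty concentration there are g = 6 pairwise comparisons, which yields the 2.64 critical value quoted above.

    from math import comb
    from statistics import NormalDist

    def bonferroni_critical_value(num_categories, alpha=0.05):
        """Two-sided critical value when each of the g pairwise differences
        among num_categories groups is tested at the alpha / g level."""
        g = comb(num_categories, 2)
        return NormalDist().inv_cdf(1 - (alpha / g) / 2)

    print(bonferroni_critical_value(4))    # about 2.64, versus 1.96 with no adjustment
    print(bonferroni_critical_value(2))    # 1.96: a single comparison needs no adjustment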

When comparing percentage or ratio estimates across a family of three or more ordered categories (e.g., categories defined by percent minority enrollment), regression analyses were used to test for trends rather than a series of paired comparisons. For proportions, the analyses involved fitting models in WesVar with the ordered categories as the independent variable and the (dichotomous) outcome of interest (e.g., whether or not the school made computers with Internet access available before school) as the dependent variable. For testing the overall significance, an analysis of variance (ANOVA) model was fitted by treating the categories of the independent variables as nominal categories. For the trend test, a simple linear regression model was used with the categories of the independent variable as an ordinal quantitative variable. In both cases, tests of significance were performed using an adjusted Wald F-test. The test is applicable to data collected through complex sample surveys and is analogous to F-tests in standard regression analysis. For estimated ratios, similar tests of overall significance and linear trends were performed using procedures analogous to those described by Skinner, Holt, and Smith.16 A test was considered significant if the p-value associated with the statistic was less than 0.05.
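
As a simplified illustration of the trend test, the sketch below fits a weighted linear slope to the category estimates. It shows only the point estimate of the trend; in the report, the model was fit in WesVar and the slope's significance was judged with the design-based adjusted Wald F-test, which cannot be reproduced without the jackknife replicate weights.

    import numpy as np

    def linear_trend_slope(category_scores, category_estimates, weights=None):
        """Weighted least squares slope of the outcome estimates (e.g., the
        percentage of schools making Internet-capable computers available
        before school) across ordered category scores (e.g., 1 through 4 for
        the percent minority enrollment categories)."""
        x = np.asarray(category_scores, dtype=float)
        y = np.asarray(category_estimates, dtype=float)
        w = np.ones_like(x) if weights is None else np.asarray(weights, dtype=float)
        x_bar = np.average(x, weights=w)
        y_bar = np.average(y, weights=w)
        return np.sum(w * (x - x_bar) * (y - y_bar)) / np.sum(w * (x - x_bar) ** 2)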

The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in collection of the data. These errors can sometimes bias the data. Nonsampling errors may include such problems as the difference in the respondents' interpretation of the meaning of the question; memory effects; misrecording of responses; incorrect editing, coding, or data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used. To minimize the potential for nonsampling errors, the questionnaire on Internet access in public schools was pretested in 1994, and again each time it was substantially modified. The questionnaire was last pretested for the fall 2001 survey, since a few new topics were introduced in the survey. The pretesting was done with public school technology coordinators and other knowledgeable respondents like those who would complete the survey. During the design of the survey, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were intensively reviewed by NCES.

Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone to resolve problems. Data were keyed with 100 percent verification.


Definitions of Terms Used in the Questionnaire

Types of Internet connections

T3/DS3—Dedicated digital transmission of data and voice at a speed of 45 megabits per second; composed of 672 channels.

Fractional T3—One or more channels of a T3/DS3 line. Used for data and voice transmission at speeds of less than 45 megabits per second.

T1/DS1—Dedicated digital transmission of data and voice at a speed of 1.5 megabits per second; composed of 24 channels.

Fractional T1—One or more channels of a T1/DS1 line. Used for data and voice transmission at speeds of less than 1.5 megabits per second.

Cable modem—Dedicated transmission of data through cable TV wires at speeds of up to 2 megabits per second.

DSL (Digital Subscriber Line)—Refers collectively to ADSL, SDSL, HDSL, and VDSL. DSLs have a dedicated digital transmission speed of up to 32 megabits per second.

ISDN (Integrated Services Digital Network)—Sends voice and data over digital telephone lines or normal telephone wires at speeds of up to 128 kilobits per second.

56 KB—Dedicated digital transmission of data at a speed of 56 kilobits per second.

Dial-up connection—Data transmission through a normal telephone line upon command, at a maximum speed of 56 kilobits per second (for example, AOL or Earthlink).

Types of technologies to prevent student access to inappropriate material on the Internet

Blocking software—Uses a list of web sites that are considered inappropriate and prevents access to those sites.

Filtering software—Blocks access to sites containing keywords, alone or in context with other keywords.

Monitoring software—Records e-mails, instant messages, chats, and the web sites visited.

Intranet—Controlled computer network similar to the Internet, but accessible only to those who have permission to use it. Intranet system managers can limit user access to Internet material.


Definitions of Analysis Variables

Instructional level—Schools were classified according to their grade span in the 2000–2001 Common Core of Data (CCD) School Universe File. Data for combined schools are included in the totals and in analyses by other school characteristics, but are not shown separately.

    Elementary school—Had grade 6 or lower and no grade higher than grade 8.

    Secondary school—Had no grade lower than grade 7 and had grade 7 or higher.

School size—Total enrollment of students based on the 2000–2001 CCD School Universe File.

    Less than 300 students
    300 to 999 students
    1,000 or more students

Locale—As defined in the 2000–2001 CCD School Universe File.

    City—A central city of a Consolidated Metropolitan Statistical Area (CMSA) or Metropolitan Statistical Area (MSA).

    Urban fringe—Any incorporated place, Census-designated place, or non-place territory within a CMSA or MSA of a large or mid-size city and defined as urban by the Census Bureau.

    Town—An incorporated place or Census-designated place with a population greater than or equal to 2,500 and located outside a CMSA or MSA.

    Rural—Any incorporated place, Census-designated place, or non-place territory designated as rural by the Census Bureau.

Percent minority enrollment—The percent of students enrolled in the school whose race or ethnicity is classified as one of the following: American Indian or Alaskan Native; Asian or Pacific Islander; Black, non-Hispanic; or Hispanic, based on data in the 2000–2001 CCD School Universe File.

    Less than 6 percent
    6 to 20 percent
    21 to 49 percent
    50 percent or more

Percent of students eligible for free or reduced-price school lunch—This was based on responses to question 27 on the survey questionnaire; if it was missing from the questionnaire (1.5 percent of all cases), it was obtained from the 2000–2001 CCD School Universe File. This item served as a measure of the concentration of poverty at the school.

    Less than 35 percent
    35 to 49 percent
    50 to 74 percent
    75 percent or more

Geographic region—One of four regions used by the Bureau of Economic Analysis of the U.S. Department of Commerce, the National Assessment of Educational Progress, and the National Education Association. Obtained from the 2000–2001 CCD School Universe File.

    Northeast—Connecticut, Delaware, District of Columbia, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont.

    Southeast—Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, and West Virginia.

    Central—Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin.

    West—Alaska, Arizona, California, Colorado, Hawaii, Idaho, Montana, Nevada, New Mexico, Oklahoma, Oregon, Texas, Utah, Washington, and Wyoming.

It is important to note that many of the school characteristics used for independent analysis may also be related to each other. For example, enrollment size and instructional level of schools are related, with secondary schools typically being larger than elementary schools. Similarly, poverty concentration and minority enrollment are related, with schools with a higher minority enrollment also more likely to have a high concentration of poverty. Other relationships between analysis variables may exist. Because of the relatively small sample size used in this study, it is difficult to separate the independent effects of these variables. Their existence, however, should be considered in the interpretation of the data.


Survey Acknowledgements

The survey was performed under contract with Westat. Westat's Project Director was Laurie Lewis, and the survey manager was Anne Kleiner. Bernie Greene was the NCES Project Officer. Debbie Alexander directed the data collection efforts, assisted by Ratna Basavaraju and Anjali Pandit. Alla Belenky was the programmer, Carol Litman was the editor, and Sylvie Warren was responsible for formatting the report.

    This report was reviewed by the following individuals:

    Outside NCES

  • John Bailey, Director, Office of Educational Technology, U.S. Department of Education
  • Stephanie Cronen, American Institutes for Research
  • Peirce Hammond, Mathematics and Science Initiative, U.S. Department of Education
  • Lawrence Lanahan, American Institutes for Research
  • Jenelle Leonard, Office of Elementary and Secondary Education, U.S. Department of Education
  • Barbara Means, SRI International
  • Ram Singh, National Center for Education Research, Institute of Education Sciences, U.S. Department of Education
  • Bernadette Adams Yates, Policy and Program Studies, Office of the Under Secretary, U.S. Department of Education
    Inside NCES
  • William Hussar, Early Childhood, International, and Crosscutting Studies Division
  • Edith McArthur, Early Childhood, International, and Crosscutting Studies Division
  • Jeffrey Owings, Associate Commissioner, Elementary/Secondary and Libraries Studies Division
  • Val Plisko, Associate Commissioner, Early Childhood, International, and Crosscutting Studies Division
  • Marilyn Seastrom, Chief Statistician
  • Lisa Ward, Assessment Division


15During data collection, a number of sampled schools were found to be outside the scope of the survey, usually because they were closed or merged. This reduced the number of schools in the sampling frame to an estimated 82,036.

16C.J. Skinner, D. Holt, and T.M.F. Smith, Analysis of Complex Surveys (Chichester: John Wiley & Sons, 1989).
