Use of Educational Research and Development Resources by Public School Districts
NCES 90-084
April 1990

Survey Methodology and Data Reliability

In early January 1989, questionnaires (see attachment) were mailed to a national probability sample of 1,093 public school districts drawn from a universe of approximately 15,100 public school districts. Districts were asked to have the questionnaire completed by the person most knowledgeable about the district's use of R&D resources and were encouraged to have that person check with others in the district who might also be familiar with the use of R&D resources. Telephone follow-up of nonrespondents began in late January, and data collection was completed in March. The overall response rate was 95 percent (1,039 of 1,091 eligible districts). Item nonresponse was low: 1 percent or less for most items.

The sampling frame used for the survey was the Common Core of Data Public Education Agencies 1987-88. The sample was stratified by district size using seven size categories. Within the sampling strata, districts were further sorted by metropolitan status and by the nine regions used for the Regional Educational Laboratories (Northeast, Mid-Atlantic, Appalachia, North Central, Mid-continent, Southwest, Northwest, Far West, and Southeast). The sample was allocated to size classes approximately in proportion to the aggregate square root of enrollment of the districts in each size class, and it was adjusted to yield a minimum of approximately 100 districts from each region and a total of about 250 urban districts. The survey data were weighted to reflect these sampling rates (probabilities of selection) and were adjusted for nonresponse. Numbers in the tables and text have been rounded; percentages and averages were calculated from the actual estimates rather than the rounded values.
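
To make the square-root allocation rule concrete, the following is a minimal Python sketch; the size classes and enrollment figures are invented for illustration (the actual frame used seven size categories):

    # Hypothetical sketch of square-root allocation: the sample is divided
    # among size classes in proportion to the aggregate square root of
    # enrollment in each class. All figures below are invented.
    import math

    total_sample = 1093  # total districts to be sampled

    # size class -> enrollments of the districts in that class (hypothetical)
    size_classes = {
        "small":  [150, 220, 280, 400],
        "medium": [2600, 4000, 9500],
        "large":  [12000, 25000, 60000],
    }

    agg_sqrt = {k: sum(math.sqrt(e) for e in v) for k, v in size_classes.items()}
    grand_total = sum(agg_sqrt.values())

    # allocate approximately in proportion to the aggregate square roots
    allocation = {k: round(total_sample * s / grand_total)
                  for k, s in agg_sqrt.items()}
    print(allocation)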


The standard error is a measure of the variability due to sampling when estimating a statistic. It indicates how much variation there is among the possible estimates of a parameter for a sample of a given size, and it can be used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals from 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter in about 95 percent of the samples; this is a 95 percent confidence interval. For example, for the percentage of districts recognizing Regional Educational Laboratories, the estimate for all districts is 71.8 and the standard error is 2.1. The 95 percent confidence interval for this statistic extends from 71.8 - (2.1 times 1.96) to 71.8 + (2.1 times 1.96), or from 67.7 to 75.9.
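
The same computation, expressed as a short Python sketch using the figures from the example above:

    # 95 percent confidence interval for the example in the text:
    # estimate = 71.8 percent, standard error = 2.1
    estimate, se = 71.8, 2.1
    z = 1.96  # normal critical value for a 95 percent interval
    lower, upper = estimate - z * se, estimate + z * se
    print(f"95% CI: {lower:.1f} to {upper:.1f}")  # 95% CI: 67.7 to 75.9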

Estimates of standard errors were computed using a variance estimation procedure for complex sample survey data known as the jackknife. Table 13 presents standard errors for selected statistics; standard errors for statistics not included in that table can be obtained upon request.
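
The report does not reproduce the exact replication scheme; as a rough illustration only, a minimal delete-one jackknife for a simple mean (the survey's procedure was adapted to its stratified design) might look like this:

    # Minimal delete-one jackknife for the standard error of a mean.
    # The survey used a jackknife adapted to its complex stratified design;
    # this sketch, with invented data, shows only the basic idea.
    import math

    def jackknife_se(values):
        n = len(values)
        # recompute the statistic (here, a mean) with each case deleted
        reps = [sum(values[:i] + values[i + 1:]) / (n - 1) for i in range(n)]
        mean_rep = sum(reps) / n
        var = (n - 1) / n * sum((r - mean_rep) ** 2 for r in reps)
        return math.sqrt(var)

    data = [68.0, 75.0, 71.0, 74.0, 70.5]  # hypothetical district percentages
    print(round(jackknife_se(data), 2))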

In some cases, standard errors were relatively large because statistics were based on a small number of cases. This was true for statistics concerning the nine regions used for the Regional Educational Laboratories, especially when the estimates required further subsetting of the districts (e.g., the percentage of districts in Appalachia that reported very frequent use of R&D resources from the Regional Educational Laboratories, which is based only on those districts in Appalachia that both recognized the Regional Laboratories and reported receiving resources from them). In this report, an asterisk (*) is used to indicate those estimates greater than or equal to .10 (i.e., 10 percent) that had a 95 percent confidence interval greater than or equal to .10, and those estimates less than .10 that had a 95 percent confidence interval greater than or equal to .05. For example, the percentage of districts in the Southeast that entirely paid for at least some R&D resources from the Regional Laboratories is estimated at 21 percent, with a 95 percent confidence interval of 11; the asterisk warns readers that the estimate should not be considered highly precise. Estimates below .10 are flagged when the confidence interval is greater than or equal to .05 (rather than .10) because the standard error is then a relatively high proportion of the estimate; for practical purposes, however, the proportion of districts with the characteristic would remain quite small. The largest 95 percent confidence interval occurring in the text of this report is .18.
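
The flagging rule can be restated as a small Python sketch (estimates and confidence-interval widths expressed as proportions, as in the text):

    # Asterisk rule as described in the text: flag estimates >= .10 whose
    # 95 percent confidence interval is >= .10, and estimates < .10 whose
    # confidence interval is >= .05.
    def flagged(estimate, ci):
        if estimate >= 0.10:
            return ci >= 0.10
        return ci >= 0.05

    print(flagged(0.21, 0.11))  # True: the Southeast example is flagged
    print(flagged(0.21, 0.04))  # False: precise enough to go unflagged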


For categorical data, relationships between variables with two or more levels were tested in a two-way analysis, using chi-square tests at the .05 level of significance, adjusted for the average design effect. If the overall chi-square test was significant, it was followed by tests using a Bonferroni t statistic, which maintained an overall confidence level of 95 percent or better. Unless noted otherwise, all comparisons made in this report were statistically significant using these tests.
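
As a hedged sketch of this testing strategy (not the report's actual computation, which also adjusted for the average design effect), using scipy and a made-up two-way table of counts:

    # Two-way chi-square test with Bonferroni-adjusted follow-up comparisons.
    # The table below is invented; the design-effect adjustment used in the
    # report is omitted here.
    from scipy.stats import chi2_contingency

    table = [[120, 80, 40],   # rows: two district groups (hypothetical)
             [60, 90, 110]]   # columns: three response levels

    chi2, p, dof, expected = chi2_contingency(table)
    if p < 0.05:
        # follow up with pairwise comparisons; dividing alpha by the number
        # of comparisons keeps the overall confidence at 95 percent or better
        n_comparisons = 3
        alpha = 0.05 / n_comparisons
        print(f"overall test significant (p={p:.4f}); pairwise alpha={alpha:.4f}")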

Some of the variables used to classify districts were correlated (such as enrollment size and metropolitan status). However, the sample size of this survey limits our ability to examine the full multivariate nature of the responses by correlated classification variables. For example, fewer than 25 of the sampled districts were both small and urban, and only about 10 were both large and rural.

Survey estimates are also subject to errors of reporting and errors made in the collection of the data. These errors, called nonsampling errors, can sometimes bias the data. While general sampling theory can be used to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure; assessing them usually requires either an experiment conducted as part of the data collection procedures or the use of data external to the study.

Nonsampling errors may include differences in the respondents' interpretation of the meaning of the questions, differences related to the particular time the survey was conducted, and errors in data preparation. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire was pretested with respondents similar to those who completed the survey, and the questionnaire and instructions were extensively reviewed by the National Center for Education Statistics (NCES), Programs for the Improvement of Practice, and Information Services, all part of the Office of Educational Research and Improvement (OERI) in the U.S. Department of Education, and by the Committee for Evaluation and Information Systems (CEIS) of the Council of Chief State School Officers. Manual and machine editing of the questionnaires was conducted to check the data for accuracy and consistency. Districts with missing or inconsistent items were recontacted by telephone, and data were keyed with 100 percent verification.


Data are presented for all districts and by the following characteristics: region, metropolitan status, and size of enrollment. For size of enrollment, small districts are those with fewer than 2,500 students, medium-size districts are those with 2,500-9,999 students, and large districts are those with 10,000 or more students.
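
Expressed as a small Python helper, with thresholds taken directly from the definitions above:

    # Enrollment size classes used throughout the report
    def size_class(enrollment):
        if enrollment < 2500:
            return "small"
        if enrollment < 10000:
            return "medium"
        return "large"

    print(size_class(2499), size_class(2500), size_class(10000))
    # -> small medium large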

Regional Classifications

Regional classifications are those used for the Regional Educational Laboratories funded by the U.S. Department of Education. The Northeast includes districts in Connecticut, Maine, Massachusetts, New Hampshire, New York, Rhode Island, and Vermont. The Mid-Atlantic includes districts in Delaware, the District of Columbia, Maryland, New Jersey, and Pennsylvania. The Appalachia region includes districts in Kentucky, Tennessee, Virginia, and West Virginia. The Southeast includes districts in Alabama, Florida, Georgia, Mississippi, North Carolina, and South Carolina. The North Central region includes districts in Illinois, Indiana, Iowa, Michigan, Minnesota, Ohio, and Wisconsin. The Mid-continent includes districts in Colorado, Kansas, Missouri, Nebraska, North Dakota, South Dakota, and Wyoming. The Southwest includes districts in Arkansas, Louisiana, New Mexico, Oklahoma, and Texas. The Northwest includes districts in Alaska, Hawaii, Idaho, Montana, Oregon, and Washington. The Far West includes districts in Arizona, California, Nevada, and Utah.

Coding Specifications for Resources That Had Been Particularly Useful

The responses have been grouped by provider (see Table 11). There were many sources identified beyond the four OERI programs that are the primary focus of this survey. The information below provides illustrations of cited sources that were grouped in each designated category.


Providers

Other OERI: e.g., National Center for Education Statistics, LEAD centers, Principal Selection Guide.

Other U.S. Department of Education: e.g., Drug education programs, bilingual education resource centers.

Other Federal units: e.g., The General Accounting Office, U.S. Government Printing Office, Office of Technology Assessment.

Institutions of Higher Education: Institutions and institutional organizations other than those operating a National Research and Development Center.

Public Schools: Those other than ones cited as Developer Demonstrators of the National Diffusion Network.

State Intermediate Units: e.g., County offices of education, regional service organizations, cooperative service agencies.

State-wide central units: Includes, in addition to the several State education agencies or departments cited, special divisions at the State level, the governor's office, and technical assistance centers.

Associations, Foundations, Professional Societies: e.g., The Association for Supervision and Curriculum Development, Charles Stewart Mott Foundation, and Phi Delta Kappa.

Research Services: Almost exclusively the Educational Research Service.

Authors, Consultants, Private Corporations: e.g., Madeline Hunter, Harold Hodgkinson, Quest International, RMC.


Content Area


The "most useful" products and services identified by the respondents in Question 4 on the survey questionnaire have been grouped by content area to correspond to the content areas as defined in Question 3. The information below provides illustrations of the specific kinds of publications, programs, and other assistance reported. To help clarify these items, the provider named has also been shown when available.

Student Populations


At Risk: e.g., National Diffusion Network Developer Demonstrator models, "Early Prevention of School Failure," and "Reading Recovery"; technical assistance from the Miami desegregation center; OERI's handbook "Dealing with Dropouts"; "The Urban Superintendents Call to Action," by OERI in the U.S. Department of Education.

Handicapped: e.g., State special education division materials.

Gifted: e.g., State education department contact on programs for the gifted and talented.

Demographics: e.g., Educational Research Service (ERS) bulletin on enrollment data.

Bilingual: e.g., Title VII evaluation workshop by the U.S. Department of Education.

Rural: e.g., Rural education materials from the Appalachia Educational Laboratory.

Indian: Indian education program (no provider named).

Staffing and Staff Development

Staff development/teacher evaluation: e.g., "Continuing to Learn: A Guidebook for Teacher Development" by the Regional Laboratory for Educational Improvement of the Northeast and Islands; publications and training by the Center for Research on Elementary and Middle Schools.

Administrator development/evaluation: e.g., Educational management leadership job performance inventory by the Texas LEAD Center.

Curriculum

Drug education: e.g., "Drug Avengers," a U.S. Department of Education video; booklets from the National Parents Resource Institute for Drug Information.

Health and safety, general: e.g., Asbestos removal training through the School Boards Association.

Language arts: e.g., Curriculum guides in reading and language from the California State Department of Education; "Becoming a Nation of Readers" from OERI.

Math and science: e.g., Research on math development from the Southeastern Educational Improvement Laboratory; [one respondent's district] piloted an earth science program by the University of North Dakota.

Technology: e.g., "Power On" by the U.S. Office of Technology Assessment.

Thinking Skills: e.g., Thinking skills tapes from the Association for Supervision and Curriculum Development (ASCD).

International/multicultural education: e.g., ERIC search on foreign language programs in the middle schools.

Vocational: e.g., Vocational curriculum development program out of Oklahoma State University.

Curriculum development: e.g., "How to Conduct a Curriculum Audit" by the National Association of School Executives.

School and Classroom Management

Effective Schools/proven practices/models: e.g., "Onward to Excellence" program of the Northwest Regional Educational Laboratory; effective schools project of the Southwest Educational Development Laboratory; "Educational Programs That Work," description of NDN Developer Demonstrator projects; outcome-based education by the North Central Laboratory.

Miscellaneous research results: e.g., "New Dimensions in Education" by Northwest Regional Educational Laboratory.

Teaching/learning strategies: Teacher Expectations and Student Achievement (TESA) material from Phi Delta Kappa.

Choice/magnets/restructuring/school-based management: "Public School Choice: National Trends and Initiatives" by the New Jersey State Department of Education; assistance with shared governance by Research for Better Schools (Mid-Atlantic Laboratory).

School size/Class size: e.g., "Class Size and Public Policy," publication from OERI.

Grouping: e.g., ERIC research on graded organizational patterns.

Middle school education: e.g., Middle school research from the Far West Laboratory for Educational Research and Development.

Extended year: e.g., ERS article on year-round schools.

Discipline: e.g., Workshop on group conflict at educational service center #1 in Illinois.

Policymaking/strategic operations: e.g., "Developing Business-Education Partnerships" by the National School Volunteer Association; administrative services from the county (Riverside, CA) office of education.

Student Testing and Evaluation: e.g., Student Assessment Handbook by the Georgia Department of Education; ERIC literature search on weighted scores.

Early Childhood Education: e.g., Minnesota early childhood family education project.

Other: e.g., Technical assistance from the New York State Education Department.
