The Postsecondary Education Quick Information System (PEQIS) was established in 1991 by the National Center for Education Statistics, U.S. Department of Education. PEQIS is designed to conduct brief surveys of postsecondary institutions or state higher education agencies on postsecondary education topics of national importance. Surveys are generally limited to two or three pages of questions, with a response burden of about 30 minutes per respondent. Most PEQIS institutional surveys use a previously recruited, nationally representative panel of institutions. The PEQIS panel was originally selected and recruited in 1991-92. In 1996, the PEQIS panel was reselected to reflect changes in the postsecondary education universe that had occurred since the original panel was selected. A modified Keyfitz approach was used to maximize overlap between the 1996 panel and the 1991-92 panel. The sampling frame for the PEQIS panel recruited in 1996 was constructed from the 1995-96 IPEDS Institutional Characteristics file. Institutions eligible for the PEQIS frame for the panel recruited in 1996 included 2-year and 4-year (including graduate-level) institutions (both institutions of higher education and other postsecondary institutions), and less-than-2-year institutions of higher education located in the 50 states and the District of Columbia: a total of 5,353 institutions.
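The details of the modified Keyfitz approach are not spelled out here, but the flavor of the classic Keyfitz retention rule can be sketched as follows. This is a simplified per-institution Bernoulli form with invented probabilities, not the actual PEQIS algorithm: a previously sampled institution is retained with probability min(1, p_new/p_old), and the leftover probability mass is spread over institutions outside the old panel.

```python
import random

def keyfitz_retain(in_old_sample, p_old, p_new):
    """Decide whether an institution enters the new panel while
    maximizing overlap with the old panel (simplified Keyfitz rule)."""
    if in_old_sample:
        # Old-panel institutions are kept with probability min(1, p_new/p_old).
        return random.random() < min(1.0, p_new / p_old)
    # Institutions not in the old panel absorb any excess probability so
    # that their overall selection probability still equals p_new.
    excess = max(0.0, p_new - p_old) / (1.0 - p_old)
    return random.random() < excess

# Example: an institution sampled before with p = 0.25 and now assigned
# p = 0.30 is retained with probability min(1, 0.30/0.25) = 1.0, i.e., always.
print(keyfitz_retain(True, 0.25, 0.30))
```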
The PEQIS sampling frame for the panel recruited in 1996 was stratified by instructional level (4-year, 2-year, less-than-2-year), control (public, private nonprofit, private for-profit), highest level of offering (doctor's/first professional, master's, bachelor's, less than bachelor's), total enrollment, and status as either an institution of higher education or other postsecondary institution. Within each of the strata, institutions were sorted by region (Northeast, Southeast, Central, West), whether the institution had a relatively high minority enrollment, and whether the institution had research expenditures exceeding $1 million. The sample of 1,669 institutions was allocated to the strata in proportion to the aggregate square root of total enrollment. Institutions within a stratum were sampled with equal probabilities of selection. The modified Keyfitz approach resulted in 80 percent of the institutions in the 1996 panel overlapping with the 1991-92 panel.
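As a rough illustration of allocating a fixed sample across strata in proportion to the aggregate square root of total enrollment, consider the sketch below. The strata and enrollment figures are invented; a real allocation would also use controlled rounding so the stratum allocations sum exactly to the target sample size.

```python
import math

def allocate(strata_enrollments, n_total):
    """Allocate n_total sample slots in proportion to the sum of
    sqrt(enrollment) over institutions within each stratum."""
    agg = {s: sum(math.sqrt(e) for e in enrolls)
           for s, enrolls in strata_enrollments.items()}
    total = sum(agg.values())
    # round() can make the allocations sum slightly off n_total;
    # production allocations use controlled rounding.
    return {s: round(n_total * a / total) for s, a in agg.items()}

frame = {
    "public 4-year": [12000, 25000, 8000],
    "private 2-year": [900, 400, 1500],
}
print(allocate(frame, 100))  # {'public 4-year': 80, 'private 2-year': 20}
```

Within each stratum, institutions are then drawn with equal probability, matching the equal-probability selection described above.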
Panel recruitment was conducted with the 338 institutions that were not part of the overlap sample. During panel recruitment, 20 institutions were found to be ineligible for PEQIS, primarily because they had closed or offered only correspondence courses. The final unweighted response rate at the end of PEQIS panel recruitment with the institutions that were not part of the overlap sample was 98 percent (312 of the 318 eligible institutions). Across all 1,669 institutions selected for the 1996 panel, 1,628 of 1,634 eligible institutions participated. The number of eligible institutions was 1,634 because, in addition to the 20 institutions found ineligible during recruitment, 15 institutions in the overlap sample were determined to be ineligible for various reasons.
Each institution in the PEQIS panel was asked to identify a campus representative to serve as survey coordinator. The campus representative facilitates data collection by identifying the appropriate respondent for each survey and forwarding the questionnaire to that person.
The sample for this survey consisted of two-thirds of the institutions in the PEQIS panel,15 for a sample of 1,084 institutions. In January 1998, questionnaires (see appendix B) were mailed to the PEQIS coordinators at the institutions. Coordinators were told that the survey was designed to be completed by the person or office at the institution most knowledgeable about students with disabilities and the services the institution provides to these students. Fifteen institutions were found to be out of the scope of the survey because they were closed, leaving 1,069 eligible institutions. These 1,069 institutions represent a universe of approximately 5,040 2-year and 4-year (including graduate-level) postsecondary education institutions in the 50 states and the District of Columbia. Telephone followup of nonrespondents was initiated in early February 1998; data collection and clarification were completed in early April 1998. The unweighted survey response rate was 91 percent (977 responding institutions divided by the 1,069 eligible institutions in the sample); the weighted survey response rate was also 91 percent. The unweighted overall response rate was 91 percent (the 99.6 percent panel recruitment participation rate multiplied by the 91.4 percent survey response rate), and the weighted overall response rate was likewise 91 percent (the 99.7 percent weighted panel recruitment participation rate multiplied by the 91.2 percent weighted survey response rate).
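The overall response rate is simply the product of the two stage-level rates, as this arithmetic check (using the figures reported above) shows:

```python
# Reproducing the response-rate arithmetic reported above.
survey_rr = 977 / 1069        # unweighted survey response rate, about 91.4%
panel_rr = 1628 / 1634        # panel recruitment participation rate, about 99.6%
overall_rr = panel_rr * survey_rr  # about 0.911, reported as 91 percent
print(f"{survey_rr:.1%} {panel_rr:.1%} {overall_rr:.1%}")
```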
Weighted item nonresponse rates ranged from 0 percent to 3 percent. Item nonresponse rates for most items were less than 1 percent. Because the item nonresponse rates were so low, imputation for item nonresponse was not implemented. Instead, item nonresponse for ratios was handled by dropping cases with missing values from both the numerator and denominator for the calculation of affected percents. For sums, item nonresponse was handled by adding footnotes to the text and tables.
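A minimal sketch of the stated rule for ratios, dropping cases with missing values from both the numerator and the denominator before computing a percent (the response vector below is invented; None marks item nonresponse):

```python
def percent_yes(values):
    """Percent of 'yes' (1) responses among nonmissing cases only."""
    answered = [v for v in values if v is not None]
    return 100.0 * sum(answered) / len(answered)

print(percent_yes([1, 0, None, 1, 1]))  # 75.0 -- the missing case is dropped
```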
The response data were weighted to produce national estimates (see table 18). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability. The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in data collection. These errors can sometimes bias the data. Nonsampling errors may include such problems as misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.
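The weighting described above can be sketched as a base weight (the inverse of the selection probability) times a nonresponse adjustment computed within weighting classes. The class structure and the numbers below are illustrative assumptions, not the actual PEQIS adjustment cells:

```python
def final_weight(p_select, class_eligible_wt, class_respondent_wt):
    """Base weight (inverse selection probability) times a class-level
    nonresponse adjustment (eligible weight over respondent weight)."""
    base = 1.0 / p_select
    nr_adjust = class_eligible_wt / class_respondent_wt
    return base * nr_adjust

# An institution sampled with probability 0.2, in a class where respondents
# carry 90 percent of the eligible weight, gets 5 * (100/90) = 5.56.
print(round(final_weight(0.2, 100.0, 90.0), 2))
```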
To minimize the potential for nonsampling errors, the questionnaire was pretested with respondents at institutions like those that completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics and the Office of Special Education and Rehabilitative Services, U.S. Department of Education. Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Data were keyed with 100 percent verification.
The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of institutions reporting that they enrolled students with disabilities in 1996-97 or 1997-98 is 72.1 percent, and the estimated standard error is 1.3 percent. The 95 percent confidence interval for the statistic extends from [72.1 − (1.96 × 1.3)] to [72.1 + (1.96 × 1.3)], or from 69.6 to 74.6 percent. Tables of standard errors for each table and figure in the report are provided in appendix A.
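The interval arithmetic in the example can be verified directly:

```python
# 95 percent confidence interval for the example estimate above.
estimate, se, z = 72.1, 1.3, 1.96
low, high = estimate - z * se, estimate + z * se
print(f"({low:.1f}, {high:.1f})")  # (69.6, 74.6)
```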
Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean squared error of the replicate estimates around the full sample estimate provides an estimate of the variances of the statistics (Wolter, 1985). To construct the replications, 50 stratified subsamples of the full sample were created and then dropped one at a time to define 50 jackknife replicates (Wolter, 1985, p. 183). A computer program (WesVarPC), distributed free of charge by Westat through the Internet,16 was used to calculate the estimates of standard errors. WesVarPC is a stand-alone Windows application that computes sampling errors for a wide variety of statistics (totals, percents, ratios, log-odds ratios, general functions of estimates in tables, linear regression parameters, and logistic regression parameters).
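A simplified sketch of delete-one-group jackknife variance estimation with 50 replicates follows. The data here are synthetic, and unlike WesVarPC the sketch recomputes an unweighted mean rather than reweighting the retained cases; it illustrates only the replicate-and-drop structure and the variance formula.

```python
import random

random.seed(1)
data = [random.random() for _ in range(500)]
G = 50
# Form 50 systematic subsamples of the full sample, one per replicate.
groups = [data[g::G] for g in range(G)]

def mean(xs):
    return sum(xs) / len(xs)

theta_full = mean(data)
replicates = []
for g in range(G):
    # Drop group g and recompute the statistic on the retained cases.
    kept = [x for i, grp in enumerate(groups) if i != g for x in grp]
    replicates.append(mean(kept))

# Grouped jackknife variance: ((G - 1) / G) * sum of squared deviations
# of the replicate estimates around the full-sample estimate.
var = (G - 1) / G * sum((t - theta_full) ** 2 for t in replicates)
print(theta_full, var ** 0.5)  # estimate and its jackknife standard error
```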
The test statistics used in the analysis were calculated using the jackknife variances and thus appropriately reflected the complex nature of the sample design. In particular, an adjusted chi-square test using Satterthwaite's approximation to the design effect was used in the analysis of the two-way tables.17 Finally, Bonferroni adjustments were made to control for multiple comparisons where appropriate. For example, for an "experiment-wise" comparison involving g pairwise comparisons, each difference was tested at the 0.05/g significance level to control for the fact that g differences were simultaneously tested.
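The Bonferroni rule amounts to testing each of the g differences at the 0.05/g level. The sketch below applies it to a single pairwise z test; the estimates, standard errors, and number of comparisons are invented for illustration:

```python
import math

def two_sided_p(z):
    """Two-sided normal p-value: P(|Z| > z) = erfc(|z| / sqrt(2))."""
    return math.erfc(abs(z) / math.sqrt(2))

g = 6                 # number of simultaneous pairwise comparisons
alpha = 0.05 / g      # Bonferroni-adjusted significance level

est1, se1 = 72.1, 1.3  # hypothetical estimates with jackknife standard errors
est2, se2 = 65.4, 1.8
z = (est1 - est2) / math.sqrt(se1**2 + se2**2)
print(two_sided_p(z) < alpha)  # True: significant after adjustment
```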
The following institutional characteristics were used as variables for analyzing the survey data:
The survey was performed under contract with Westat, using the Postsecondary Education Quick Information System (PEQIS). This is the eighth PEQIS survey to be conducted. Westat's Project Director was Elizabeth Farris, and the Survey Manager was Laurie Lewis. Bernie Greene was the NCES Project Officer. The Office of Special Education and Rehabilitative Services, U.S. Department of Education, requested the data. The following individuals reviewed this report:
Outside NCES
Inside NCES
For more information about the Postsecondary Education Quick Information System or the Survey on Students with Disabilities at Postsecondary Education Institutions, contact Bernie Greene, Early Childhood, International, and Crosscutting Studies Division, National Center for Education Statistics, Office of Educational Research and Improvement, 555 New Jersey Avenue, NW, Washington, DC 20208-5651, e-mail: Bernard_Greene@ed.gov, telephone (202) 219-1366.
Horn, L., and Berktold, J. (1999). Students with disabilities in postsecondary education: A profile of preparation, participation, and outcomes. National Center for Education Statistics, U.S. Department of Education, Statistical Analysis Report No. 1999-187. Washington, DC: U.S. Government Printing Office.
Lewis, L., and Farris, E. (1994). Deaf and hard of hearing students in postsecondary education. National Center for Education Statistics, U.S. Department of Education, Statistical Analysis Report No. 94-394. Washington, DC: U.S. Government Printing Office.
Rao, J.N.K., and Scott, A. (1984). On chi-square tests for multi-way contingency tables with cell proportions estimated from survey data. Annals of Statistics, 12, 46-60.
U.S. Department of Education, National Center for Education Statistics. (1999). Digest of Education Statistics 1998 (NCES 1999-036). Washington, DC: U.S. Government Printing Office.
Wolter, K. (1985). Introduction to variance estimation. New York: Springer-Verlag.
15 The PEQIS panel is divided into three subpanels. Surveys typically use two out of three of the subpanels on a rotating basis to reduce respondent burden.
16 WesVarPC version 2 is available through the Internet at http://www.westat.com/wesvar/.
17 For example, see Rao and Scott, 1984.
18 Definitions for level are from the data file documentation for the IPEDS Institutional Characteristics file, U.S. Department of Education, National Center for Education Statistics.