The Postsecondary Education Quick Information System (PEQIS) was established in 1991 by the National Center for Education Statistics (NCES), U.S. Department of Education (ED). PEQIS is designed to conduct brief surveys of postsecondary institutions or state higher education agencies on postsecondary education topics of national importance. Surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Most PEQIS institutional surveys use a previously recruited, nationally representative panel of institutions. The PEQIS panel was originally selected and recruited in 1991–92. In 1996, the PEQIS panel was reselected to reflect changes in the postsecondary education universe that had occurred since the original panel was selected. A modified Keyfitz approach was used to maximize overlap between the panels; this resulted in 80 percent of the institutions in the 1996 panel overlapping with the 1991–92 panel. The PEQIS panel was reselected again in 2002. A modified Keyfitz approach was used to maximize the overlap between the 1996 and 2002 samples; 81 percent of the institutions overlapped between these two panels.
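The core of the Keyfitz retention rule mentioned above can be sketched in a few lines (an illustrative simplification; the modified procedure actually used for PEQIS involves additional steps, and the probabilities below are hypothetical):

```python
import random

def keyfitz_retain(p_old, p_new, rng=random.random):
    """Keyfitz rule sketch: a unit already in the old sample is retained
    in the new sample with probability min(1, p_new / p_old). This
    maximizes expected overlap between the panels while preserving each
    unit's new selection probability."""
    return rng() < min(1.0, p_new / p_old)

# Hypothetical probabilities: a unit whose probability rose is always
# retained; one whose probability fell is retained only some of the time.
keep_a = keyfitz_retain(0.5, 0.6, rng=lambda: 0.9)   # retained
keep_b = keyfitz_retain(0.8, 0.2, rng=lambda: 0.5)   # not retained
```

Units not retained (or not in the old sample) are then selected to bring each stratum up to its new target size.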
At the time the 1991–92 and 1996 PEQIS panels were selected, NCES was defining higher education institutions as institutions accredited at the college level by an agency recognized by the Secretary of the U.S. Department of Education. However, ED no longer makes a distinction between higher education institutions and other postsecondary institutions that are eligible to participate in federal financial aid programs. Thus, NCES no longer categorizes institutions as higher education institutions. Instead, NCES now categorizes institutions on the basis of whether the institution is eligible to award federal Title IV financial aid, and whether the institution grants degrees at the associate's level or higher. Institutions that are both Title IV-eligible and degree-granting are approximately equivalent to higher education institutions as previously defined. It is this subset of postsecondary institutions (Title IV-eligible and degree-granting) that is included in the 2002 PEQIS sampling frame.
The sampling frame for the 2002 PEQIS panel was constructed from the 2000 Integrated Postsecondary Education Data System (IPEDS) Institutional Characteristics file. Institutions eligible for the 2002 PEQIS frame included 2-year and 4-year (including graduate-level) institutions that are both Title IV-eligible and degree-granting, and are located in the 50 states and the District of Columbia: a total of 4,175 institutions. The 2002 PEQIS sampling frame was stratified by instructional level (4-year, 2-year), control (public, private nonprofit, private for-profit), highest level of offering (doctor's/first-professional, master's, bachelor's, less than bachelor's), and total enrollment. Within each of the strata, institutions were sorted by region (Northeast, Southeast, Central, West) and by whether the institution had a relatively high minority enrollment. The sample of 1,610 institutions was allocated to the strata in proportion to the aggregate square root of total enrollment. Institutions within a stratum were sampled with equal probabilities of selection. The modified Keyfitz approach resulted in 81 percent of the institutions in the 2002 panel overlapping with the 1996 panel. Panel recruitment was conducted with the 300 institutions that were not part of the overlap sample. During panel recruitment, 6 institutions were found to be ineligible for PEQIS. The final unweighted response rate at the end of PEQIS panel recruitment with the institutions that were not part of the overlap sample was 97 percent (285 of the 294 eligible institutions). There were a total of 1,600 eligible institutions in the entire 2002 panel, because 4 institutions in the overlap sample were determined to be ineligible for various reasons. The final unweighted participation rate across the institutions that were selected for the 2002 panel was 99 percent (1,591 participating institutions out of 1,600 eligible institutions). The weighted panel participation rate was also 99 percent.
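The allocation rule described above — sample sizes proportional to each stratum's aggregate square root of enrollment — can be sketched as follows (illustrative enrollment figures, and simple rounding rather than the exact PEQIS allocation procedure):

```python
import math

def allocate(sample_size, strata_enrollments):
    """Allocate a fixed total sample to strata in proportion to each
    stratum's sum of square roots of institutional enrollment.
    strata_enrollments maps a stratum label to a list of enrollments."""
    agg = {h: sum(math.sqrt(e) for e in enrollments)
           for h, enrollments in strata_enrollments.items()}
    total = sum(agg.values())
    return {h: round(sample_size * a / total) for h, a in agg.items()}

# Hypothetical strata: two small institutions vs. one large one.
# sqrt(100) + sqrt(400) = 30, and sqrt(900) = 30, so the two strata
# receive equal allocations despite very different total enrollments.
shares = allocate(10, {"A": [100, 400], "B": [900]})
```

Square-root allocation deliberately damps the influence of enrollment, so large institutions do not dominate the sample as they would under allocation proportional to enrollment itself.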
Each institution in the PEQIS panel was asked to identify a campus representative to serve as survey coordinator. The campus representative facilitates data collection by identifying the appropriate respondent for each survey and forwarding the questionnaire to that person.
The sample for the survey consisted of all of the institutions in the 2002 PEQIS panel. The weighted number of eligible institutions in the survey represents the estimated universe of approximately 4,130 Title IV-eligible, degree-granting institutions in the 50 states and the District of Columbia.12 In late February 2002, questionnaires (see appendix B) were mailed to the PEQIS coordinators at the institutions. Coordinators were told that the survey was designed to be completed by the person at the institution most knowledgeable about the institution's distance education course offerings. Telephone followup of nonrespondents was initiated in mid-March 2002; data collection and clarification were completed in June 2002. During data collection, one institution was determined to be ineligible for this survey. For the eligible institutions, an unweighted response rate of 94 percent (1,500 responding institutions divided by the 1,599 eligible institutions in the sample for this survey) was obtained. The weighted response rate for this survey was also 94 percent. The unweighted overall response rate was 93 percent (99.4 percent panel participation rate multiplied by the 93.8 percent survey response rate). The weighted overall response rate was also 93 percent (99.3 percent weighted panel participation rate multiplied by the 93.8 percent weighted survey response rate).
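The overall response rate is simply the product of the two stage rates, which can be verified directly from the figures above:

```python
def overall_response_rate(panel_rate, survey_rate):
    """Overall response rate as the product of the panel participation
    rate and the survey-stage response rate."""
    return panel_rate * survey_rate

# Unweighted survey-stage rate: 1,500 responding of 1,599 eligible.
survey_rate = 1500 / 1599                      # about 0.938
# Unweighted overall rate: 99.4% panel participation x 93.8% survey
# response, which rounds to the 93 percent reported.
overall = overall_response_rate(0.994, survey_rate)
```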
Weighted item nonresponse rates ranged from 0 to 1 percent for all items. Imputation for item nonresponse was not implemented. Estimated totals using nonimputed data implicitly impute a zero value for all missing data. These implicit zero imputations mean that the estimates of totals will underestimate the true population totals. The total number of enrollments in all distance education courses was missing for 5 cases in the sample. For college-level, credit-granting courses, the number of enrollments in courses at both levels was missing for 5 cases in the sample, and the number of enrollments in undergraduate and graduate courses was missing for 11 cases in the sample. The total number of different distance education courses was missing for 8 cases in the sample. For college-level, credit-granting courses, the number of courses at both levels was missing for 7 cases in the sample, the number of undergraduate courses was missing for 11 cases in the sample, and the number of graduate courses was missing for 10 cases in the sample.
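A minimal illustration of why unimputed item nonresponse pulls estimated totals downward (made-up numbers, not survey data):

```python
# Each value is an institution's reported count; None marks item
# nonresponse. Summing with missing values treated as zero can only
# fall short of the total that complete reporting would yield, so the
# estimated total is effectively a lower bound.
reported = [120, None, 80, None, 50]

estimated_total = sum(v or 0 for v in reported)  # missing -> implicit zero
```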
The response data were weighted to produce national estimates (see table A-1). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability.
The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in data collection. These errors can sometimes bias the data. Nonsampling errors may include such problems as misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.
To minimize the potential for nonsampling errors, the questionnaire was pretested with respondents at institutions like those that completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were extensively reviewed by NCES. Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Data were keyed with 100 percent verification.
The standard error is a measure of the variability of an estimate due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of institutions reporting that they offered any distance education courses in 2000–2001 is 56.3 percent, and the estimated standard error is 1.2 percent. The 95 percent confidence interval for the statistic extends from [56.3 - (1.96 × 1.2)] to [56.3 + (1.96 × 1.2)], or from 53.9 to 58.7 percent. Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variances of the statistics. To construct the replications, 50 stratified subsamples of the full sample were created and then dropped one at a time to define 50 jackknife replicates. A computer program (WesVar) was used to calculate the estimates of standard errors. WesVar is a stand-alone Windows application that computes sampling errors for a wide variety of statistics (totals, percents, ratios, log-odds ratios, general functions of estimates in tables, linear regression parameters, and logistic regression parameters).
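The confidence-interval arithmetic and the general shape of a delete-one-group jackknife can be sketched in a few lines (a simplified JK1-style version that treats each observation as its own group; the PEQIS estimates used 50 stratified replicate groups and WesVar):

```python
import math

def jackknife_se(values):
    """Delete-one jackknife sketch: drop each of the G groups in turn,
    recompute the mean, and estimate the variance from the squared
    deviations of the replicate estimates around the full-sample
    estimate, scaled by (G - 1) / G."""
    g = len(values)
    full = sum(values) / g
    reps = [(sum(values) - v) / (g - 1) for v in values]  # drop one at a time
    var = (g - 1) / g * sum((r - full) ** 2 for r in reps)
    return math.sqrt(var)

def ci95(estimate, se):
    """95 percent confidence interval: estimate +/- 1.96 standard errors."""
    return (estimate - 1.96 * se, estimate + 1.96 * se)

# The example from the text: 56.3 percent with a standard error of 1.2.
low, high = ci95(56.3, 1.2)   # about (53.9, 58.7)
```

For the mean of a simple random sample, this jackknife reproduces the usual standard error of the mean; its value lies in extending the same drop-a-replicate logic to complex designs and nonlinear statistics, as WesVar does.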
The test statistics used in the analysis were calculated using the jackknife variances and thus appropriately reflected the complex nature of the sample design. In addition, Bonferroni adjustments were made to control for multiple comparisons where appropriate. Bonferroni adjustments correct for the fact that a number of comparisons (g) are being made simultaneously. The adjustment is made by dividing the 0.05 significance level by g comparisons, effectively increasing the critical value necessary for a difference to be statistically significant. This means that comparisons that would have been significant with an unadjusted critical t value of 1.96 may not be significant with the Bonferroni-adjusted critical t value. For example, the Bonferroni-adjusted critical t value for comparisons between any two of the three categories of institutional size is 2.39, rather than 1.96. This means that there must be a larger difference between the estimates being compared for there to be a statistically significant difference when the Bonferroni adjustment is applied than when it is not used.
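The adjusted critical values quoted above can be reproduced with the standard normal quantile (a close large-sample approximation to the t values used in the report). Three size categories yield g = 3 pairwise comparisons:

```python
from statistics import NormalDist

def bonferroni_critical(alpha=0.05, g=1):
    """Two-sided critical value after dividing the significance level
    alpha by the number of simultaneous comparisons g (normal
    approximation to the large-sample t distribution)."""
    return NormalDist().inv_cdf(1 - (alpha / g) / 2)

unadjusted = bonferroni_critical(0.05, g=1)   # about 1.96
adjusted = bonferroni_critical(0.05, g=3)     # about 2.39
```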
The survey was performed under contract with Westat, using the Postsecondary Education Quick Information System (PEQIS). Westat's Project Director was Elizabeth Farris, and the Survey Managers were Laurie Lewis and Tiffany Waits. Bernie Greene was the NCES Project Officer.
The following individuals reviewed this report:
For more information about the Postsecondary Education Quick Information System or the Survey on Distance Education at Higher Education Institutions: 2000–2001, contact Bernie Greene, Early Childhood, International, and Crosscutting Studies Division, National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, 1990 K Street, NW, Washington, DC 20006; e-mail: Bernard.Greene@ed.gov; telephone (202) 502-7348.
12 The estimated number of institutions in the survey universe decreased from the 4,175 institutions on the PEQIS sampling frame to an estimated 4,130 institutions because some of the institutions were determined to be ineligible for PEQIS during panel recruitment and survey data collection.
13 Definitions for level are from the data file documentation for the Integrated Postsecondary Education Data System (IPEDS) Institutional Characteristics file, U.S. Department of Education, National Center for Education Statistics.