Dual Enrollment of High School Students at Postsecondary Institutions: 2002-03
NCES 2005008
April 2005

Technical Notes

Postsecondary Education Quick Information System

The Postsecondary Education Quick Information System (PEQIS) was established in 1991 by the National Center for Education Statistics (NCES), U.S. Department of Education (ED). PEQIS is designed to conduct brief surveys of postsecondary institutions or state higher education agencies on postsecondary education topics of national importance. Surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Most PEQIS institutional surveys use a previously recruited, nationally representative panel of institutions. The PEQIS panel was originally selected and recruited in 1991–92. In 1996, the PEQIS panel was reselected to reflect changes in the postsecondary education universe that had occurred since the original panel was selected. A modified Keyfitz approach was used to maximize overlap between the panels; this resulted in 80 percent of the institutions in the 1996 panel overlapping with the 1991–92 panel. The PEQIS panel was reselected again in 2002. A modified Keyfitz approach was used to maximize the overlap between the 1996 and 2002 samples; 81 percent of the institutions overlapped between these two panels.

At the time the 1991–92 and 1996 PEQIS panels were selected, NCES defined higher education institutions as institutions accredited at the college level by an agency recognized by the Secretary of the U.S. Department of Education. However, ED no longer makes a distinction between higher education institutions and other postsecondary institutions that are eligible to participate in federal financial aid programs. Thus, NCES no longer categorizes institutions as higher education institutions. Instead, NCES now categorizes institutions on the basis of whether the institution is eligible to award federal Title IV financial aid and whether the institution grants degrees at the associate's level or higher. Institutions that are both Title IV-eligible and degree-granting are approximately equivalent to higher education institutions as previously defined. It is this subset of postsecondary institutions (Title IV-eligible and degree-granting) that is included in the 2002 PEQIS sampling frame.

The sampling frame for the 2002 PEQIS panel was constructed from the 2000 Integrated Postsecondary Education Data System (IPEDS) Institutional Characteristics file. Institutions eligible for the 2002 PEQIS frame included 2-year and 4-year (including graduate-level) institutions that are both Title IV-eligible and degree-granting and are located in the 50 states and the District of Columbia: a total of 4,175 institutions. The 2002 PEQIS sampling frame was stratified by instructional level (4-year, 2-year), control (public, private nonprofit, private for-profit), highest level of offering (doctor's/first-professional, master's, bachelor's, less than bachelor's), and total enrollment. Within each of the strata, institutions were sorted by region (Northeast, Southeast, Central, West) and by whether the institution had a relatively high minority enrollment. The sample of 1,610 institutions was allocated to the strata in proportion to the aggregate square root of total enrollment. Institutions within a stratum were sampled with equal probabilities of selection. The modified Keyfitz approach resulted in 81 percent of the institutions in the 2002 panel overlapping with the 1996 panel. Panel recruitment was conducted with the 300 institutions that were not part of the overlap sample. During panel recruitment, 6 institutions were found to be ineligible for PEQIS. The final unweighted response rate at the end of PEQIS panel recruitment with the institutions that were not part of the overlap sample was 97 percent (285 of the 294 eligible institutions). There were 1,600 eligible institutions in the entire 2002 panel, because 4 institutions in the overlap sample were determined to be ineligible for various reasons. The final unweighted participation rate across the institutions that were selected for the 2002 panel was 99 percent (1,591 participating institutions out of 1,600 eligible institutions). The weighted panel participation rate was also 99 percent.
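The allocation rule described above is straightforward to express in code. The following is a minimal sketch using made-up strata and enrollment figures rather than the actual PEQIS frame; the stratum labels and sample size are stand-ins only:

```python
import math

# Hypothetical frame: total enrollments of the institutions in each stratum.
strata = {
    "public 4-year":  [12000, 8500, 30000, 22000],
    "private 4-year": [2000, 4500, 1200, 800],
    "public 2-year":  [9000, 15000, 6000, 3500],
}
SAMPLE_SIZE = 30  # stand-in for the survey's 1,610

# Aggregate square root of total enrollment within each stratum.
weights = {s: sum(math.sqrt(e) for e in enrollments)
           for s, enrollments in strata.items()}
total_weight = sum(weights.values())

# Allocate the sample in proportion to those aggregates; a production
# design would also control rounding and enforce minimum stratum sizes.
allocation = {s: round(SAMPLE_SIZE * w / total_weight)
              for s, w in weights.items()}
print(allocation)
```

Within each stratum, institutions would then be drawn with equal probability, as the text describes.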

Each institution in the PEQIS panel was asked to identify a campus representative to serve as survey coordinator. The campus representative facilitates data collection by identifying the appropriate respondent for each survey and forwarding the questionnaire to that person. Data are weighted to produce national estimates, and the sample size allows for limited breakouts by classification variables. However, as the number of categories within the classification variables increases, the sample size within categories decreases, which results in larger sampling errors for the breakouts by classification variables.

Sample Selection and Response Rates

The sample for the survey consisted of all of the institutions in the 2002 PEQIS panel. In February 2004, questionnaires (see appendix B) were mailed to the PEQIS coordinators at the institutions. Coordinators were told that the survey was designed to be completed by the person at the institution most knowledgeable about the institution's dual enrollment programs and courses. Respondents had the option of completing the survey online. Telephone follow-up of nonrespondents was initiated in mid-March 2004; data collection and clarification were completed in June 2004. Before and during data collection for the PEQIS dual enrollment survey, 23 institutions were determined to be ineligible for the panel. For the eligible institutions, an unweighted response rate of 92 percent (1,461 responding institutions divided by the 1,587 eligible institutions in the sample for this survey) was obtained. The weighted response rate for this survey was 93 percent. The unweighted overall response rate was 91 percent (the 99.4 percent panel participation rate multiplied by the 92 percent survey response rate). The weighted overall response rate was 92 percent (the 99.3 percent weighted panel participation rate multiplied by the 93 percent weighted survey response rate). Of the institutions that completed the survey, 51 percent completed it online, 32 percent completed it by mail, 9 percent completed it by fax, and 8 percent completed it by telephone. Following data collection on the PEQIS 2004 dual enrollment survey, a poststratification weighting adjustment was made to the totals in the 2002 IPEDS Institutional Characteristics file. The weighted number of eligible institutions in the survey represents the estimated universe of approximately 4,240 Title IV-eligible degree-granting institutions in the 50 states and the District of Columbia (see table A-1).
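As a quick arithmetic check, the unweighted rates reported above can be reproduced directly from the counts given in this section:

```python
# Reproduce the unweighted rates from the counts reported above.
panel_rate = 1591 / 1600    # panel participation
survey_rate = 1461 / 1587   # survey response
overall_rate = panel_rate * survey_rate

print(f"panel participation: {panel_rate:.1%}")   # 99.4%
print(f"survey response:     {survey_rate:.1%}")  # 92.1%
print(f"overall response:    {overall_rate:.1%}") # 91.5%, reported as 91 percent
```

The weighted versions of these rates follow the same multiplication, applied to the weighted counts.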

Imputation for Item Nonresponse

Weighted item nonresponse rates ranged from 0 to 2 percent for all items. Although item nonresponse for key items was very low, data were imputed for the 20 questionnaire items with any missing data; these items are listed in table A-2. The missing items included both numerical data, such as counts of students in dual enrollment programs or outside of programs, and categorical data, such as which sources paid tuition for courses taken by high school students in dual enrollment programs. The missing data were imputed using a "hot-deck" approach, in which a "donor" institution supplies the values from which the imputed values are derived. Under this approach, a donor institution that matched selected characteristics of the institution with missing data (the recipient institution) was identified. The matching characteristics included PEQIS stratum (defined by sector, highest level of offering, and enrollment size) and whether the institution offered courses inside or outside of a dual enrollment program. For categorical items, the imputed value was simply the corresponding value from the donor institution. For numerical items, the imputed value was calculated by dividing the donor's response for that item (e.g., enrollment in dual enrollment programs) by the donor's total enrollment, then multiplying that ratio by the recipient institution's total enrollment. All missing items for a given institution were imputed from the same donor whenever possible.
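The following is a minimal sketch of the hot-deck logic described above. The field names, matching keys, and data are simplified stand-ins, not the survey's actual variables or processing code:

```python
def impute_from_donor(recipient, donors):
    """Fill the recipient's missing items (None values) from one donor."""
    # Locate a donor matching the recipient on the hot-deck cells:
    # PEQIS stratum and whether dual enrollment courses are offered.
    donor = next(d for d in donors
                 if d["stratum"] == recipient["stratum"]
                 and d["offers_dual"] == recipient["offers_dual"])
    for item, value in recipient.items():
        if value is not None:
            continue
        if item == "dual_enrollment_count":
            # Numerical item: take the donor's ratio of the item to total
            # enrollment, then scale by the recipient's total enrollment.
            ratio = donor[item] / donor["total_enrollment"]
            recipient[item] = round(ratio * recipient["total_enrollment"])
        else:
            # Categorical item: copy the donor's value directly.
            recipient[item] = donor[item]
    return recipient

donors = [{"stratum": "public 2-year", "offers_dual": True,
           "total_enrollment": 8000, "dual_enrollment_count": 400,
           "tuition_source": "state"}]
recipient = {"stratum": "public 2-year", "offers_dual": True,
             "total_enrollment": 4000, "dual_enrollment_count": None,
             "tuition_source": None}
print(impute_from_donor(recipient, donors))
# dual_enrollment_count imputed as 200 (ratio 400/8000 applied to 4,000);
# tuition_source copied as "state"
```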

Data Reliability

While the "Dual Enrollment Programs and Courses for High School Students" survey was designed to account for sampling error and to minimize nonsampling error, estimates produced from the data collected are subject to both types of error. Sampling error occurs because the data are collected from a sample rather than a census of the population, and nonsampling errors are errors made during the collection and processing of the data.

Sampling Errors

The responses were weighted to produce national estimates (see table A-1). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability. General sampling theory was used to estimate the sampling variability of the estimates and to test for statistically significant differences between estimates. The standard error is a measure of the variability of an estimate due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of Title IV-eligible degree-granting institutions with any dual enrollment is 56.9 percent and the standard error is 1.1 percent (see tables 1 and 1a). The 95 percent confidence interval for the statistic extends from [56.9 – (1.1 x 1.96)] to [56.9 + (1.1 x 1.96)], or from 54.7 to 59.1 percent. The 1.96 is the critical value for a statistical test at the 0.05 significance level (where 0.05 indicates the 5 percent of all possible samples that would be outside the range of the confidence interval).
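The interval in this example can be verified with a few lines of arithmetic:

```python
# Check of the 95 percent confidence interval computed above.
estimate = 56.9  # percent of institutions with any dual enrollment
se = 1.1         # standard error of the estimate
z = 1.96         # critical value at the 0.05 significance level

lower = estimate - z * se
upper = estimate + z * se
print(f"95% CI: {lower:.1f} to {upper:.1f} percent")  # 54.7 to 59.1 percent
```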

Because the data from the PEQIS dual enrollment programs and courses survey were collected using a complex sampling design, the variances of the estimates from this survey (e.g., estimates of proportions) are typically different from what would be expected from data collected with a simple random sample. Not taking the complex sample design into account can lead to an underestimation of the standard errors associated with such estimates. To generate accurate standard errors for the estimates in this report, standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variance of the statistic. To construct the replications, 50 stratified subsamples of the full sample were created and then dropped 1 at a time to define 50 jackknife replicates. A computer program (WesVar) was used to calculate the estimates of standard errors. WesVar is a stand-alone Windows application that computes sampling errors from complex samples for a wide variety of statistics (totals, percents, ratios, log-odds ratios, general functions of estimates in tables, linear regression parameters, and logistic regression parameters).
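A minimal sketch of this drop-one-group jackknife on a toy sample follows. The grouping here is naive, and the (R-1)/R scaling is the standard delete-one-group (JK1) factor, which the "mean square error" description above does not spell out; the actual PEQIS replicates are stratified and use adjusted replicate weights computed in WesVar:

```python
import random

random.seed(1)
# Toy sample: 1,000 zero/one outcomes (e.g., "offers dual enrollment").
sample = [1 if random.random() < 0.57 else 0 for _ in range(1000)]
R = 50  # number of jackknife groups/replicates

groups = [sample[r::R] for r in range(R)]  # naive systematic grouping
full_estimate = sum(sample) / len(sample)

# Drop one group at a time and re-estimate on the remaining units.
replicate_estimates = []
for r in range(R):
    kept = [x for g in range(R) if g != r for x in groups[g]]
    replicate_estimates.append(sum(kept) / len(kept))

# JK1 variance: scaled mean square error of the replicate estimates
# around the full-sample estimate.
variance = (R - 1) / R * sum((est - full_estimate) ** 2
                             for est in replicate_estimates)
print(f"estimate = {full_estimate:.3f}, jackknife SE = {variance ** 0.5:.4f}")
```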

Nonsampling Errors

Nonsampling error is the term used to describe variations in the estimates that may be caused by population coverage limitations and data collection, processing, and reporting procedures. The sources of nonsampling error are typically problems such as unit and item nonresponse, differences in respondents' interpretations of the meaning of questions, response differences related to the particular time the survey was conducted, and mistakes made during data preparation. It is difficult to identify and estimate either the amount of nonsampling error or the bias caused by this error. To minimize the potential for nonsampling error, this study used a variety of procedures, including a pretest of the questionnaire with individuals at postsecondary institutions deemed to be the most knowledgeable about the dual enrollment programs and courses at their institutions. The pretest provided the opportunity to check for consistency of interpretation of questions and definitions and to eliminate ambiguous items. The questionnaire and instructions were also extensively reviewed by NCES and the data requestor at the Office of Vocational and Adult Education. In addition, manual and machine editing of the questionnaire responses was conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone to resolve problems. Data were keyed with 100 percent verification for surveys received by mail, fax, or telephone.

Definitions of Analysis Variables

  • Institution type: public 2-year, private 2-year, public 4-year, private 4-year. Type was created from a combination of level (2-year, 4-year) and control (public, private). Two-year institutions are defined as institutions at which the highest level of offering is at least 2 but less than 4 years (below the baccalaureate degree); 4-year institutions are those at which the highest level of offering is 4 or more years (baccalaureate or higher degree). Private comprises private nonprofit and private for-profit institutions; these private institutions are reported together because there are too few private for-profit institutions in the sample for this survey to report them as a separate category. (The derivation of both analysis variables is sketched in the example following this list.)
  • Size of institution: less than 3,000 students; 3,000 to 9,999 students; and 10,000 or more students.
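The following is a minimal sketch of how these two variables could be derived from an institution's level, control, and enrollment; the function and field names are illustrative, not the survey's actual variables or processing code:

```python
def institution_type(level: str, control: str) -> str:
    """Combine level and control into the four-category type variable.

    level: "2-year" or "4-year"; control: "public", "private nonprofit",
    or "private for-profit". The two private controls are reported together.
    """
    sector = "public" if control == "public" else "private"
    return f"{sector} {level}"

def size_category(enrollment: int) -> str:
    """Map total enrollment to the three size-of-institution categories."""
    if enrollment < 3000:
        return "less than 3,000 students"
    if enrollment < 10000:
        return "3,000 to 9,999 students"
    return "10,000 or more students"

print(institution_type("2-year", "private for-profit"))  # private 2-year
print(size_category(4500))  # 3,000 to 9,999 students
```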

For more information about the Postsecondary Education Quick Information System or the Survey on Dual Enrollment Programs and Courses for High School Students, contact Bernie Greene, Early Childhood, International, and Crosscutting Studies Division, National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, 1990 K Street, NW, Washington, DC 20006; e-mail: Bernard.Greene@ed.gov; telephone (202) 502-7348.
