Remedial Education at Degree-Granting Postsecondary Institutions in Fall 2000
NCES 2004010
November 2003

Methodology

Postsecondary Education Quick Information System

The Postsecondary Education Quick Information System (PEQIS) was established in 1991 by the National Center for Education Statistics (NCES), U.S. Department of Education. PEQIS is designed to conduct brief surveys of postsecondary institutions or state higher education agencies on postsecondary education topics of national importance. Surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Most PEQIS institutional surveys use a previously recruited, nationally representative panel of institutions. The PEQIS panel was originally selected and recruited in 1991–92. In 1996, the PEQIS panel was reselected to reflect changes in the postsecondary education universe that had occurred since the original panel was selected. A modified Keyfitz approach (Brick, Morganstein, and Wolters 1987) was used to maximize overlap between the 1996 panel and the 1991–92 panel. The sampling frame for the PEQIS panel recruited in 1996 was constructed from the 1995–96 Integrated Postsecondary Education Data System (IPEDS) Institutional Characteristics file.

Institutions eligible for the 1996 PEQIS sampling frame included 2-year and 4-year (including graduate-level) institutions (both institutions of higher education30 and other postsecondary institutions), and less-than-2-year institutions of higher education located in the 50 states and the District of Columbia: a total of 5,353 institutions. The 1996 PEQIS sampling frame was stratified by instructional level (4-year, 2-year, less-than-2-year), control (public, private nonprofit, private for-profit), highest level of offering (doctor's/first professional, master's, bachelor's, less than bachelor's), total enrollment, and status as either an institution of higher education or other postsecondary institution. Within each of the strata, institutions were sorted by region (Northeast, Southeast, Central, West), whether the institution had a relatively high minority enrollment, and whether the institution had research expenditures exceeding $1 million. The sample of 1,669 institutions for the 1996 PEQIS panel was allocated to the strata in proportion to the aggregate square root of total enrollment. Institutions within a stratum were sampled with equal probabilities of selection. The modified Keyfitz approach resulted in 80 percent of the institutions in the 1996 panel overlapping with the 1991–92 panel. Panel recruitment was conducted with the 338 institutions that were not part of the overlap sample. During panel recruitment, 20 institutions were found to be ineligible for PEQIS, primarily because they were either closed or offered only correspondence courses. The final unweighted response rate at the end of PEQIS panel recruitment with the institutions that were not part of the overlap sample was 98 percent (312 of the 318 eligible institutions). There were a total of 1,634 eligible institutions in the entire 1996 panel, because 15 institutions in the overlap sample were determined to be ineligible for various reasons. The final participation rate across the institutions that were selected for the 1996 panel was over 99 percent (1,628 participating institutions out of 1,634 eligible institutions).
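The allocation step described above can be sketched in Python. The strata and enrollment figures below are hypothetical, invented purely for illustration; the actual PEQIS strata combined instructional level, control, highest offering, enrollment, and institution status.

```python
import math

# Hypothetical strata; each entry lists total enrollment per institution.
# These are NOT the actual PEQIS strata, just an illustration of the method.
strata = {
    "public 4-year": [12000, 8000, 30000],
    "private 2-year": [400, 900],
}

total_sample = 1669  # size of the 1996 PEQIS panel

# Aggregate square root of total enrollment within each stratum.
agg_sqrt = {s: sum(math.sqrt(e) for e in insts) for s, insts in strata.items()}
grand_total = sum(agg_sqrt.values())

# Allocate the sample to strata in proportion to those aggregates;
# within a stratum, institutions are then sampled with equal probability.
allocation = {s: round(total_sample * v / grand_total)
              for s, v in agg_sqrt.items()}
```

Strata with larger aggregate square-root enrollment receive proportionally more of the sample, which dampens the dominance of very large institutions relative to allocation by raw enrollment.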

Each institution in the PEQIS panel was asked to identify a campus representative to serve as survey coordinator. The campus representative facilitates data collection by identifying the appropriate respondent for each survey and forwarding the questionnaire to that person.

Sample and Response Rates

The sample for the PEQIS 2000 remedial education survey consisted of all of the 2-year and 4-year higher education institutions in the 1996 PEQIS panel that enrolled freshmen. At the time the PEQIS panels were selected, NCES was defining higher education institutions as institutions accredited at the college level by an agency recognized by the Secretary, U.S. Department of Education (ED). However, ED no longer makes a distinction between higher education institutions and other postsecondary institutions that are eligible to participate in federal Title IV financial aid programs. Thus, NCES no longer categorizes institutions as higher education institutions. Following data collection on the PEQIS 2000 remedial education survey, a poststratification weighting adjustment was conducted. As part of this adjustment, the definition of eligible institutions was changed because of the way NCES now categorizes postsecondary institutions. An institution is now eligible for PEQIS (and for this PEQIS remedial education survey) if it is eligible to award federal Title IV financial aid, and grants degrees at the associate's level or higher. Institutions that are both Title IV-eligible and degree-granting are approximately equivalent to higher education institutions as previously defined. The 1,242 eligible institutions in the survey represent the universe of approximately 3,230 Title IV-eligible, degree-granting institutions that enrolled freshmen in the 50 states and the District of Columbia.31 In early June 2001, questionnaires (see appendix C) were mailed to the PEQIS coordinators at the institutions. Coordinators were told that the survey was designed to be completed by the person at the institution most knowledgeable about the institution's remedial education courses.

Telephone follow-up of nonrespondents was initiated in late June 2001; data collection and clarification were completed in early September 2001. The unweighted survey response rate was 95 percent (1,186 responding institutions divided by the 1,242 eligible institutions in the sample); the weighted survey response rate was 96 percent. Taking into account both nonresponse in the PEQIS panel and survey nonresponse among eligible institutions, the unweighted overall response rate was 95 percent (the 99.6 percent panel recruitment participation rate multiplied by the 95.49 percent survey response rate). The weighted overall response rate was also 95 percent (the 99.7 percent weighted panel recruitment participation rate multiplied by the 95.52 percent weighted survey response rate). Weighted item nonresponse rates ranged from 0 to 1 percent, except for question 5i (percent of entering freshmen enrolled in remedial courses in reading, writing, and mathematics), which had a weighted item nonresponse rate of 3 percent for each of the subject areas. Imputation for item nonresponse was not implemented.
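The overall response rate described above is simply the product of the panel recruitment rate and the survey response rate; a minimal check using the unweighted figures reported in the text:

```python
# Unweighted component rates reported in the text.
panel_recruitment_rate = 0.996      # PEQIS panel recruitment participation
survey_response_rate = 1186 / 1242  # responding / eligible, ~95.49 percent

# The overall rate is the product of the two component rates.
overall_rate = panel_recruitment_rate * survey_response_rate
# ~0.951, i.e., about 95 percent
```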

Comparing the 1995 and 2000 PEQIS Studies: Technical Notes

Several factors must be considered when comparing the 1995 and 2000 PEQIS studies. This section describes the sample for the 1995 study, how it differs from the sample for the 2000 study, and the approach used for comparing findings from the two studies.

The sample for the 1995 study consisted of two-thirds of the 2-year and 4-year higher education institutions in the PEQIS panel selected in 1991–92, which was based on the 1990–91 IPEDS Institutional Characteristics file. Of this sample of 847 institutions, 797 institutions responded, for an unweighted response rate of 94 percent and a weighted response rate of 96 percent. Of the responding institutions, 750 enrolled freshmen. These institutions represented the universe of approximately 3,060 higher education institutions at the 2-year and 4-year level in the 50 states, the District of Columbia, and Puerto Rico that enrolled freshmen.

The sample for the 2000 study, described in the Sample and Response Rates section above, consisted of all of the 2-year and 4-year higher education institutions in the PEQIS panel selected in 1996, which was based on the 1995–96 IPEDS Institutional Characteristics file. The 1996 PEQIS panel was selected in a way that maximized the overlap between the 1991–92 and 1996 panels. However, institutions in Puerto Rico were not included in the 1996 PEQIS panel, as they had been in the 1991–92 PEQIS panel. At the time the 1996 PEQIS panel was selected, NCES was still defining higher education institutions in the same way as it was when the 1991–92 PEQIS panel was selected. However, as part of the poststratification weighting adjustment conducted after data collection on the 2000 study, the definition of eligible institutions was changed because of the way NCES now categorizes postsecondary institutions. An institution is now eligible for PEQIS (and for this PEQIS remedial education survey) if it is eligible to award federal Title IV financial aid, and grants degrees at the associate's level or higher.

In order to make comparisons between the two studies, the data from the 1995 study were reanalyzed with the definition of eligible institutions changed to match the definition for the 2000 study as closely as possible. Information about eligibility to award federal Title IV financial aid was not available for the institutions in the 1995 study. According to NCES, the designation as a higher education institution was the best approximation to Title IV eligibility available for these institutions. Institutions were identified as degree-granting based on level of offering as reported to IPEDS. As a result of the changes in the definition of eligible institutions, a total of 14 institutions were excluded from the data file for the 1995 study: 10 institutions in Puerto Rico, and 4 that were not degree-granting. The analyses for the 1995 study that are presented in this report are based on 736 institutions, representing approximately 2,990 degree-granting higher education institutions in the 50 states and the District of Columbia. In addition, the replicate weights32 for the studies were redefined for variance calculations to reflect the overlap in the 1995 and 2000 samples.

Definition of Institutional Type

Institutional type (public 2-year, private 2-year, public 4-year, private 4-year) was used for analyzing the survey data. Type was created from a combination of level (2-year, 4-year) and control (public, private). Two-year institutions are defined as institutions at which the highest level of offering is at least 2 but less than 4 years (below the baccalaureate degree); 4-year institutions are those at which the highest level of offering is 4 or more years (baccalaureate or higher degree).33 Private comprises private nonprofit and private for-profit institutions; these private institutions are reported together because there are too few private for-profit institutions in the sample for this survey to report them as a separate category.

Sampling and Nonsampling Errors

The survey data were weighted to produce national estimates (see tables A-1 and A-2). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability. The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in data collection. These errors can sometimes bias the data. Nonsampling errors may include such problems as misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.

To minimize the potential for nonsampling errors, the questionnaire was pretested with respondents at institutions like those that completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were extensively reviewed by NCES. Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Data were keyed with 100 percent verification.

Variances

The standard error is a measure of the variability of an estimate due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of institutions reporting that they offered any remedial education courses in reading, writing, or mathematics in fall 2000 is 76.3 percent, and the estimated standard error is 1.5 percent. The 95 percent confidence interval for the statistic extends from [76.3 - (1.5 times 1.96)] to [76.3 + (1.5 times 1.96)], or from 73.4 to 79.2 percent. Tables of standard errors for each table and figure in the report are provided in appendix B.
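The worked example above translates directly into code (a sketch of the standard normal-approximation interval):

```python
# Estimate and standard error from the worked example in the text.
estimate = 76.3  # percent of institutions offering remedial courses
se = 1.5         # estimated standard error

z = 1.96  # critical value for a 95 percent confidence interval

lower = estimate - z * se  # 76.3 - 2.94
upper = estimate + z * se  # 76.3 + 2.94
# Interval: about 73.4 to 79.2 percent
```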

The coefficient of variation (cv) is defined as the ratio of the standard error of an estimate to the estimate itself (Kish 1965). When multiplied by 100, the cv expresses the standard error as a percentage of the quantity being estimated. Thus, the cv can be viewed as a relative standard error. For example, if an estimate of 25,000 has a standard error of 3,300, the corresponding cv is 13.2 percent. In this report, estimates with a cv of 50 percent or greater were flagged to be interpreted with caution.
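The cv computation above, in code form:

```python
# Worked example from the text: estimate 25,000, standard error 3,300.
estimate = 25000
se = 3300

cv = se / estimate * 100      # coefficient of variation, in percent
flag_for_caution = cv >= 50   # report convention: flag cv of 50 percent or more
```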

Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variances of the statistics. To construct the replications, 50 stratified subsamples of the full sample were created and then dropped one at a time to define 50 jackknife replicates. A computer program (WesVar) was used to calculate the estimates of standard errors. WesVar is a stand-alone Windows application that computes sampling errors for a wide variety of statistics (totals, percents, ratios, log-odds ratios, general functions of estimates in tables, linear regression parameters, and logistic regression parameters). The test statistics used in the analysis were calculated using the jackknife variances and thus appropriately reflect the complex nature of the sample design. In addition, Bonferroni adjustments were made to control for multiple comparisons where appropriate. Bonferroni adjustments correct for the fact that a number of comparisons (g) are being made simultaneously. The adjustment is made by dividing the 0.05 significance level by g comparisons, effectively increasing the critical value necessary for a difference to be statistically significant. This means that comparisons that would have been significant with an unadjusted critical t value of 1.96 may not be significant with the Bonferroni adjusted critical t value. For example, the Bonferroni-adjusted critical t value for comparisons between any two of the four categories of institutional type is 2.64, rather than 1.96. This means that there must be a larger difference between the estimates being compared for there to be a statistically significant difference when the Bonferroni adjustment is applied than when it is not used.
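The two procedures above can be sketched as follows. The jackknife variance shown is the standard drop-one-replicate (JK1) form, which is an assumption here; the survey's 50 stratified replicates and the WesVar implementation may differ in detail. The Bonferroni critical value is computed with a normal approximation rather than an exact t distribution.

```python
from itertools import combinations
from statistics import NormalDist

def jackknife_variance(full_estimate, replicate_estimates):
    """Mean square error of replicate estimates around the full-sample
    estimate, scaled with the standard JK1 factor (an assumption here)."""
    r = len(replicate_estimates)
    return (r - 1) / r * sum((t - full_estimate) ** 2
                             for t in replicate_estimates)

# Bonferroni adjustment: divide the 0.05 level by g simultaneous comparisons.
# Comparing any two of the four institutional types gives g = 6 pairs.
types = ["public 2-year", "private 2-year", "public 4-year", "private 4-year"]
g = len(list(combinations(types, 2)))
alpha_adjusted = 0.05 / g  # ~0.0083 per comparison

# Normal approximation to the adjusted two-sided critical value (~2.64,
# matching the value quoted in the text).
critical_value = NormalDist().inv_cdf(1 - alpha_adjusted / 2)
```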

Background Information

The survey was requested by the National Center for Education Statistics of the U.S. Department of Education and performed under contract with Westat. Bernie Greene was the NCES Project Officer. Westat's Project Director was Elizabeth Farris, and the Survey Managers were Laurie Lewis and Basmat Parsad.

This report was reviewed by the following individuals:

Outside NCES

  • Hunter Boylan, Professor and Director, National Center for Developmental Education, Appalachian State University
  • Stephanie Cronen, American Institutes for Research, Education Statistics Services Institute
  • Lawrence Lanahan, American Institutes for Research, Education Statistics Services Institute
  • Robert McCabe, President Emeritus, Miami Dade Community College
  • Jon Oberg, Policy and Program Studies Service
  • Leslie Scott, American Institutes for Research, Education Statistics Services Institute

Inside NCES

  • Nancy Borkow, Postsecondary Education Studies Division
  • Lisa Hudson, Early Childhood, International, and Crosscutting Studies Division
  • Tracy Hunt-White, Postsecondary Education Studies Division
  • William Hussar, Early Childhood, International, and Crosscutting Studies Division
  • Val Plisko, Early Childhood, International, and Crosscutting Studies Division
  • John Ralph, Early Childhood, International, and Crosscutting Studies Division
  • Marilyn Seastrom, Chief Statistician, Statistical Standards Program, Office of the Deputy Commissioner
  • Bruce Taylor, Statistical Standards Program, Office of the Deputy Commissioner
  • John Wirt, Early Childhood, International, and Crosscutting Studies Division

For more information about the Postsecondary Education Quick Information System or the Survey on Remedial Education at Higher Education Institutions, contact Bernie Greene.


30 At the time the 1991–92 and 1996 PEQIS panels were selected, NCES was defining higher education institutions as institutions accredited at the college level by an agency recognized by the Secretary, U.S. Department of Education.

31 Institutions were stratified by instructional level (4-year, 2-year), control (public, private nonprofit, private for-profit), highest level of offering (doctor's/first-professional, master's, bachelor's, less than bachelor's), and total enrollment.

32 Replicate weights are discussed in the section below on variances.

33 Definitions for level are from the data file documentation for the IPEDS Institutional Characteristics file.
