Did higher education institutions experience cuts in their operating budgets during the fiscal year (after the budget was initially approved) from fiscal years 1990 to 1993, and what were the reasons for any such cuts? Have institutions increased or decreased key academic offerings and student services since 1989-90, and what are the reasons for such increases or decreases? How do the responses to these items vary by institutional control? Information to answer these questions is reported in the National Center for Education Statistics' Survey on Higher Education Finances and Services, conducted in 1993 through the Postsecondary Education Quick Information System (PEQIS).
A great deal was being written between 1991 and 1993 about the fiscal crisis in higher education. Articles appearing in such publications as The Washington Post, The Chronicle of Higher Education, and Science discussed the budgetary woes of colleges and universities. Reports such as those issued by the American Association of State Colleges and Universities Council of State Representatives, the American Council on Education, and by individual institutions and states provided some general information about the financial situation for higher education. Among the issues discussed in articles and reports were mid-year budget cuts (particularly at public institutions) and changes in academic offerings and student services as approaches to dealing with the changing financial climate at colleges and universities. However, nationally representative, institution-level information was lacking. This survey was conducted to provide that information.
About a third of all institutions had cuts in their operating budgets during the fiscal year (after the budget was initially approved) for fiscal years 1991 through 1993 (Table 1). This is a substantial increase over fiscal year 1990, when 17 percent of institutions had cuts in their operating budgets during the year. There was substantial variation by institutional control, with a greater proportion of public than private nonprofit institutions experiencing budget cuts during each fiscal year. For public institutions, the proportion of institutions with budget cuts during the year ranged from 27 percent in fiscal year 1990 to 55 percent in fiscal year 1992; for private nonprofit institutions, the proportions ranged from 7 percent in fiscal year 1990 to 27 percent in fiscal year 1993. The major reason for budget cuts also differed by institutional control. In each fiscal year, the major reason for cuts given by about 9 out of 10 of the public institutions that had experienced cuts was rescissions in state or local appropriations. For private nonprofit institutions, the most frequently selected reason for cuts in each fiscal year was tuition and fees shortfall, selected by 56 to 67 percent of the institutions with budget cuts in fiscal years 1991 through 1993.
Most institutions reported that class size had stayed about the same since 1989-90 for introductory courses (65 percent) and advanced courses (77 percent; figure 1). Increases in class size for introductory courses were reported by 29 percent of institutions; 19 percent reported increases in class size for advanced courses. Public institutions were more likely than private nonprofit institutions to have increased class size in introductory courses (Table 2).
Few institutions (14 percent) reported decreases in the number of courses or sections offered (figure 1). Instead, institutions tended to report that they either increased the number of courses or sections offered (47 percent) or that there had been no net change in the number offered (39 percent). The number of academic departments and the number of academic programs were reported to have stayed about the same at 77 percent and 56 percent of institutions, respectively; only 7 and 11 percent of institutions reported decreases in the number of departments and programs (figure 1). There were few differences by institutional control. Private nonprofit institutions were more likely than public institutions to have increased the number of academic programs (Table 2).
The major reasons for increases in introductory and advanced class size (among institutions that reported such increases) were budgetary reasons and "other reasons"1 for public institutions (Table 3), and "other reasons" for private nonprofit institutions (Table 4). For both public and private nonprofit institutions, institutional policy and "other reasons" were reported most frequently as the reasons for increases in the other academic offerings (i.e., courses or sections offered, academic departments, and academic programs).
Public institutions that had decreases in the specific academic offerings cited budgetary reasons as the major reason for decreases in the number of courses or sections offered, number of academic departments, and number of academic programs (Table 5). There were too few cases for a reliable estimate for public institutions for class size in introductory and advanced courses and for all academic offerings for private nonprofit institutions.
Very few institutions (between 4 and 7 percent) reported decreases in their key student services since academic year 1989-90 (figure 2); half to three quarters of the institutions indicated that there had been no net changes in student services since 1989-90. Where changes had occurred, they were likely to be increases rather than decreases. About a fifth of institutions reported increases in student health services and library operating hours, about a third reported increases in student personal counseling services and career guidance and job placement services, and 39 percent said they had increases in student academic tutoring.
There were few differences by institutional control. Public institutions were more likely than private nonprofit institutions to have increased student academic tutoring, and were more likely to have decreased library operating hours (Table 6).
For both public and private nonprofit institutions, the major reasons for increases in student services (among institutions that reported such increases) were institutional policy and "other reasons" (Table 8). Few institutions reported that budgetary reasons or state/local policy were the reasons for increases in student services, except for student health services, where 8 percent of public and 15 percent of private nonprofit institutions that had increases reported that state/local policy was the major reason for increases in this service.
Public institutions that had decreases in the specific student services cited budgetary reasons as the major reason for decreases in career guidance and job placement services, student personal counseling services, and library operating hours (Table 9). There were too few cases for a reliable estimate for public institutions for student academic tutoring and student health services, and for all services for private nonprofit institutions.
The Survey on Higher Education Finances and Services was conducted in spring 1993 by the National Center for Education Statistics using the Postsecondary Education Quick Information System (PEQIS). PEQIS is designed to collect limited amounts of policy-relevant information quickly from a previously recruited, nationally representative, stratified sample of postsecondary institutions. PEQIS surveys are generally limited to 2 to 3 pages of questions, with a response burden of 30 minutes per respondent. The survey was mailed to the PEQIS survey coordinators at 787 2-year and 4-year public and private nonprofit higher education institutions. Completed questionnaires were received from 711 of the 780 eligible institutions,2 for an unweighted survey response rate of 91 percent (the weighted survey response rate is 90 percent). All estimates for the 1990, 1991, 1992, and 1993 fiscal years are based on data reported by the institutions in spring 1993.
The sample size and pattern of results did not allow for in-depth analyses of many aspects of the data.
The response data were weighted to produce national estimates. The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability. The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of institutions that had cuts in their operating budget during fiscal year 1991 is 33 percent, and the estimated standard error is 1.5 percent. The 95 percent confidence interval for the statistic extends from [33 - (1.5 times 1.96)] to [33 + (1.5 times 1.96)], or from 30.1 to 35.9 percent. Estimates of standard errors for this report were computed using a jackknife replication method. Standard errors for all of the estimates are presented in the tables, including Table 10, which provides standard errors for the estimates in the figures. All specific statements of comparison made in this report have been tested for statistical significance through chi-square tests and t-tests adjusted for multiple comparisons using the Bonferroni adjustment, and they are significant at the 95 percent confidence level or better.
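The confidence interval arithmetic described above can be sketched in a few lines of Python. The function below is illustrative only and is not the survey's actual estimation code (the report's standard errors were computed by jackknife replication); it simply shows the plus-or-minus 1.96 standard errors calculation, using the fiscal year 1991 figures from the example (estimate of 33 percent, standard error of 1.5 percent).

```python
def confidence_interval_95(estimate, standard_error):
    """Return the (lower, upper) bounds of a 95 percent confidence
    interval: the estimate plus or minus 1.96 standard errors."""
    margin = 1.96 * standard_error
    return estimate - margin, estimate + margin

# Fiscal year 1991 example from the report: 33 percent of institutions
# had mid-year budget cuts, with an estimated standard error of 1.5.
lower, upper = confidence_interval_95(33.0, 1.5)
print(round(lower, 1), round(upper, 1))  # 30.1 35.9
```

Note that 1.96 is the standard normal critical value for a two-sided 95 percent interval; a different confidence level would substitute a different multiplier.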
The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in collection or processing of data. These errors can sometimes bias the data. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure. To minimize the potential for nonsampling errors, the questionnaire was pretested with respondents at institutions like those that completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics. Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Data were keyed with 100 percent verification.
This report was reviewed by the following individuals: