
Statistical Analysis Report:

Enhancing the Quality and Use of Student Outcomes Data: Final Report of the National Postsecondary Education Cooperative Working Group on Student Outcomes From a Data Perspective

September 1997

(NCES 97-992)

Introduction and Background

During the past year, the National Postsecondary Education Cooperative (NPEC) has sponsored two Working Groups on student outcomes, one exploring this area from a policy perspective, the other examining it from a data perspective. The goals of the "Student Outcomes from a Data Perspective" Working Group are to: (1) determine how different audiences think about students and outcomes of different kinds of education and training programs; (2) document data sources and analyze the data collected and maintained to answer questions about student outcomes, including identification of gaps in the current data sources; (3) describe the purpose and effectiveness of current data collection in developing sound policy; and (4) explain how technology advances are affecting the way student data are coordinated and disseminated and how the advances will affect future practices and policies.[1]

The Working Group has used two inter-related strategies to achieve its goals. First, the Working Group consultants conducted case studies of student outcomes data collection in Texas and Virginia. The case studies provided descriptive information about student outcomes data at the state level and established the foundation for discussion and analysis of the trade-offs, benefits, and disadvantages of various approaches to student outcomes data collection, analysis, and dissemination. Second, two Working Group meetings have provided opportunities for participants to share their experiences and knowledge and to move toward consensus in identifying priority concerns and developing recommendations. These discussions have been informed by the case studies and the professional literature on assessment and student outcomes.

This report presents results of the case studies, discusses the strengths and weaknesses of the current "state of the art," and provides Working Group recommendations for enhancing the quality, breadth, and usefulness of outcomes data. This section provides background information about student outcomes and the role of the Working Group. Section II describes the goals, methodology, and results of the case studies, and also discusses the strengths and weaknesses of current student outcomes data based on the case study findings and Working Group discussions. Finally, Section III presents the Working Group's recommendations. Although this report focuses on the accomplishments of the Student Outcomes from a Data Perspective Working Group, the analyses and recommendations extend and complement those of the Student Outcomes from a Policy Perspective Working Group.

There is no single generally accepted definition of "student outcomes." In fact, the definition of "student outcome" must be contextually based: data that represent an "outcome" in one context may serve as a predictor in another. For purposes of this report, the universe of "student outcomes data" is captured in the outcomes taxonomy developed by the NPEC Working Group on Student Outcomes from a Policy Perspective (Terenzini, 1996). Appendix A displays this taxonomy. Our focus is on the outcomes of postsecondary education, or formal training and education beyond the high school level. Postsecondary education as used here refers to training offered by institutions, including proprietary schools, colleges and universities, business and industry, and the military.


Challenges of Student Outcomes Assessment

Although postsecondary institutions and systems have collected, analyzed, and reported student outcomes information for years, the policy significance of student outcomes has increased substantially over the past ten to fifteen years. Outcomes data originally were collected to address research questions about the effects of postsecondary education on students' lives (for example, Feldman and Newcomb, 1969). In the 1980s, outcomes data received new attention from policymakers as a means of motivating and evaluating efforts to improve undergraduate education. Fully 23 states established some type of assessment initiative during this time (Ewell, 1995).

More recently, however, outcomes data have been used to evaluate the productivity and performance of postsecondary institutions and systems. Rapidly rising costs of postsecondary education, coupled with widespread concern that the workforce lacks the skills needed to maintain the nation's economic competitiveness, have stimulated questions about both the effectiveness and efficiency of the sector. Information about student outcomes carries the potential to answer many of these questions. Thus, although the policy and fiscal context of postsecondary education is changing, the demand for outcomes data continues. Over three quarters of the states now require information about student outcomes or institutional performance (Ewell, 1996; Keller, 1996), and outcomes data are now expected to serve a wide range of purposes.

Nonetheless, outcomes information to date is limited in its ability to meet these needs. The technical challenges associated with assessing student outcomes are significant and range from the difficulty of developing valid and reliable measures of higher order cognitive skills to the problems of inferring causality from correlational data. Assessing student outcomes is also a political process, and challenges in this domain include the problems of imposing "unfunded mandates" on systems or institutions, campus resistance due to perceived links between assessment and downsizing or cost-cutting, and the range of interpretations available for any set of observed outcomes (Steele and Lutz, 1995). Disagreement among educators and policymakers about the purposes and goals of higher education further stymies assessment efforts, especially in a policy context. As a result of these challenges, a number of observers have pointed out that existing student outcomes data systems are unable to answer basic questions about what students learn in college and whether they possess the skills and abilities needed by the labor market (Terenzini, 1996).

Yet another challenge to using outcomes information to measure institutional performance is the substantial variation in available data across institutions and systems. For example, only seven states use common measures to assess student learning, and these measures have little in common from one state to the next (Karelis, 1996). Three states--Florida, Texas, and Georgia--maintain competency testing programs, and four others--Tennessee, Wisconsin, South Dakota, and Arkansas--test all students or samples of students in at least one general education skill area. Most states require or encourage institutions to develop their own outcomes measures (Ewell, 1995). The proliferation of outcomes measures is observed within as well as between campuses: an ACT survey found that only one third of institutions reported common measures, and most of these were placement exams (Steele and Lutz, 1995). Even the outcomes measures that appear pervasive in postsecondary education, such as retention and program completion rates, are calculated and reported differently across institutions and systems. This lack of standardization increases the difficulty of drawing meaningful comparisons among institutions, systems, and states and hence reduces the applicability of outcomes data to policymaking and evaluation.

If student outcomes information is to fulfill its potential for informing public policy, stronger data systems are needed. Fortunately, there are signs of support for this goal. For example, Steele and Lutz (1995) report that 82 percent of state boards responding to an ACT survey support the use of common outcomes measures across institutions. In addition, Russell (1995) traces the development of statewide higher education data systems. Today, 32 states have comprehensive statewide databases and another nine have more limited databases. These databases provide a strong foundation for assessing student outcomes and informing postsecondary policy and program development. New initiatives such as inter-organizational collaboration on the development of a postsecondary student data handbook show progress toward standardization of data elements in these databases (AACRAO, 1996).


Objectives of the Student Outcomes from a Data Perspective Working Group

The Student Outcomes From a Data Perspective Working Group has sought to identify strategies and recommendations for improving the quality of outcomes data. Rather than consider the ideal student outcomes system without regard for current resources and constraints, the group began with a description of current data systems. It then considered the strengths and limitations of these data systems and how they could be improved, given the perspective of current practitioners (both providers and users of data).

Given limited time and resources, the Working Group needed to make choices about which aspects of student outcomes data to address. Each choice necessarily involves trade-offs. The following decisions emerged from early Working Group meetings and deliberations:


First, the Working Group focused on unit record level student outcomes databases, largely because the vast majority of postsecondary institutions, systems, and states now maintain some form of student database. These databases typically include information about a population (for example, all enrolled students or all graduating students) rather than a sample, and most of the information is drawn from official records, such as applications or transcripts. This process of record extraction is often more cost effective than other means of data collection, such as surveys or interviews. Because data are compiled in a standardized format over time and across programs or institutions, these databases enable both longitudinal and cross-sectional comparisons.
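As an illustration only, the following sketch (in Python, using the pandas library) shows how a unit record extract of this kind might support the longitudinal and cross-sectional comparisons described above. The file layout and column names (student_id, cohort_year, returned_year_2, completed_within_6) are hypothetical and are not drawn from the report or from any state system.

    # Hypothetical unit record extract: one row per student, drawn from
    # official records such as applications and transcripts.
    import pandas as pd

    records = pd.DataFrame({
        "student_id":         [1, 2, 3, 4, 5, 6],
        "cohort_year":        [1990, 1990, 1990, 1991, 1991, 1991],
        "returned_year_2":    [True, True, False, True, False, True],
        "completed_within_6": [True, False, False, True, True, False],
    })

    # Because each cohort is stored in the same standardized format, the same
    # calculation yields cross-sectional (per-cohort) and longitudinal
    # (across-cohort) retention and completion rates.
    rates = records.groupby("cohort_year")[["returned_year_2", "completed_within_6"]].mean()
    print(rates)  # share retained to year 2 and completing within six years, by cohort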

In making this choice, the Group did not intend to imply that unit record databases are sufficient as a source of information about outcomes. Many other types of information, such as reflective essays by alumni or satisfaction ratings by employers, add considerable depth and richness to the study of student outcomes. In addition, the Working Group recognizes that aggregate data are useful and appropriate under many circumstances. It leaves the task of analyzing the trade-offs between the two approaches to the Unit Record vs. Aggregate Data Working Group. The Working Group further acknowledges that while data providers and users must be concerned with the issues involved in the use of unit record data, these data are often the primary source of information on student outcomes for policy making.

Second, the Working Group wanted to review efforts to expand outcomes information beyond that which is available in student records alone (for example, retention and graduation). This led to the question of how various information sources can be linked to provide a more comprehensive view of outcomes, even if existing information sources will not address all important outcomes. Given the current policy emphasis on workforce development and the availability of unit records on employment through unemployment insurance files, the Group was especially interested in describing and assessing efforts to link educational and occupational data.

In making this choice, the Working Group did not intend to establish an a priori recommendation that all institutions should link their student records to occupational records. Members were in fact divided on the usefulness of such information and recognized that institutional mission and goals should shape student outcomes data collection and analysis. Instead, the choice to study linkages between educational and occupational data (via unemployment insurance files) provides a good example of the potential benefits and problems associated with linking files as a means of expanding outcomes information.

Third, given limited resources, the Working Group further chose to direct its efforts toward the student outcomes data used to inform state-level policy and decisionmaking. The primary reason for focusing on the state as the unit of analysis is that the greatest pressures for information about student outcomes are coming from state governments, which are increasingly preoccupied with issues of productivity and performance in postsecondary education (Ewell, 1995). The outcomes information of most relevance to state policy is drawn from state-level student databases, which therefore became a special focus of the Working Group's activities. The widespread presence of these databases, their ongoing use for various policy analyses, and the inclusion of standardized data from multiple institutions provide a relatively strong base on which to build. Additionally, lessons learned from an analysis of state-level data and policymaking may be applicable to other settings, including institutions, multi-state coalitions, and the federal government.

It is important to note that the Working Group's choice to study state-level unit record databases is not a de facto endorsement of these databases. Rather, state-level databases were selected because they are the most representative and most developed student outcomes data systems, with sufficiently broad characteristics and capacities to support discussion and generalizable observations. Using the state system as the unit of analysis also gave the researchers access to individual institutions, both public and private, two-year and four-year. Thus, in focusing on state-level information and data, the Working Group in no way intended to assign the highest priority to public institutions or to neglect independent institutions. Representatives of independent institutions and coalitions of independent institutions were included in data collection and received special attention in analysis. For example, the case studies included interviews with representatives of private colleges to determine the perceived costs and benefits to the independent sector of cooperative efforts with other schools (both in and out of the state) to standardize and share outcomes information.

Fourth, the Working Group has been especially interested in efforts within several states to link educational data to occupational data, such as unemployment insurance wage record files. In so doing, these states can measure students' occupational outcomes in a more comprehensive and cost effective manner than previously possible. These and other linkages represent important innovations in student outcomes data collection and analysis. At the same time, these linkages pose complex technical, logistical, and political challenges. By reviewing the experiences of states that have been pioneers in linking student and occupational data, the Working Group hoped to assist others in effective planning, implementation, and use of linked or integrated databases.
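As an illustration only, the following sketch (again in Python with pandas) shows the kind of file linkage described above: matching a hypothetical extract of program completers to unemployment insurance (UI) wage records on a shared identifier. The field names (person_id, grad_year, program, quarter, wages) are invented for the example and are not drawn from any state's system.

    import pandas as pd

    # Hypothetical extract of program completers from a student unit record system.
    completions = pd.DataFrame({
        "person_id": ["A", "B", "C"],
        "grad_year": [1995, 1995, 1996],
        "program":   ["Nursing", "Accounting", "Nursing"],
    })

    # Hypothetical UI wage record file: one row per person per quarter of covered employment.
    ui_wages = pd.DataFrame({
        "person_id": ["A", "A", "B", "D"],
        "quarter":   ["1996Q1", "1996Q2", "1996Q1", "1996Q1"],
        "wages":     [6200, 6400, 7100, 5000],
    })

    # A left join keeps every graduate; rows with no wage match flag graduates not
    # found in the in-state UI files (for example, those employed out of state).
    linked = completions.merge(ui_wages, on="person_id", how="left")

    # Average reported quarterly wages by program, one occupational outcome such
    # linkages are used to produce.
    print(linked.groupby("program")["wages"].mean())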

This report summarizes the activities and conclusions of the Working Group. Major activities include: (a) conducting case studies of student outcomes data collection in two states; (b) analyzing the strengths and weaknesses of existing student outcomes data systems; and (c) formulating recommendations for strengthening student outcomes data systems.


FOOTNOTE:

[1] To date, the Working Group has focused primarily on the first three goals.


Download/view the full report in a PDF file (182K).


For more information about the content of this report, contact Nancy Borkow at Nancy.Borkow@ed.gov.