
*Nontraditional Undergraduates / Appendix B*

The need for a nationally representative database on postsecondary student financial aid prompted the U.S. Department of Education to conduct the National Postsecondary Student Aid Study (NPSAS), a survey conducted every three years beginning in 1987. The NPSAS sample was designed to include students enrolled in all types of postsecondary education. Thus, it included students enrolled in public institutions; private, not-for-profit institutions; and private, for-profit institutions. The sample included students at 4-year and 2-year institutions, as well as students enrolled in occupationally specific programs that lasted for less than 2 years. United States service academies were not included in the institution sample because of their unique funding and tuition base, and certain other types of institutions were also excluded.[34]

[34] Other excluded institutions were those offering only avocational, recreational, or remedial courses; those offering only in-house business courses; those offering only programs of less than 3 months' duration; and those offering only correspondence courses.

NPSAS surveys include a stratified sample of approximately 50,000 students (about 90 percent of whom were undergraduates) from about 1,100 institutions. Students were included in the samples if they attended an NPSAS-eligible institution; were enrolled on October 15, 1986 for the NPSAS:87 survey, or between July 1 and June 30 of the academic year of the survey for the NPSAS:90 and NPSAS:93 surveys; and were enrolled in one or more courses or programs, including courses for credit, a degree or formal award program of at least 3 months' duration, or an occupationally or vocationally specific program of at least 3 months' duration. Regardless of their postsecondary status, however, students who were also enrolled in high school were excluded. NPSAS:87 differed from NPSAS:90 and NPSAS:93 in that its sample represented postsecondary students enrolled in the fall term only; the subsequent surveys represent students enrolled in all terms.

The NPSAS survey samples, while representative and statistically accurate, are not simple random samples. Instead, the samples are selected using a more complex three-step procedure with stratified samples and differential probabilities of selection at each level. First, postsecondary institutions are selected within geographical strata. Once institutions are organized by zip code and state, they are further stratified by control (i.e., public; private, not-for-profit; or private, for-profit) and offering (less-than-2-year, 2-year, 4-year nondoctorate-granting, and 4-year doctorate-granting). Sampling rates for students enrolled at different institutions and levels (undergraduate or other) vary, resulting in better data for policy purposes, but at a cost to statistical efficiency.
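The idea of stratified selection with differential probabilities can be illustrated with a short sketch. All institution counts, strata names, and sampling rates below are invented for illustration; they are not the actual NPSAS design parameters.

```python
import random

random.seed(0)  # deterministic for illustration

# Hypothetical frame: each institution belongs to a stratum defined by
# control and offering (strata names invented).
institutions = [
    {"id": i, "stratum": random.choice(["public-4yr", "private-2yr", "for-profit-lt2yr"])}
    for i in range(1000)
]

# Assumed per-stratum sampling rates -- the "differential probabilities
# of selection" described above (values invented).
rates = {"public-4yr": 0.20, "private-2yr": 0.10, "for-profit-lt2yr": 0.05}

sample = [inst for inst in institutions if random.random() < rates[inst["stratum"]]]

# The design weight is the inverse of the selection probability, so that
# weighted totals estimate population totals despite unequal sampling rates.
for inst in sample:
    inst["weight"] = 1.0 / rates[inst["stratum"]]
```

Oversampling some strata (a higher rate) buys more precision for policy-relevant subgroups, while the unequal weights reduce overall statistical efficiency, as the text notes.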

For each student in the NPSAS sample, there are up to three sources of data. First, institution registration and financial aid records are extracted. Second, a Computer Assisted Telephone Interview (CATI) is conducted with each student.[35] Finally, a CATI designed for the parents or guardians of a subsample of students is conducted. Data from these three sources are synthesized into a single system with overall response rates of about 67 percent, 89 percent, and 85 percent, respectively, for NPSAS:87, NPSAS:90, and NPSAS:93.

[35] The CATI system was first used in 1989-90; NPSAS:87 used a mailed questionnaire.

For more information on the NPSAS surveys, consult the three corresponding
methodology reports: *Methodology Report for the National Postsecondary
Student Aid Study* (Washington, D.C.: U.S. Department of Education,
1987, 1989, and 1992).

The Beginning Postsecondary Student Longitudinal Study (BPS) follows NPSAS:90 students who enrolled in postsecondary education for the first time in 1989-90. The first followup was conducted in spring 1992 and the second in spring 1994. BPS collected information from students on their persistence, progress, and attainment and on their labor force experience using a CATI. Approximately 8,000 students were included in the BPS sample with an overall response rate of 91 percent.

Unlike other NCES longitudinal surveys (such as High School and
Beyond) which are based on age-specific cohorts, the BPS sample
is more likely to include some of the increasing numbers of "nontraditional"
postsecondary students, such as those who have delayed their education
due to financial needs or family responsibilities. Students who
began their postsecondary studies during some other period and
then returned to them in 1989-90, however, were not included,
nor were those who were still enrolled in high school.

**Accuracy of Estimates**

The statistics in this report are estimates derived from a sample. Two broad categories of error occur in such estimates: sampling and nonsampling errors. Sampling errors occur because observations are made only on samples of students, not on entire populations. Nonsampling errors occur not only in sample surveys but also in complete censuses of entire populations.

Nonsampling errors can be attributed to a number of sources: inability
to obtain complete information about all students in all institutions
in the sample (some students or institutions refused to participate,
or students participated but answered only certain items); ambiguous
definitions; differences in interpreting questions; inability
or unwillingness to give correct information; mistakes in recording
or coding data; and other errors of collecting, processing, sampling,
and imputing missing data.

The estimates presented in this report were produced using the
NPSAS:87, NPSAS:90, and NPSAS:93 Undergraduate Data Analysis
Systems (DAS), and the BPS:90/94 DAS. The DAS software makes it
possible for users to specify and generate their own tables from
the NPSAS data. With the DAS, users can re-create or expand upon
the tables presented in this report. In addition to the table
estimates, the DAS calculates proper standard errors[36] and weighted
sample sizes for these estimates. For example, tables B1 and B2
present the standard errors that correspond to tables 2 and
12, respectively, in the text. If the number of valid cases is
too small to produce an estimate, the DAS prints the message "low-N"
instead of the estimate.

[36] The NPSAS and BPS samples are not simple random samples and, therefore, simple random sample techniques for estimating sampling error cannot be applied to these data. The DAS takes into account the complexity of the sampling procedures and calculates standard errors appropriate for such samples. The method for computing sampling errors used by the DAS involves approximating the estimator by the linear terms of a Taylor series expansion. The procedure is typically referred to as the Taylor series method.

In addition to tables, the DAS will also produce a correlation matrix of selected variables to be used for linear regression models. Included in the output with the correlation matrix are the design effects (DEFT) for all the variables identified in the matrix. Since statistical procedures generally compute regression coefficients based on simple random sample assumptions, the standard errors must be adjusted with the design effects to take into account the NPSAS stratified sampling method. (See discussion under "Statistical Procedures" below for the adjustment procedure.)

*For more information about the NCES NPSAS:87, NPSAS:90, NPSAS:93,
and BPS:90/94 Data Analysis Systems, contact:*

Aurora D'Amico, (202) 502-7334, email: Aurora.D'Amico@ed.gov

**Statistical Procedures**

Two types of statistical procedures were employed in this report:
testing differences between means, and adjustment of means after
controlling for covariation among a group of variables. Each procedure
is described below.

*Differences Between Means*

The descriptive comparisons were tested in this report using Student's
*t* statistic. Differences between estimates are tested against
the probability of a Type I error, or significance level. The
significance levels were determined by calculating the Student's
*t* values for the differences between each pair of means
or proportions and comparing these with published tables of significance
levels for two-tailed hypothesis testing.

Student's *t* values may be computed to test the difference
between estimates with the following formula:

*t* = (*E*_{1} - *E*_{2}) / sqrt(*se*_{1}^{2} + *se*_{2}^{2})   (1)

where *E*_{1} and *E*_{2} are the estimates
to be compared and *se*_{1} and *se*_{2}
are their corresponding standard errors. Note that this formula
is valid only for independent estimates. When the estimates were
not independent (for example, when comparing the percentages across
a percentage distribution), a covariance term was added to the
denominator of the *t*-test formula.
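Equation 1 can be applied directly to a pair of independent estimates. A minimal sketch in Python, with illustrative estimates and standard errors (the numbers are not taken from the report):

```python
from math import sqrt

def t_statistic(e1, se1, e2, se2):
    """Student's t for the difference between two independent estimates
    (equation 1): t = (E1 - E2) / sqrt(se1^2 + se2^2)."""
    return (e1 - e2) / sqrt(se1**2 + se2**2)

# Illustrative values: two percentages and their standard errors.
t = t_statistic(45.0, 1.2, 40.0, 1.5)  # roughly 2.60
```

The resulting *t* would then be compared with a two-tailed critical value; as the text notes, this formula does not apply to dependent estimates, for which a covariance term must be added to the denominator.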

There are hazards in reporting statistical tests for each comparison.
First, comparisons based on large *t* statistics may appear
to merit special attention. This can be misleading, since the
magnitude of the *t* statistic is related not only to the
observed differences in means or percentages but also to the number
of students in the specific categories used for comparison. Hence,
a small difference compared across a large number of students
would produce a large *t* statistic.

A second hazard in reporting statistical tests for each comparison occurs when making multiple comparisons among categories of an independent variable. For example, when making paired comparisons among different levels of income, the probability of a Type I error for these comparisons taken as a group is larger than the probability for a single comparison. When more than one difference between groups of related characteristics or "families" is tested for statistical significance, one must apply a standard that assures a level of significance for all of those comparisons taken together.

Comparisons were made in this report only when p <
.05/*k* for a particular pairwise comparison, where that
comparison was one of *k* tests within a family. This guarantees
both that the individual comparison would have p <
.05 and that for *k* comparisons within a family of
possible comparisons, the significance level for all the comparisons
will sum to p < .05.[37]

[37] The standard that p < .05/*k* for each comparison is more stringent than the criterion that the significance level of the comparisons should sum to p < .05. For tables showing the *t* statistic required to ensure that p < .05/*k* for a particular family size and degrees of freedom, see Olive Jean Dunn, "Multiple Comparisons Among Means," *Journal of the American Statistical Association* 56 (1961): 52-64.

For example, in a comparison of the percentages of males and females
who enrolled in postsecondary education only one comparison is
possible (males versus females). In this family, *k*=1, and
the comparison can be evaluated without adjusting the significance
level. When students are divided into five racial-ethnic groups
and all possible comparisons are made, then *k*=10 and the
significance level of each test must be p < .05/10, or p < .005.
The formula for calculating family size (*k*) is as follows:

*k* = *j*(*j* - 1)/2   (2)

where *j* is the number of categories for the variable being
tested. In the case of race-ethnicity, there are five racial-ethnic
groups (American Indian, Asian/Pacific Islander, black non-Hispanic,
Hispanic, and white non-Hispanic), so substituting 5 for *j*
in equation 2 yields *k* = 5(5 - 1)/2 = 10.
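The family-size calculation (equation 2) and the resulting per-test significance level can be sketched in a few lines:

```python
def family_size(j):
    """Number of pairwise comparisons among j categories (equation 2):
    k = j*(j-1)/2."""
    return j * (j - 1) // 2

# Five racial-ethnic groups -> k = 10 pairwise comparisons, so each test
# must meet p < .05/10 = .005 to hold the family-wise level at .05.
k = family_size(5)
alpha_per_test = 0.05 / k
```

For the two-category case (e.g., males versus females), `family_size(2)` returns 1 and no adjustment is needed, matching the example in the text.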

*Adjustment of Means*

Tabular results are limited by sample size when attempting to control for additional factors that may account for the variation observed between two variables. For example, when examining the percentages of those who completed a degree, it is impossible to know to what extent the observed variation is due to socioeconomic status (SES) differences and to what extent it is due to differences in other factors related to SES, such as type of institution attended, intensity of enrollment, and so on. However, if a nested table were produced showing SES within type of institution attended, within enrollment intensity, the cell sizes would be too small to identify the patterns. When the sample size becomes too small to support controls for another level of variation, one must use other methods to take such variation into account.

To overcome this difficulty, multiple linear regression was used
to obtain means that were adjusted for covariation among a list
of control variables. Adjusted means for subgroups were obtained
by regressing the dependent variable on a set of descriptive variables
such as gender, race-ethnicity, SES, etc. Substituting ones or
zeros for the subgroup characteristic(s) of interest and the mean
proportions for the other variables results in an estimate of
the adjusted proportion for the specified subgroup, holding all
other variables constant. For example, consider a hypothetical
case in which two variables, age and gender, are used to describe
an outcome,* Y* (such as completing a degree). The variables
age and gender are recoded into a dummy variable representing
age and a dummy variable representing gender:

Age (*A*):

24 years or older = 1

Under 24 years old = 0

and

Gender (*G*):

Female = 1

Male = 0

The following regression equation is then estimated from the correlation
matrix output from the DAS:

Y = a + b_{1}*A* + b_{2}*G*   (3)

To estimate the adjusted mean for any subgroup evaluated at the
mean of all other variables, one substitutes the appropriate values
for that subgroup's dummy variables (1 or 0) and the mean for
the dummy variable(s) representing all other subgroups. For example,
suppose we had a case where Y was being described by age (*A*)
and gender (*G*), coded as shown above, and the means for
*A* and *G* are as follows:

| Variable | Mean  |
|----------|-------|
| *A*      | 0.355 |
| *G*      | 0.521 |

Suppose the regression equation results in:

Ŷ = 0.15 + (0.17)*A* + (0.01)*G*   (4)

To estimate the adjusted value for older students, one substitutes
the appropriate parameter values into equation 4.

| Variable | Parameter | Value |
|----------|-----------|-------|
| a        | 0.15      | --    |
| *A*      | 0.17      | 1.000 |
| *G*      | 0.01      | 0.521 |

This results in:

Ŷ = 0.15 + (0.17)(1) + (0.01)(0.521) = 0.325   (5)

In this case the adjusted mean for older students is 0.325 and represents the expected outcome for older students who look like the average student across the other variables (in this example, gender).
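The substitution in equations 4 and 5 can be checked with a few lines of Python, using the coefficients from the worked example above:

```python
# Coefficients from equation 4 and the mean of the gender dummy G.
a, b1, b2 = 0.15, 0.17, 0.01
mean_G = 0.521

def adjusted_mean(A, G):
    """Evaluate the fitted regression (equation 4) at the given dummy values."""
    return a + b1 * A + b2 * G

# Adjusted mean for older students: A = 1, G held at its mean (equation 5).
y_older = adjusted_mean(A=1, G=mean_G)
print(round(y_older, 3))  # prints 0.325
```

Setting `A=0` instead gives the adjusted mean for younger students at the average of the other variable, illustrating how each subgroup is evaluated "holding all other variables constant."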

It is relatively straightforward to produce a multivariate model using NPSAS or BPS:90/94 data, since one of the output options of the DAS is a correlation matrix, computed using pair-wise missing values.[38] This matrix can be used by most commercial regression packages as the input data to produce least-squares regression estimates of the parameters. That was the general approach used for this report, with two additional adjustments described below to incorporate the complex sample design into the statistical significance tests of the parameter estimates.

[38] Although the DAS simplifies the process of making regression models, it also limits the range of models. Analysts who wish to use other than pairwise treatment of missing values or to estimate probit/logit models can apply for a restricted data license from NCES.

Most commercial regression packages assume simple random sampling when computing standard errors of parameter estimates. Because of the complex sampling design used for the NPSAS and BPS surveys, this assumption is incorrect. A better approximation of their standard errors is to multiply each standard error by the average design effect of the dependent variable (DEFT),[39] where the DEFT is the ratio of the true standard error to the standard error computed under the assumption of simple random sampling. It is calculated by the DAS and produced with the correlation matrix.

[39] The adjustment procedure and its limitations are described in C.J. Skinner, D. Holt, and T.M.F. Smith, eds., *Analysis of Complex Surveys* (New York: John Wiley & Sons, 1989).
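As a sketch of this adjustment (the standard errors and DEFT value below are invented for illustration, not taken from any DAS output):

```python
# Standard errors as reported by an ordinary regression package, which
# assumes simple random sampling (values invented).
srs_standard_errors = [0.012, 0.034, 0.021]

# Average design effect of the dependent variable: the ratio of the true
# standard error to the SRS standard error (value invented; in practice
# produced by the DAS with the correlation matrix).
deft = 1.6

# Inflate each SRS standard error by the DEFT to approximate the standard
# errors appropriate to the complex NPSAS/BPS sample design.
adjusted = [se * deft for se in srs_standard_errors]
```

Because DEFT is at least 1 for designs like this, the adjustment widens confidence intervals and makes the significance tests of the regression coefficients more conservative.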

**Table B1.—Standard errors corresponding to table 2**

| Group | Year | Percent with any NT characteristics | Older than typical | Attend part time | Work full time | Independent/1 | Have dependents | Single parent | GED or high school completion certificate |
|---|---|---|---|---|---|---|---|---|---|
| All undergraduates | 86 | 0.86 | 0.82 | 1.01 | 0.71 | 0.77 | 0.51 | 0.26 | 0.28 |
| | 89 | 0.81 | 0.90 | 1.05 | 0.64 | 0.88 | 0.62 | 0.35 | 0.25 |
| | 92 | 0.76 | 0.75 | 0.95 | 0.66 | 0.74 | 0.51 | 0.26 | 0.24 |
| Minimally nontraditional | 86 | 0.30 | 1.12 | 1.04 | 0.77 | 0.69 | 0.00 | 0.00 | 0.46 |
| | 89 | 0.32 | 1.04 | 1.25 | 1.11 | 0.52 | 0.00 | 0.00 | 0.21 |
| | 92 | 0.30 | 1.13 | 1.20 | 0.68 | 0.58 | 0.00 | 0.00 | 0.16 |
| Moderately nontraditional | 86 | 0.49 | 0.62 | 0.54 | 0.96 | 1.08 | 0.62 | 0.12 | 0.48 |
| | 89 | 0.48 | 0.54 | 1.23 | 0.93 | 0.89 | 0.62 | 0.20 | 0.35 |
| | 92 | 0.50 | 0.33 | 1.04 | 0.82 | 0.77 | 0.50 | 0.24 | 0.31 |
| Highly nontraditional | 86 | 0.68 | 0.14 | 0.13 | 0.95 | 0.11 | 0.98 | 0.85 | 0.77 |
| | 89 | 0.77 | 0.13 | 0.78 | 0.87 | 0.07 | 0.90 | 0.95 | 0.71 |
| | 92 | 0.61 | 0.18 | 0.79 | 0.91 | 0.05 | 0.79 | 0.89 | 0.70 |

NOTE: For the nontraditional undergraduate groups, the first data column is the standard error of the total percent with that status.

**SOURCES:** U.S. Department of Education, National Center
for Education Statistics, National Postsecondary Student Aid Study:
1986-87 (NPSAS:87), 1989-90 (NPSAS:90), 1992-93 (NPSAS:93), Data
Analysis Systems.

**Table B2.—Standard errors corresponding to table 12**

| | Attained any degree | No degree attained, enrolled in 1994 | No degree attained, not enrolled in 1994 |
|---|---|---|---|
| Total | 1.09 | 0.76 | 1.07 |
| Traditional | 1.40 | 0.95 | 1.22 |
| Nontraditional | 1.43 | 1.08 | 1.49 |
| Minimally nontraditional | 2.08 | 1.59 | 1.88 |
| Moderately nontraditional | 2.50 | 2.02 | 2.85 |
| Highly nontraditional | 3.07 | 2.25 | 3.09 |

**SOURCE:** U.S. Department of Education, National Center for
Education Statistics, Beginning Postsecondary Students Longitudinal
Study, Second Followup (BPS:90/94).