This article was originally published as the Executive Summary of the technical report of the same name. The sample survey data are from the Beginning Postsecondary Students Longitudinal Study (BPS).
The 1996 Beginning Postsecondary Students Longitudinal Study (BPS) follows a cohort of students who started their postsecondary education during the 1995-96 academic year. Students were first interviewed during 1996 as part of the 1995-96 National Postsecondary Student Aid Study (NPSAS:1996). BPS:1996/1998 is the first follow-up of this cohort. A second follow-up in 2001 will monitor academic progress through 6 years and assess whether students in 4-year programs completed their degrees within the time normally expected. A third follow-up, scheduled to occur in 2003, 7 to 8 years after college entry, will allow for analysis of attainment among students who started working on a baccalaureate degree in 1995-96.
This technical report describes the methods and procedures used for the full-scale data collection effort of BPS:1996/1998. The report begins by presenting the background and purposes of the BPS full-scale study. Next, the design and methodology of the study are described, and overall outcomes of data collection and evaluations of the quality of the data collected are provided. Discussions of data file construction and of weighting and variance estimation are presented in the final chapters. Materials used during the full-scale study are provided as appendices to the report.
The respondent universe for the BPS:1996/1998 full-scale study consisted of all students who began their postsecondary education for the first time during the 1995-96 academic year at any postsecondary institution in the United States or Puerto Rico. The sample students were first-time beginners (FTBs) who attended postsecondary institutions eligible for inclusion in NPSAS:1996 and who were themselves NPSAS eligible; that is, students eligible for BPS:1996/1998 were those NPSAS:1996-eligible students who were FTBs at NPSAS sample institutions in the 1995-96 academic year. BPS:1996/1998 interviews were attempted for 11,985 NPSAS:1996 computer-assisted telephone interview (CATI) respondents. In addition, 425 NPSAS:1996 nonrespondents who were potential FTBs were sampled for follow-up to improve upon the nonresponse bias reduction achieved through the nonresponse adjustments incorporated into the NPSAS:1996 statistical analysis weights. To increase both the sample yield and the weighted effective response rate, a subsample of 300 nonfinalized CATI nonrespondents was selected for more intensive data collection efforts.
Section A of the BPS interview determined both eligibility for NPSAS:1996 and status as an FTB for those individuals who were nonrespondents during the NPSAS:1996 interview. It also collected background information for NPSAS:1996 partial respondents who missed key items during the base-year interview. Sections B through G collected new and updated information on postsecondary enrollment, employment, income, family formation/household composition, student financial aid, debts, education experiences, and education and career aspirations. The final section updated locating information so that sample members could be located more easily during the second follow-up.
Three months prior to the start of data collection, a package was mailed to parents and/or other contacts to update the most recent student addresses and gain cooperation by explaining the purposes of the study. A standard lead letter was then mailed to students 2 weeks prior to the start of data collection to inform them of the upcoming interview and obtain additional postal service address updates. New contact information was preloaded into the CATI instrument to assist in locating sample members. Cases not located during the CATI-internal locating process were worked through one or more CATI-external locating procedures.
Training of interviewers
For BPS:1996/1998, project staff developed two separate training programs: one for telephone interviewers and supervisors, who collected data through CATI; and one for field interviewers and supervisors, who conducted interviews through computer-assisted personal interviews (CAPI). Training topics covered administrative procedures, including confidentiality requirements and quality control techniques; student locating; interactions with students; the nature of the data to be collected; and the organization and operation of the CATI and CAPI programs used for data collection.
Telephone interviewing
CATI locating and interviewing began in the spring of 1998. The initial CATI sample consisted of verified FTBs who had been located and interviewed successfully in the NPSAS:1996 full-scale data collection and for whom locating information was available. Additionally, sampled NPSAS:1996 nonrespondents for whom new or verified locating information was obtained were included in the CATI sample. The remaining sample members became part of the initial field tracing and interviewing sample. Field locating and interviewing activities began approximately 3 months after the start of CATI interviewing so that a sufficient number of cases would be available to be worked in each of the 34 geographic clusters.
Overall contacting and interviewing results
Overall contacting and interviewing results are shown in figure 1. Of the 12,410 students in the original sample, 11,184 were located and contacted, and 166 were excluded (out of scope) because they were deceased, out of the country, institutionalized or physically/mentally incapacitated,1 had no phone, or were otherwise unavailable for the entire data collection period. Among the contacted subsample, 10,332 were interviewed, 10,268 of whom were verified FTBs. The unweighted contact rate, exclusive of those out of scope, was 91.3 percent (11,184/12,244). For those contacted, the interview rate was 92.3 percent (10,268/11,120). The overall unweighted response rate was 84.3 percent (91.3 × 92.3).
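As a check on this arithmetic, the short Python sketch below reproduces the unweighted rates from the counts reported above; the variable names are ours, and the 11,120 denominator is the one given in the text, which appears to exclude the 64 interviewed cases that were not verified FTBs.

```python
# Reproduce the unweighted contacting and interviewing rates reported above.
original_sample = 12410   # 11,985 NPSAS:1996 CATI respondents + 425 sampled nonrespondents
out_of_scope = 166        # deceased, out of the country, institutionalized, no phone, etc.
contacted = 11184
interviewed = 10332       # all completed interviews
verified_ftbs = 10268     # interviewed cases confirmed as first-time beginners

in_scope = original_sample - out_of_scope                      # 12,244
contact_rate = contacted / in_scope                            # ~0.913
eligible_contacts = contacted - (interviewed - verified_ftbs)  # 11,120 (denominator used in the report)
interview_rate = verified_ftbs / eligible_contacts             # ~0.923
overall_response_rate = contact_rate * interview_rate          # ~0.843

print(f"{contact_rate:.1%}, {interview_rate:.1%}, {overall_response_rate:.1%}")
# 91.3%, 92.3%, 84.3%
```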
Refusal conversion
Efforts to gain cooperation from sample members included refusal conversion procedures. Fifteen percent (1,928 cases) refused to be interviewed at some point during data collection. When a case initially refused to participate, it was referred to a refusal conversion specialist, who called the sample member to try to gain full cooperation with the interview. When full cooperation could not be obtained, an abbreviated interview was attempted to obtain key information. Fifty-three percent of the refusals (1,018 cases) were converted.
Partial responses
Of the 10,268 verified FTBs who were interviewed, full interviews were completed for 9,812 sample members, partial interviews for 113, and abbreviated interviews for 343. An interview was considered partial if at least section B (enrollment information) of the main interview was completed, but not the full interview.
Field interviewing
A total of 2,094 cases were assigned to field interviewers. Cases were selected for a number of reasons, including Puerto Rico residence, inability to locate the sample member in CATI, refusal in CATI, or having been extensively worked in CATI without reaching the subject. Only cases located in close geographic proximity to a field interviewer were assigned to the field. Seventy percent of the field cases were contacted (in either CATI or the field), and 70 percent of those contacted were interviewed.
Timing
The average administration time for the full-scale interview was 20 minutes, which was 2 minutes shorter than the field test and 9 minutes shorter than the NPSAS:1996 full-scale interview. On average, NPSAS:1996 nonrespondents took 5 minutes longer to complete the interview than NPSAS:1996 respondents. Section A, which was skipped by NPSAS:1996 full respondents, accounts for the majority of this additional time.
Indeterminate responses
Overall item nonresponse rates were low, with only 10 of the 363 items containing over 10 percent missing data. Items with the highest rates of nonresponse were those pertaining to income. Many respondents were reluctant to provide information about personal and family finances and, among those who were not, many simply did not know.
Figure 1. Contacting and interviewing outcomes
SOURCE: U.S. Department of Education, National Center for Education Statistics, 1996 Beginning Postsecondary Students Longitudinal Study, "First Follow-up" (BPS:1996/1998).
Online coding
The BPS:1996/1998 instrument included tools that allowed computer-assisted online assignment of codes to literal responses for postsecondary education institution, major field of study, occupation, and industry. Ten percent of the major, occupation, and industry coding results were sampled and examined on a regular basis during data collection. Approximately 2 to 9 percent of the verbatim text strings were too vague to properly evaluate. Additionally, 5 to 10 percent of the strings were recoded, although very few resulted in a shift across broad categories.
Quality control monitoring
Monitors listened to up to 20 questions during an ongoing interview and, for each question, evaluated two aspects of the interviewer-respondent interchange: whether the interviewer delivered the question correctly and whether the interviewer keyed the appropriate response. Over 14,000 items were monitored during the data collection period. The majority of the monitoring data was collected during the first half of data collection.
The sample for BPS:1996/1998 includes not only the students who were identified as FTBs in their NPSAS:1996 interviews, but also a subsample of NPSAS:1996 nonrespondents who were considered potential FTBs at the conclusion of that study. Therefore, computation of the statistical analysis weights for BPS:1996/1998 consisted of two primary steps: computing special-purpose NPSAS:1996 weights that account for the follow-up of NPSAS:1996 nonrespondents within BPS:1996/1998, and computing the BPS:1996/1998 analysis weights from those special-purpose NPSAS:1996 weights.
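The sketch below illustrates, in schematic form only, how such a two-step weight computation might look. The column names (npsas_base_weight, npsas_respondent, subsampling_rate, cell, bps_respondent) are hypothetical, and the actual BPS:1996/1998 adjustments described in the report are considerably more detailed.

```python
import pandas as pd

def special_purpose_npsas_weights(df: pd.DataFrame) -> pd.DataFrame:
    """Step 1 (illustrative): inflate the base weights of the subsampled
    NPSAS:1996 nonrespondents so that they represent all NPSAS:1996
    nonrespondents followed up within BPS:1996/1998."""
    out = df.copy()
    out["sp_weight"] = out["npsas_base_weight"]
    nonresp = out["npsas_respondent"] == 0
    out.loc[nonresp, "sp_weight"] = (
        out.loc[nonresp, "npsas_base_weight"] / out.loc[nonresp, "subsampling_rate"]
    )
    return out

def bps_analysis_weights(df: pd.DataFrame) -> pd.DataFrame:
    """Step 2 (illustrative): adjust the special-purpose weights for
    BPS:1996/1998 nonresponse within weighting cells, so that respondents
    carry the weight of nonrespondents in the same cell."""
    out = df.copy()
    cell_total = out.groupby("cell")["sp_weight"].transform("sum")
    resp_total = (
        out.assign(_resp_wt=out["sp_weight"] * out["bps_respondent"])
           .groupby("cell")["_resp_wt"].transform("sum")
    )
    out["analysis_weight"] = out["sp_weight"] * (cell_total / resp_total)
    out.loc[out["bps_respondent"] == 0, "analysis_weight"] = 0.0
    return out
```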
The cumulative effect of the various factors affecting the precision of a survey statistic is often modeled as the survey design effect. The design effect is defined as the ratio of the sampling variance of the statistic under the actual sampling design to the variance that would be expected for a simple random sample of the same size. Hence, the design effect is unity (1.00), by definition, for simple random samples. For most practical sampling designs, the survey design effect is greater than unity, reflecting that the precision is less than could be achieved with a simple random sample of the same size (if such a design were practical). The size of the survey design effect depends largely on the sample size and the intracluster correlation within the primary sampling units; hence, statistics based on observations that are highly correlated within institutions will have higher design effects for BPS. In order to provide an approximate characterization of the precision with which BPS:1996/1998 survey statistics can be estimated, the full report includes a short series of tables that provide estimates of key statistics, their standard errors, and the estimated survey design effects.
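In symbols, the design effect is DEFF = (design-based sampling variance) / (variance under simple random sampling of the same size). The small function below illustrates the calculation for an estimated proportion; the example values are invented for illustration and are not taken from the BPS tables.

```python
def design_effect_proportion(se_design: float, p_hat: float, n: int) -> float:
    """Design effect for an estimated proportion: the design-based sampling
    variance divided by the variance expected under simple random sampling
    of the same size (binomial variance p(1 - p)/n)."""
    var_design = se_design ** 2
    var_srs = p_hat * (1 - p_hat) / n
    return var_design / var_srs

# Invented example: a proportion of 0.60 estimated from n = 10,268 respondents
# with a design-based standard error of 0.006 gives a design effect of about 1.54,
# i.e., the clustered design is less precise than a simple random sample of equal size.
print(round(design_effect_proportion(se_design=0.006, p_hat=0.60, n=10268), 2))
```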
Although there are many other potential sources of bias, one of the most important sources of bias in sample surveys is survey nonresponse. Survey nonresponse results in bias when the unobserved outcomes for the nonrespondents are systematically different from the observed outcomes for the respondents. Because sample members who respond only late in data collection are likely to resemble those who never respond, we can assess the potential for nonresponse bias by modeling the pattern of mean response by date of response. We first used the date of interview (or date of last access for non-CATI responses) to subdivide the 10,268 survey respondents into 10 groups of approximately 1,000 respondents each. Then, within each institution level (less-than-2-year, 2-year, and 4-year), we again subdivided all respondents into 10 groups of approximately equal numbers of respondents. This strategy was adopted so that the mean response in each group would have approximately the same precision. However, it also resulted in respondent groups with shorter ranges of dates at the beginning of data collection because relatively larger numbers of interviews were completed during the first few months of data collection.

We examined the pattern of cumulative mean response by date of interview for the following: mean age in the base year; percent minority; percent enrolled in spring 1998; percent who attained a degree by June 1998; and mean number of risk factors. In addition, for all students combined, we examined the mean of the institution level attended in the base year. For students who attended 4-year institutions in the base year, we examined the percentage who reported in the base year that they were attempting a baccalaureate degree. If the mean responses from the later groups of respondents are reasonably consistent, then obtaining additional responses probably will have little effect on survey estimates and nonresponse bias probably is negligible. Some potential for bias by institution level was evident for overall population estimates because it appears that additional respondents would be more likely to have attended less-than-4-year institutions. The only other evidence of potential for bias was with respect to the percentage of respondents who were enrolled in the spring of 1998. For students from 4-year institutions and for the sample as a whole, it appears that additional respondents would be more likely to have not been enrolled in the spring of 1998.
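A rough sketch of this kind of check, assuming respondent-level data with hypothetical columns such as interview_date and an outcome of interest: respondents are ordered by date, cut into 10 groups of roughly equal size, and the cumulative mean of the outcome is tracked as each later group is added.

```python
import numpy as np
import pandas as pd

def cumulative_means_by_response_date(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Order respondents by interview date, split them into 10 groups of
    approximately equal size, and compute the cumulative mean of `outcome`
    as each successive group of later respondents is added."""
    ordered = df.sort_values("interview_date").reset_index(drop=True)
    # Assign group 0-9 so that each group has roughly one-tenth of the respondents.
    ordered["group"] = np.floor(np.arange(len(ordered)) * 10 / len(ordered)).astype(int)
    rows = []
    for g in range(10):
        cumulative = ordered[ordered["group"] <= g]
        rows.append({
            "group": g + 1,
            "latest_date": cumulative["interview_date"].max(),
            "cumulative_mean": cumulative[outcome].mean(),
        })
    return pd.DataFrame(rows)

# If the cumulative mean is essentially flat across the later groups, additional
# respondents would likely change the estimate very little, suggesting that
# nonresponse bias for that outcome is negligible.
```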
Footnotes
1 Sample members were identified as institutionalized or physically/mentally incapacitated by parents or other contacts.