This article was excerpted from the technical report of the same name. The sample survey data are from the Second Follow-up to the Baccalaureate and Beyond Longitudinal Study (B&B).
The Baccalaureate and Beyond Longitudinal Study (B&B) tracks the experiences of a cohort of college graduates who received their baccalaureate degree during the 1992-93 academic year and were first interviewed in 1993 as part of the National Postsecondary Student Aid Study (NPSAS:93). This group's experiences in the areas of academic enrollments, degree completions, employment, public service, and other adult decisions will be followed for about 12 years, in a series of four follow-up interviews.
Schedule and purpose of the B&B interviews
The first follow-up interview (B&B:93/94) collected information from respondents in 1994, 1 year after they received their bachelor's degrees. This report concerns the second follow-up interview (B&B:93/97), which collected data 4 years after bachelor's degree receipt. The next interview is planned for 9 years after graduation. By the time of the final interview, most students who attend graduate or professional schools should have completed, or nearly completed, their education and be established in their careers. The B&B study provides data to address issues in four major areas of education policy: outcomes of postsecondary attainment; access to graduate and professional schools; rates of return on investment in a bachelor's degree; and patterns of preparation for, and engagement in, teaching. With its wealth of data on the consequences of postsecondary education, B&B will contribute to the study of education as a lifelong process.
Content of this report
This report documents B&B:93/97 methodology, examining sample design, instrument development and data collection, response rates, efficacy of the survey instrument, and weights and design effects. Also included in the report are reference materials such as letters and other information sent to members of the B&B:93/97 sample; a list of variables for B&B:93/97; and the survey instruments for NPSAS:93, B&B:93/94, and B&B:93/97.
The B&B sample design represents all postsecondary students in the United States who completed a bachelor's degree in academic year 1992-93. The B&B sample is a subsample of the students selected for the NPSAS:93 sample, a nationally representative sample of all postsecondary students.1
Sample for the first follow-up
The B&B:93/94 sample included those students in the NPSAS:93 sample who were identified either by the institution or during the student interview as having completed a bachelor's degree in the 1992-93 academic year (July 1, 1992, through June 30, 1993). In addition to retaining all 11,180 of the 1992-93 baccalaureate recipients who completed the NPSAS:93 interview, B&B:93/94 also retained subsamples of nonrespondents and of remaining eligible cases for which at least some data were available.2 Altogether, the B&B:93/94 sample included 12,478 cases.
Sample for the second follow-up
After B&B:93/94 data collection was complete, additional cases in the initial follow-up sample were found to be ineligible for B&B (Green et al. 1996). People were retained for follow-up in later rounds of the study if they were eligible either according to the student interview (10,080 people) or according to transcripts (an additional 1,094 people). Also included were 18 cases for which eligibility remained unknown in both the interview and the transcripts. Altogether, therefore, 11,192 cases were retained for future rounds, including B&B:93/97. During B&B:93/97 data collection, 30 of these cases were found to be either out of scope (29 cases) or ineligible (1 case), reducing the number of eligible cases to 11,162.3
The B&B:93/97 instrument was a modified version of the B&B:93/94 instrument, shortened and revised based on results of the B&B:93/97 field test, input from the 23-member Technical Review Panel, and additional review and testing.
Revision of questionnaire items
Items were dropped mainly for lack of reliability or usefulness. Topics for descriptive reports were identified and then used as a guide to determine which questionnaire items could be dropped and which should be retained, revised, or clarified. Most of the items excluded from the second follow-up main study instrument were from the demographic section (e.g., questions about high school grades, income of other household members, and access to computers). The most extensively revised portion of the instrument was the teaching section. A new definition of what constitutes the "teacher pipeline" was used to redesign the initial filter questions for this section. Another important revision was moving the teaching section to precede the employment section, so that data about teaching jobs were collected before data about other (nonteaching) jobs. The intended effect was to reduce respondent burden relative to the first follow-up, when data were first collected about all jobs and then again about teaching jobs.
Incorporation of online coding systems
The B&B:93/97 instrument was designed to use five online coding systems developed by the National Center for Education Statistics (NCES). These coding systems enabled interviewers to code responses during the interview; they also guided interviewers' probes of any unclear or incomplete answers. These systems were used to code (1) occupation, (2) industry, (3) major field of study, (4) postsecondary schools attended, and (5) for teachers, the elementary and secondary schools where they taught.
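The NCES coding utilities themselves are not reproduced here, but the basic workflow they supported (the interviewer keys in the respondent's verbatim answer and then selects from codes the program suggests) can be sketched roughly as follows. The code list and matching rule below are illustrative assumptions, not the actual NCES taxonomies or software.

```python
# Rough sketch of an online coding lookup: the interviewer enters the respondent's
# verbatim answer and the utility suggests candidate codes to choose from.
# The occupation codes below are invented for illustration, not the NCES codebook.

OCCUPATION_CODES = {
    "2310": "Elementary school teacher",
    "2320": "Secondary school teacher",
    "3210": "Registered nurse",
    "1020": "Computer programmer",
}

def suggest_codes(verbatim: str, codebook: dict, max_hits: int = 5) -> list:
    """Return (code, title) pairs whose titles share at least one word with the verbatim text."""
    words = set(verbatim.lower().split())
    hits = [(code, title) for code, title in codebook.items()
            if words & set(title.lower().split())]
    return hits[:max_hits]

# The interviewer keys in the respondent's own words, then picks one of the suggestions.
print(suggest_codes("high school teacher", OCCUPATION_CODES))
```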
In the spring of 1997, an advance mailing containing a letter and informational leaflet was sent to all 11,192 of the B&B:93/97 sample members. Data collection for the second follow-up began in early April, approximately 1 week after the advance mailing, and continued through December of 1997. Respondents were interviewed using one of two computer-assisted interviewing (CAI) systems. The majority of interviews were conducted by telephone interviewers located at a central facility using a computer-assisted telephone interviewing (CATI) system. These interviews were completed between April and July of 1997. The remaining cases were completed by field interviewers using a computer-assisted personal interviewing (CAPI) and case management system (CMS) that was loaded into their individual laptop computers. Most of these interviews were also conducted by telephone, but some were administered in person. These cases were completed between July and December of 1997.
Interviewer preparation and quality control
Following a training period, all interviewers completed a mock interview with a supervisor or field manager, who ensured that they were ready to begin working their cases. To ensure data quality, the following procedures were used throughout the data collection phase: monitoring CATI (telephone facility) interviews on a random basis; checking the quality of CAPI (field) interviews by recontacting and briefly questioning a random selection of respondents; recoding a sample of entries from each of the five online coding programs; producing and reviewing production statistics for both CATI and CAPI interviewers on a daily basis; and reviewing item frequencies as well as "time stamps" that show the amount of time taken to complete each section of the interview.
CATI production
As shown in figure A, all case records for the sample were loaded into the CATI telephone number management system (TNMS), which automatically delivered the cases to interviewers, tracked progress on all cases, and categorized each case based on the outcome of the previous telephone call. Over a period of 16 weeks, approximately 100 telephone center interviewers completed a total of 7,139 cases (63.9 percent of the 11,162 eligible cases). The number of calls per completed case is the best indicator of the level of effort required in the interviewing task. The number of CATI calls made to complete a case averaged 18.5 for the B&B:93/97 sample, compared to an average of 13.4 CATI calls for the B&B:93/94 sample. These data indicate that a much higher level of effort was required to complete cases in 1997. This was largely due to the much higher number of locating problems encountered (interviewers were much less likely to find sample members at their preloaded phone numbers or still living with their parents) and also reflects the busier lifestyles of the majority of sample members, who may have more career and family responsibilities than they had 3 years earlier.
Figure A. Paths toward case completion
*Cases could be designated as locating or refusal problems, or both. SOURCE: U.S. Department of Education, National Center for Education Statistics, 1993 Baccalaureate and Beyond Longitudinal Study, Second Follow-up (B&B:93/97). (Originally published as figure 4.1 on p. 14 of the complete report from which this article is excerpted.)
CAPI operations
After interviewing at the telephone center was halted, all pending cases were transferred to field staff working in different regions of the United States. In addition to 58 telephone field interviewers, a total of 112 in-person field interviewers were hired as needed for locally based assistance in locating respondents or contacting respondents in person. A total of 4,000 cases (35.8 percent of the total sample) were sent to the field (figure A). All were cases that the telephone center had been unable to complete because the respondent refused, was evasive, or had not yet been located. Locating problems were the most significant deterrent to field production. About halfway through the field interviewing period, therefore, field staff were reconfigured into task-specific groups, which were able to handle the problems encountered more efficiently. Over a period of 23 weeks, field interviewers completed a total of 2,954 cases (73.8 percent of cases that were sent to the field and 26.5 percent of all eligible cases).
Respondent locating
The B&B:93/97 field test had shown that more than half of sample members, rather than the expected one-third, required locating. Prior to data collection, therefore, all cases were sent to a credit bureau database service to obtain updated phone and address information for each sample member. Cases for whom no phone number was available, either through this process or from an earlier interview, and cases whose updated phone number was subsequently identified as being incorrect, were sent to locating specialists. As figure A indicates, 5,881 cases (53 percent of the initial sample) required this intensive locating while in the telephone center. About half of these cases were eventually completed in the telephone center; the other half were sent to the field, where 429 additional locating problem cases were identified. Despite the large number of cases with locating problems, efforts to locate sample members proved very successful: only 2.7 percent of cases with locating problems (only 1.5 percent of all cases) were never located. Interviews were eventually completed with 86 percent of cases that had ever been identified as having locating problems. However, the refusal rate for cases with locating problems was twice as high as for cases without such problems, suggesting that some locating problems were actually hidden refusals.
Refusal conversion
Although sample members' refusal to participate in the study presented less of a problem in the second follow-up than in the first follow-up, conversion remained difficult. Fifteen percent (1,679) of eligible sample members refused to participate at some time during the second follow-up, compared to 20 percent during B&B:93/94. The majority of these cases (1,415) were first identified as refusals in the telephone center. CATI refusal conversion specialists were able to complete interviews with about one-quarter of these sample members; three-quarters of these cases (1,050) had to be sent to the field, where interviewers could contact sample members in person if necessary. Field interviewers were able to convert an additional 782 reluctant sample members, producing a final response rate of 67 percent among those who had ever refused to participate.
Interviews were completed with 10,093 of the 11,162 eligible B&B:93/97 cases, for a final unweighted response rate of 90.4 percent (table A). Just 1.5 percent of the sample were finalized as unlocatable, while only 2.6 percent of the sample were finalized as refusals. Much of the remaining 5.5 percent nonresponse is attributable to sample members who were either out of the country or not available at any time during the time frame of this follow-up. Among sample members who had refused to participate at some point in the production period, the response rate was lower in 1997 than in 1994 (67 percent versus 74 percent). This might seem to suggest that the hard-to-persuade are becoming more intransigent; however, only 39 B&B:93/97 sample members have been nonrespondents to all three waves of data collection (NPSAS:93, B&B:93/94, and B&B:93/97). For B&B:93/97, in fact, successful interviews were completed with 501 sample members who had been nonrespondents in the first follow-up and 351 sample members who had been nonrespondents in NPSAS:93. The 2.6 percent rate of final refusal in B&B:93/97 compares favorably to the 5.8 percent refusal rate in B&B:93/94.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 1993 Baccalaureate and Beyond Longitudinal Study, Second Follow-up (B&B:93/97). (Originally published as table 5.1 on p. 28 of the complete report from which this article is excerpted.)
Not applicable. *B&B:93/97 discovered 29 deceased eligibles and one ineligible previously undiscovered. NOTE: Due to rounding, details may not add up to 100 percent. SOURCE: U.S. Department of Education, National Center for Education Statistics, Baccalaureate and Beyond Longitudinal Study: 1993/94 First Follow-up Methodology Report (NCES 96-149); and 1993 Baccalaureate and Beyond Longitudinal Study, Second Follow-up (B&B:93/97). (Originally published as table 2.2 on p. 7 of the complete report from which this article is excerpted.)
The B&B panel
For the second follow-up, more interviews were completed than in the first follow-up, despite the fact that 23 of the first follow-up respondents had since died. Table B shows the full response patterns for all 11,192 B&B sample members. This table describes each type of response combination to the three survey rounds (starting with NPSAS:93) and provides frequencies for each description. As shown, a full 83 percent of the sample responded to all three rounds; these 9,274 respondents are classified as the B&B panel.
Response rates by demographic group
While response rates are similar across many demographic subgroups, some distinctive differences exist. Response rates decrease slightly with age (93.1 percent of those under 26 participated, compared to 90.4 percent of those over 30), but participation among males and females is approximately equal. Response rates are also similar among whites, blacks, and American Indians (ranging from 89.5 percent to 91.6 percent) but are substantially lower for Asian/Pacific Islanders (only 82.2 percent) and those identifying themselves as "other" (73.8 percent).
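As a quick check on the figures reported above, the unweighted response rate is simply the number of completed interviews divided by the number of eligible cases. The short sketch below recomputes the overall rate from the counts given in this article; subgroup rates are computed the same way within each subgroup.

```python
# Recompute the overall unweighted B&B:93/97 response rate from the counts
# reported in this article: 10,093 completed interviews out of 11,162 eligible cases.

def unweighted_response_rate(completed: int, eligible: int) -> float:
    """Unweighted response rate, as a percentage of eligible cases."""
    return 100.0 * completed / eligible

print(f"{unweighted_response_rate(10_093, 11_162):.1f}%")   # -> 90.4%

# The same ratio, taken within a demographic subgroup, gives the subgroup rates
# discussed above (e.g., respondents under age 26 versus those over 30).
```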
One can look at several factors in assessing the efficacy of the survey instrument. This report discusses the interview length, the accuracy of interviewer coding when using online coding utilities, and the level of individual item nonresponse.
Interview length
The average length of a completed interview for B&B:93/97 was almost 33 minutes (only 1 minute longer than the average administration time in the first follow-up). Not counting the locating section, which gathered address and telephone numbers for the respondent, parents, and other contacts, the average interviewing time was almost 28 minutes. Looking at administration time by section, clearly the longest section was the one that collected data about employment since the last interview date, with an average time of 11.5 minutes. The next highest administration time (7.5 minutes) was for the final section, which collected demographic, civic participation, household, and debt information, but this is partly because that section contained the largest number of questions. A little over 6 minutes were spent collecting information about postbaccalaureate education and internships, and an average of about 2 minutes was spent gathering data on respondents' teaching experiences.
Online coding accuracy
Interviewers did a fairly good job of using the five online coding programs, and differences in coding accuracy between programs are relatively small. Three of the programs (used to code major field of study, industry, and occupation) required interviewers first to enter brief "verbatim" text supplied by the respondent and then to select from several possible codes suggested by the program. Ten percent of each week's cases were recoded by specially trained coders, who selected a code based on the verbatim text entered by the interviewer. In cases where the verbatim text was sufficient to allow verification, the percentage of incorrect codes selected by the original interviewers ranged from 5.5 percent (for major field of study) to 2.7 and 2.6 percent (for industry and occupation, respectively). Two of the online coding programs (for postsecondary institutions and elementary/secondary schools) involved searching through a multilevel database of states, cities within states, and finally, schools within the selected city. Expert coders examined only those cases where the interviewer entered text because the school could not be found in the coding program. In these cases, the expert coders were asked to judge whether the text entered was sufficiently complete to allow the school to be coded later. About 94 percent of interviewers provided sufficient information to allow coding of postsecondary institutions, while only 76 percent provided that level of information for elementary/secondary schools. It was discovered, however, that respondents had failed to provide the names of 18 percent of the inadequately documented elementary/secondary schools. In a significant portion of the remaining uncodable cases, moreover, the interviewer had not been able to select a city.
Item nonresponse
One of the goals of B&B:93/97 was to reduce item nonresponse, which results from respondents either refusing to answer a question or responding that they are unable to provide an accurate answer. This goal was accomplished by building respondent rapport through a variety of innovative techniques, such as conversational interviewing. Although the number of items with significant rates of nonresponse was reduced in the second follow-up, some items were still answered by fewer than 90 percent of the respondents who were asked. Of the approximately 1,800 questions in the final data set, almost 50 had nonresponse rates over 10 percent. Almost half of these questions were asked of only five or fewer respondents, however, and many were the third or fourth iterations of a looped question. As in the first follow-up (and similar surveys), refusal to answer income and salary questions accounted for a significant proportion of the item nonresponse; nonetheless, the rate of refusal for such questions was lower than in the first follow-up. Items requiring specific dates, such as those for emigration, employment, and school attendance, continued to have a high rate of "don't know" responses, as did items about spouse or partner income or debt.
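The nonresponse screening described above amounts to computing, for each item, the share of respondents who were asked the item but refused or could not answer. The sketch below illustrates that calculation; the reserved codes are assumptions chosen for illustration and do not reflect the actual B&B file layout.

```python
# Illustrative item nonresponse calculation: flag items where more than 10 percent
# of the respondents who were asked either refused or answered "don't know."
# The reserved codes below are assumptions, not the actual B&B data file conventions.

REFUSED, DONT_KNOW = -7, -8    # hypothetical reserved codes for nonresponse
NOT_ASKED = None               # item skipped by the questionnaire routing

def item_nonresponse_rate(values):
    """Percent of asked respondents who refused or could not answer; None if never asked."""
    asked = [v for v in values if v is not NOT_ASKED]
    if not asked:
        return None
    missing = sum(1 for v in asked if v in (REFUSED, DONT_KNOW))
    return 100.0 * missing / len(asked)

# Example: 3 of the 8 respondents who were asked did not answer -> 37.5 percent.
salaries = [52000, REFUSED, NOT_ASKED, 61000, DONT_KNOW, 48000, REFUSED, 39500, 57000]
print(item_nonresponse_rate(salaries))
```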
B&B:93/97 final weights were calculated by making a nonresponse adjustment to the baseline B&B weight calculated for B&B:93/94. This baseline B&B weight, in turn, was an adjustment of the baseline NPSAS:93 weight.4
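The report describes this chain of adjustments but not the formula itself. As a point of reference, the sketch below shows a generic weighting-cell nonresponse adjustment of the kind commonly used for such panels; it illustrates the general technique only and is not the exact B&B:93/97 procedure.

```python
# Generic weighting-cell nonresponse adjustment (an illustration, not the exact
# B&B:93/97 procedure): within each adjustment cell, respondents' weights are
# inflated so that the cell's weighted total over all eligible cases is preserved.
from collections import defaultdict

def nonresponse_adjust(cases):
    """cases: list of dicts with 'cell', 'base_weight', and 'responded' keys.
    Returns adjusted weights, in case order (0.0 for nonrespondents)."""
    cell_total = defaultdict(float)   # sum of base weights over all eligible cases
    cell_resp = defaultdict(float)    # sum of base weights over respondents only
    for c in cases:
        cell_total[c["cell"]] += c["base_weight"]
        if c["responded"]:
            cell_resp[c["cell"]] += c["base_weight"]

    adjusted = []
    for c in cases:
        if c["responded"]:
            adjusted.append(c["base_weight"] * cell_total[c["cell"]] / cell_resp[c["cell"]])
        else:
            adjusted.append(0.0)
    return adjusted

# Tiny example: one cell with a two-thirds response rate; the two respondents'
# weights are inflated by 3/2, so the cell's weighted total (300) is unchanged.
cases = [
    {"cell": "A", "base_weight": 100.0, "responded": True},
    {"cell": "A", "base_weight": 100.0, "responded": True},
    {"cell": "A", "base_weight": 100.0, "responded": False},
]
print(nonresponse_adjust(cases))   # [150.0, 150.0, 0.0]
```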
Design effects
The design effect is defined as the ratio of the variance corrected for the sampling design to the variance based on a simple random sample (SRS). Most complex multistage sampling designs result in a design effect greater than 1; that is, the variance of an estimate is actually larger than the variance would be had the data been based on an SRS. For B&B:93/97, the Taylor Series procedure was used to calculate the standard errors. Standard errors for 30 variables based on B&B:93/97 data were calculated, both for B&B:93/97 respondents and for B&B panel respondents (respondents to all three surveys: NPSAS:93, B&B:93/94, and B&B:93/97). The design effects for these variables were calculated for the entire population and estimated for subgroups by sex, race, and type of school attended. In addition, design effects for the B&B panel, B&B:93/94, and B&B:93/97 were compared for the overall population as well as subgroups. The panel respondents tend to have the lowest design effects, while the mean design effects tend to be highest for B&B:93/94. These are only slight differences, however, since the three sets of design effects are very similar. Researchers who use the Data Analysis System prepared for use with B&B:93/97 will find that the program automatically produces design-corrected standard errors. Researchers using the restricted-use files are cautioned either to use a package (such as SUDAAN or OSIRIS) that can produce the design-corrected standard errors or to adjust the standard errors computed under SRS assumptions (as produced by typical packages such as SPSS or SAS) by multiplying them by the mean root design effect for that subgroup.5
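For researchers working from SRS-based output, the correction described above is simple arithmetic: the design effect is the ratio of the design-based variance to the SRS variance, and an SRS standard error is multiplied by the mean root design effect for the subgroup. The sketch below illustrates the calculation with made-up numbers.

```python
# Arithmetic behind the design-effect correction described above, with made-up
# numbers: DEFF = design-based variance / SRS variance, and an SRS standard error
# is multiplied by the mean root design effect (the mean of the square roots).
import math

def design_effect(var_design: float, var_srs: float) -> float:
    """DEFF: ratio of the design-corrected variance to the simple-random-sample variance."""
    return var_design / var_srs

def mean_root_design_effect(deffs):
    """Mean of the square roots of the design effects for a set of variables."""
    return sum(math.sqrt(d) for d in deffs) / len(deffs)

# Example: three variables in one subgroup, each with (design-based, SRS) variances.
deffs = [design_effect(vd, vs) for vd, vs in [(0.0014, 0.0010), (0.0016, 0.0010), (0.0018, 0.0010)]]
deft = mean_root_design_effect(deffs)   # about 1.26 for DEFFs of 1.4, 1.6, and 1.8
print(0.010 * deft)                     # an SRS standard error of 0.010, corrected upward
```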
Nonresponse bias
To assess whether there are differences between groups in the frequency of refusing to answer particular questions, a subset of the variables used in the examination of design effects was used in a nonresponse bias analysis. The analysis was conducted based on gender, date of interview, and race/ethnicity. No significant differences are evident based on gender; that is, males and females have approximately equal levels of missing data on the items included in this analysis. Significant differences based on date of interview are present for 21 of the 25 variables examined. Cases completed during the April-June period, when most of the CATI data collection took place, have lower levels of missing data than cases completed during the July-December CAPI field period. While it is possible that this represents a mode effect, it seems likely that it results from the fact that the more difficult cases were completed during the CAPI field period, including respondents who had refused to complete an interview over the phone. The analysis based on race and ethnicity shows some small level of nonresponse bias. In t-tests comparing the percentage of valid responses with the percentage of missing responses among white respondents, 13 of the 25 comparisons are significant; for all of these items, whites had high levels of valid data relative to missing data. Missing responses seem to be distributed more heavily among nonwhite than white cases. In conclusion, the overall level of nonresponse in this data file is very low. The response bias noted here is not sufficiently grave to have a major impact on most analyses. However, it is important to note it so that improvements can be made in the next round of data collection.
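The group comparisons described above can be approximated with a simple test on the missing-data proportions. The sketch below uses a two-proportion z test as a stand-in for the t tests reported in the complete report, with made-up counts; it is meant only to show the shape of the comparison.

```python
# Approximate check for nonresponse bias between two groups on a single item,
# using a two-proportion z test as a stand-in for the t tests described above.
# The counts are made up for illustration.
import math

def missing_rate_z(miss_a: int, n_a: int, miss_b: int, n_b: int) -> float:
    """z statistic comparing the proportion of missing responses in groups A and B."""
    p_a, p_b = miss_a / n_a, miss_b / n_b
    p_pool = (miss_a + miss_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Example: 3 percent missing among 7,000 respondents in one group versus 5 percent
# among 3,000 in another; |z| > 1.96 would flag the difference as significant.
print(missing_rate_z(210, 7_000, 150, 3_000))
```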
Footnotes
1 NPSAS:93 employed a stratified two-stage sample design with postsecondary institutions as the first-stage unit and students within schools as the second stage. For details on the NPSAS:93 sample design, see Loft et al. (1995).
2 For details on the B&B:93/94 sample design, see Green et al. (1996).
3 The 29 out-of-scope cases were sample members who had died since 1993; 1 case was identified as ineligible when it was determined that the respondent had never received a baccalaureate degree.
4 Documentation of NPSAS:93 sample development and weights calculation can be found in Whitmore, Traccarella, and Iannacchione (1995), while details on the development of weights for B&B:93/94 can be found in Green et al. (1996).
5 For tables of design effects and standard errors, see the complete report.
References
Green, P.J., Myers, S.L., Giese, P., Law, J., Speizer, H.M., and Staebler Tardino, V. (1996). Baccalaureate and Beyond Longitudinal Study: 1993/94 First Follow-up Methodology Report (NCES 96-149). U.S. Department of Education. Washington, DC: U.S. Government Printing Office.
Loft, J.D., Riccobono, J.A., Fitzgerald, R.A., and Malizio, A.G. (1995). Methodology Report for the 1993 National Postsecondary Student Aid Study (NCES 95-211). U.S. Department of Education. Washington, DC: U.S. Government Printing Office.
Whitmore, R.W., Traccarella, M.A., and Iannacchione, V.G. (1995). Sampling Design and Weighting Report for the 1993 National Postsecondary Student Aid Study. Research Triangle Park, NC: Research Triangle Institute.
For additional technical information, see the complete report:
Green, P., Myers, S., Veldman, C., and Pedlow, S. (1999). Baccalaureate and Beyond Longitudinal Study: 1993/97 Second Follow-up Methodology Report (NCES 1999-159).
Author affiliations: P. Green, S. Myers, C. Veldman, and S. Pedlow, National Opinion Research Center (NORC) at the University of Chicago.
For questions about content, contact Paula Knepper (paula.knepper@ed.gov).
To obtain the complete report (NCES 1999-159), call the toll-free ED Pubs number (877-433-7827), visit the NCES Web Site (http://nces.ed.gov), or contact GPO (202-512-1800).