Education Statistics Quarterly
Vol 4, Issue 3, Topic: Methodology
1999 National Study of Postsecondary Faculty (NSOPF:99) Methodology Report
By: Sameer Y. Abraham, Darby Miller Steiger, Margrethe Montgomery, Brian D. Kuhr, Roger Tourangeau, Bob Montgomery, and Manas Chattopadhyay
 
This article was originally published as the Executive Summary of the Technical Report of the same name. The sample survey data are from the NCES National Study of Postsecondary Faculty (NSOPF).
 
 

Introduction

The 1999 National Study of Postsecondary Faculty (NSOPF:99) serves a continuing need for data on faculty and other instructional staff,1 all of whom directly affect the quality of education in postsecondary institutions. Faculty determine curriculum content, performance standards for students, and the quality of students' preparation for careers. In addition, faculty perform research and development work upon which the nation's technological and economic advancement depend. For these reasons, it is essential to understand who they are; what they do; and whether, how, and why the nation's faculty are changing.


Target Population and Sample Design

NSOPF:99 utilized a sample of 960 institutions and 28,576 full- and part-time faculty employed at these institutions. The sample was designed to allow detailed comparisons and high levels of precision at both the institution and faculty levels. The sampled institutions represent all public and private not-for-profit Title IV-participating, degree-granting institutions in the 50 states and the District of Columbia.

Both the sample of institutions and the sample of faculty were stratified, systematic samples. The institution sample was stratified by Carnegie classifications that were aggregated into fewer categories. The faculty sample was stratified by gender and race/ethnicity.
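
The report does not give the selection algorithm itself; as a generic illustration of how a stratified, systematic sample can be drawn, the sketch below sorts the frame into strata, picks a random start within each stratum, and steps through at a fixed interval. The record layout, stratum labels, and sample sizes are hypothetical.

```python
import random

def systematic_sample(frame, n):
    """Draw a systematic sample of size n from an ordered frame:
    choose a random start in [0, k) and take every k-th unit,
    where k = len(frame) / n is the sampling interval."""
    k = len(frame) / n
    start = random.random() * k
    return [frame[int(start + i * k)] for i in range(n)]

def stratified_systematic_sample(frame, stratum_of, sizes):
    """Draw a systematic sample of the requested size within each
    stratum, where stratum_of maps a unit to its stratum label."""
    sample = []
    for stratum, n in sizes.items():
        units = [u for u in frame if stratum_of(u) == stratum]
        sample.extend(systematic_sample(units, n))
    return sample

# Hypothetical usage: faculty records in two made-up strata.
faculty = [{"id": i, "stratum": ("A" if i % 3 else "B")} for i in range(3000)]
picked = stratified_systematic_sample(faculty, lambda u: u["stratum"], {"A": 40, "B": 20})
```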

The sample for NSOPF:99 was selected in three stages. In the initial stage, 960 postsecondary institutions were selected from the 1997–98 Integrated Postsecondary Education Data System (IPEDS) Institutional Characteristics (IC) data files and the 1997 and 1995 IPEDS Fall Staff files.2 Each sampled institution was asked to provide a list of all of the full- and part-time faculty that the institution employed during the 1998 fall term, and 819 institutions provided such a list.

In the second stage of sampling, 28,576 faculty were selected from the lists provided by the institutions. Of these sample members, 1,532 were determined to be ineligible for NSOPF:99 because they were not employed by the sampled institution during the 1998 fall term, leaving a sample of 27,044 faculty.

A third stage of sampling occurred in the final phases of data collection. In order to increase the response rate, a subsample of the faculty who had not responded was selected for intensive follow-up efforts. Others who had not responded were eliminated from the sample, resulting in a final sample of 19,213 eligible faculty.
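
The report does not spell out the subsampling mechanism; the sketch below assumes a simple random subsample of nonrespondents at a hypothetical rate, with the standard inverse-probability weight adjustment so that the retained cases stand in for all nonrespondents.

```python
import random

def subsample_nonrespondents(nonrespondents, rate):
    """Keep a random fraction `rate` of nonrespondents for intensive
    follow-up; the rest are dropped from the active sample. Each
    retained unit's weight is inflated by 1/rate so the subsample
    still represents all nonrespondents."""
    retained = []
    for unit in nonrespondents:
        if random.random() < rate:
            unit["weight"] /= rate  # inverse-probability adjustment
            retained.append(unit)
    return retained
```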



Data Collection Design and Outcomes

NSOPF:99 involved a multistage effort to collect data from sampled faculty. At the same time that institutions were asked to provide a list of all their faculty and instructional staff (as described above), they were also asked to complete a questionnaire about their policies on tenure, benefits, and other matters. Counts of full-time and part-time faculty were also requested on the questionnaire. Prior to sampling faculty from the lists provided by the institutions, counts of faculty on the lists were compared with counts on the questionnaires. If no questionnaire data were provided, the list counts were compared to the prior year's IPEDS data. If a discrepancy of more than 5 percent existed, intensive follow-up was conducted to resolve the inconsistency, as sketched below. Once an institution's list was determined to be accurate and complete, faculty were sampled from the list and invited to participate in the study. Intensive locating was performed to ensure that an updated home or campus address was available for each sample member.
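
The 5 percent rule amounts to a relative-difference check on the two counts. A minimal sketch, assuming the questionnaire count as the comparison base (the report does not specify which count served as the denominator):

```python
def needs_followup(list_count, questionnaire_count, tolerance=0.05):
    """Flag an institution for intensive follow-up when its faculty-list
    count and questionnaire count differ by more than `tolerance`,
    measured here relative to the questionnaire count (an assumption)."""
    if questionnaire_count == 0:
        return list_count != 0
    return abs(list_count - questionnaire_count) / questionnaire_count > tolerance
```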

Institution data collection

Institutional recruitment began in September 1998 when the Chief Administrative Officer (CAO) for each sampled institution was asked to designate an institution coordinator, who would be responsible for providing both the list of faculty and the institution questionnaire. The institution coordinator was then mailed a complete data collection packet, including both the institution questionnaire and instructions for compiling the list of faculty. The coordinator had the option of completing the questionnaire via the Internet or returning a paper questionnaire. The list of faculty could be provided in any format; institutions were encouraged to provide the list in an electronic format, if possible. Follow-up with coordinators was conducted via telephone, mail, and e-mail. The field period for list and institution questionnaire collection spanned approximately 54 weeks.

Of the 959 institutions that were determined to be eligible to participate in NSOPF:99, a total of 819 institutions provided lists of their faculty and instructional staff, resulting in an unweighted participation rate of 85.4 percent. A total of 865 institutions returned the institution questionnaire, resulting in an unweighted questionnaire response rate of 90.2 percent.

Faculty data collection

Because lists of faculty were received on a rolling basis, faculty were sampled in seven waves. Data collection for wave 1 began in February 1999, and data collection for wave 7 began in December 1999. Sampled faculty were given the option of completing a paper questionnaire and returning it by mail or completing the questionnaire via the Internet. Sampled faculty in each wave received a coordinated series of mail, e-mail, and telephone follow-up contacts, including as many as two additional mailings of the questionnaire and six e-mail reminders. Telephone follow-up included telephone prompting to encourage self-administration, followed by computer-assisted telephone interviewing (CATI) for nonresponding faculty.

Of the final sample of 19,213 faculty who were determined to be eligible to participate in NSOPF:99, about 17,600 completed the faculty questionnaire, resulting in a weighted response rate of 83.2 percent. This response rate takes into account the reduction of the active sample through subsampling, as described earlier.
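
A weighted response rate of this kind is typically the summed weights of respondents divided by the summed weights of all eligible sample members, with the subsampling reflected in the weights. A minimal sketch with hypothetical field names, not the study's actual weighting procedure:

```python
def weighted_response_rate(sample):
    """Weighted response rate: the summed weights of responding
    eligible cases over the summed weights of all eligible cases.
    Assumes each case carries `eligible`, `responded`, and `weight`
    fields (assumed names), with weights already adjusted for the
    nonrespondent subsampling described earlier."""
    eligible = [u for u in sample if u["eligible"]]
    total = sum(u["weight"] for u in eligible)
    responded = sum(u["weight"] for u in eligible if u["responded"])
    return responded / total
```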



Quality Control

Quality control procedures were implemented for receiving faculty list data and processing it for sampling, monitoring the receipt of completed questionnaires, preparing paper questionnaires for data entry, editing paper questionnaires for overall adequacy and completeness, entering the data, flagging cases with missing or inconsistent data through automated consistency checks, coding responses, checking data entry, and preparing questionnaires, lists, and other documentation for archival storage.



Data Quality

Item nonresponse

One measure of data quality is item nonresponse rates. Item nonresponse occurs when a respondent does not complete a questionnaire item. Item nonresponse creates two problems for survey analysts. First, it reduces the sample size and thus increases sampling variance. This happens when respondents must be eliminated from the sample that is used for analyses because they failed to respond to a large percentage of the questionnaire items. As a result, insufficient sample sizes may hinder certain analyses such as subgroup comparisons. Second, item nonresponse may give rise to nonresponse bias. To the extent that the missing data for a particular item differ from the reported data for that item, the reported data are unrepresentative of the survey population. Item nonresponse is also worth examining because it can signal items that respondents had difficulty answering.

Item nonresponse rates were calculated by dividing the total number of missing responses to a question by the number of respondents eligible to respond to that item (n). The standard error of the item nonresponse rate (SE) equals the square root of RATE*(1 - RATE)/n. In general, this means that the larger the number of eligible respondents for a particular question and the further the nonresponse rate is from .5, the lower the standard error. Because these estimates were conditional on selection into the sample and do not represent population estimates, the standard errors for item nonresponse rates were, for simplicity's sake, modeled as though the sample were a simple random sample. For questions containing multiple subitems, each subitem was counted as a unique question.
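
Expressed in the notation above, the calculation is straightforward. This sketch assumes None marks a missing answer, rather than the study's actual data structures:

```python
import math

def item_nonresponse(responses):
    """Item nonresponse rate and its standard error under the
    simple-random-sample approximation described above:
    RATE = missing / n and SE = sqrt(RATE * (1 - RATE) / n),
    where n is the number of respondents eligible for the item
    and None marks a missing answer (an assumed convention)."""
    n = len(responses)
    missing = sum(1 for r in responses if r is None)
    rate = missing / n
    se = math.sqrt(rate * (1 - rate) / n)
    return rate, se

# Example: 6 missing answers among 200 eligible respondents.
rate, se = item_nonresponse([None] * 6 + ["answered"] * 194)  # rate = .03
```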

The mean item nonresponse rate for the institution questionnaire was 3.4 percent (SE=.004). Overall, the item nonresponse rate for the faculty questionnaire was 6.2 percent. More than half of the items on the faculty questionnaire (55 percent) had an item nonresponse rate of less than 5 percent, 25 percent had rates between 5 and 10 percent, and 20 percent had rates greater than 10 percent.

Discrepancies in faculty counts

Another measure of data quality is the magnitude of discrepancies between the faculty counts on the lists and those on the questionnaires provided by institutions. When institutions provided discrepant data, they tended to report more faculty on the questionnaire than on the list. As was detected in earlier rounds of NSOPF, some institutions had difficulty generating lists of part-time faculty. Without discrepancy checks, this can result in serious coverage error, with part-time faculty given less opportunity to participate in NSOPF:99. Similarly, earlier cycles of NSOPF indicated that some institutions were less likely to include medical faculty on their lists. Special reminders were inserted into the list collection instructions to encourage institutions to include part-time and medical faculty. In addition, a rigorous check was conducted to ensure the completeness of the faculty lists, with intensive follow-up if needed.

Nearly 43 percent of the institutions returning both a questionnaire and a list provided identical data on both. An additional 30 percent had discrepancies of 10 percent or less. Thus, roughly 73 percent of institutions provided data with a discrepancy of 10 percent or less. This stands in marked contrast to the previous cycle of NSOPF, where only 42 percent had discrepancies of 10 percent or less.



Footnotes

1In the interest of brevity, this report uses the term "faculty" interchangeably with "faculty and other instructional staff."

2Information about IPEDS, as well as data and publications, can be found on the Internet at http://nces.ed.gov/ipeds/.


Data source: The NCES 1999 National Study of Postsecondary Faculty (NSOPF:99).

For technical information, see the complete report:

Abraham, S.Y., Steiger, D.M., Montgomery, M., Kuhr, B.D., Tourangeau, R., Montgomery, B., and Chattopadhyay, M. (2002). 1999 National Study of Postsecondary Faculty (NSOPF:99) Methodology Report (NCES 2002–154).

Author affiliations: S.Y. Abraham, D.M. Steiger, M. Montgomery, B.D. Kuhr, R. Tourangeau, B. Montgomery, and M. Chattopadhyay, The Gallup Organization.

For questions about content, contact Linda J. Zimbler (linda.zimbler@ed.gov).

To obtain the complete report (NCES 2002–154), call the toll-free ED Pubs number (877–433–7827) or visit the NCES Electronic Catalog (http://nces.ed.gov/pubsearch).

