National Study of Postsecondary Faculty (NSOPF)


Target Population

Since NSOPF:99, the target population has consisted of all public and private not-for-profit, Title IV-participating, 2- and 4-year degree-granting institutions in the 50 states and the District of Columbia that offer programs designed for high school graduates and are open to persons other than employees of the institution, together with the faculty and instructional staff employed in these institutions. The NSOPF:93 and NSOPF:88 institution-level populations included postsecondary institutions whose accreditation at the college level was recognized by the U.S. Department of Education. The NSOPF:88 faculty-level population included only instructional faculty, but the study also targeted department chairpersons.

Sample Design

NSOPF:04 used a two-stage sample design, with a sample of 1,080 institutions selected for participation in the first stage, of which 1,070 were eligible and 890 provided a faculty list suitable for sampling. In the second stage, a total of 35,630 faculty were sampled from participating institutions. Of these, 34,330 were eligible.

The institution frame was constructed from the Winter 2001–02 IPEDS data file. Institutions were partitioned into institutional strata based on institutional control, highest level of offering, and Carnegie classification.

The sample of institutions was selected with probability proportional to size (PPS) based on the number of faculty and students at each institution.

In the faculty-level stage of sampling, faculty were grouped into strata based on race/ethnicity, gender, and employment status. Furthermore, the faculty sample was implicitly stratified by academic field. Stratifying the faculty in this way allowed for the oversampling of relatively small subpopulations (such as members of Black, Hispanic, and other ethnic/racial groups) in order to increase the precision of the estimates for these groups. The selection procedure allowed the sample sizes to vary across institutions, but minimized the variation in the weights within the staff-level strata: the sampling fractions for each sample institution were made proportional to the institution weight.
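The PPS selection described above can be illustrated with a short sketch of systematic PPS sampling over cumulated size measures. This is an illustrative routine under stated assumptions, not the actual NSOPF sampling code; the function name and the faculty counts are hypothetical.

```python
import numpy as np

def pps_systematic_sample(sizes, n, rng):
    """Illustrative systematic PPS: cumulate the size measures, pick a
    random start, and take n equally spaced selection points."""
    cum = np.cumsum(np.asarray(sizes, dtype=float))
    interval = cum[-1] / n                        # sampling interval
    points = rng.uniform(0, interval) + interval * np.arange(n)
    # each point falls in exactly one unit's cumulated-size interval
    return np.searchsorted(cum, points, side="right")

# hypothetical frame: faculty counts for 8 institutions
faculty_counts = [120, 45, 300, 60, 500, 80, 150, 30]
chosen = pps_systematic_sample(faculty_counts, 3, np.random.default_rng(1))
```

Note that a unit whose size measure exceeds the sampling interval is selected with certainty under this scheme, which is why PPS designs favor large institutions.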

The sample for NSOPF:99 was selected in three stages. Both the first-stage sample of institutions and the second-stage sample of faculty were stratified, systematic samples. In the initial stage, 960 postsecondary institutions were selected from the 1997–98 Integrated Postsecondary Education Data System (IPEDS) Institutional Characteristics (IC) data files and the 1997 and 1995 IPEDS Fall Staff files. Each sampled institution was asked to provide a list of all of the full- and part-time faculty that the institution employed during the 1998 fall term, and 819 institutions provided such a list. In the second stage of sampling, some 28,580 faculty were selected from the lists provided by the institutions. Over 1,500 of these sample members were determined to be ineligible for NSOPF:99, as they were not employed by the sampled institution during the 1998 fall term, resulting in a sample of 27,040 faculty. A third stage of sampling occurred in the final phases of data collection. In order to increase the response rate and complete data collection in a timely way, a subsample of the faculty who had not responded was selected for intensive follow-up efforts. Others who had not responded were eliminated from the sample, resulting in a final sample of 19,210 eligible faculty.

NSOPF:93 was conducted with a sample of 970 postsecondary institutions (public and private, not-for-profit 2- and 4-year institutions whose accreditation at the college level was recognized by the U.S. Department of Education) in the first stage and 31,350 faculty sampled from institution faculty lists in the second stage. Institutions were selected from IPEDS and then classified into 15 strata by school type, based on their Carnegie Classifications. The strata were (1) private, other Ph.D. institution (not defined in any other stratum); (2) public, comprehensive; (3) private, comprehensive; (4) public, liberal arts; (5) private, liberal arts; (6) public, medical; (7) private, medical; (8) private, religious; (9) public, 2-year; (10) private, 2-year; (11) public, other type (not defined in any other stratum); (12) private, other type (not defined in any other stratum); (13) public, unknown type; (14) private, unknown type; and (15) public, research; private, research; and public, other Ph.D. institution (not defined in any other stratum). Within each stratum, the institutions were further sorted by school size. Of the 960 eligible institutions, 820 (85 percent) provided lists of faculty. The selection of faculty within each institution was random except for the oversampling of the following groups: Blacks (both non-Hispanics and Hispanics); Asians/Pacific Islanders; faculty in disciplines specified by the National Endowment for the Humanities; and full-time female faculty.

NSOPF:88 was conducted with a sample of 480 institutions (including 2-year, 4-year, doctoral-granting, and other colleges and universities), some 11,010 faculty, and more than 3,000 department chairpersons. Institutions were sampled from the 1987 IPEDS universe and were stratified by modified Carnegie Classifications and size (faculty counts). These strata were (1) public, research; (2) private, research; (3) public, other Ph.D. institution (not defined in any other stratum); (4) private, other Ph.D. institution (not defined in any other stratum); (5) public, comprehensive; (6) private, comprehensive; (7) liberal arts; (8) public, 2-year; (9) private, 2-year; (10) religious; (11) medical; and (12) “other” schools (not defined in any other stratum). Within each stratum, institutions were randomly selected. Of the 480 institutions selected, 450 (94 percent) agreed to participate and provided lists of their faculty and department chairpersons. Within 4-year institutions, faculty and department chairpersons were stratified by program area and randomly sampled within each stratum; within 2-year institutions, simple random samples of faculty and department chairpersons were selected; and within specialized institutions (religious, medical, etc.), faculty samples were randomly selected (department chairpersons were not sampled). At all institutions, faculty were also stratified on the basis of employment status—full-time and part-time. Note that teaching assistants and teaching fellows were excluded in NSOPF:88.


Data Collection and Processing

NSOPF:04 allowed institution coordinators (ICs) to upload lists of faculty and instructional staff and to complete the Institution Questionnaire online. Institutions were also given the option of responding by telephone, though a web response was preferred. Faculty and instructional staff were allowed to participate via a self-administered web-based questionnaire or an interviewer-administered computer-assisted telephone interview (CATI). Follow-up with ICs and with faculty was conducted by telephone, mail, and e-mail.

NSOPF:99 allowed sample members to complete a self-administered paper questionnaire and mail it back or to complete the questionnaire online. Follow-up activities included e-mails, telephone prompting, and, for nonresponding faculty, CATI. As part of the study, an experiment was conducted to determine if small financial incentives could increase use of the web-based version of the questionnaire. Previously, NSOPF was a mailout/mailback survey with telephone follow-up.

NSOPF:88 was conducted by SRI International; NSOPF:93 by the National Opinion Research Center (NORC) at the University of Chicago; NSOPF:99 by The Gallup Organization; and NSOPF:04 by RTI International.

Reference Dates. Most of the information collected in NSOPF pertains to the fall term of the academic year surveyed. For NSOPF:04, the fall term was defined as the academic term containing November 1, 2003. The Institution Questionnaire also asked about the number of full-time faculty/instructional staff considered for tenure in the 2003–04 academic year. The NSOPF:04 Faculty Questionnaire asked faculty and instructional staff about the year they began their first faculty or instructional staff position at a postsecondary institution; the number of presentations and publications during their entire career and, separately, the number during the last 2 years; and their gross compensation and household income in calendar year 2003. Similarly, NSOPF:99, NSOPF:93, and NSOPF:88 requested most information for the 1998, 1992, and 1987 fall term, respectively, but included some questions requiring retrospective or prospective responses.


Data Collection. The NSOPF:04 data collection offered both a CATI and a web-based version of the Institution and Faculty questionnaires, with mail, telephone, and e-mail follow-up. Some 1,070 eligible institutions sampled for the 2004 National Study of Faculty and Students (NSoFaS:04) were recruited to participate in both components of NSoFaS:04 (NSOPF:04 and NPSAS:04). The fielding of NSOPF:04 and NPSAS:04 together as NSoFaS:04 was one of three changes made in the institution contacting procedures for this cycle of NSOPF. The second change was to administer the Institution Questionnaire as a web or CATI instrument, with no hard-copy equivalent. The third change was to begin recruiting institutions and initiating coordinator contacts in March 2003, a full 8 months prior to the November reference date for the fall term and 5 to 6 months earlier than the September start dates of previous cycles. This change was prompted by the need to draw a faculty sample and subsequently contact sampled faculty for participation prior to the 2004 summer break.

The data collection procedure started in March 2003 with a cover letter and a set of pamphlets on NSoFaS, NSOPF, and NPSAS being sent to the institution’s Chief Administrator (CA) as an introduction to the study. Study personnel then followed up with the CA by telephone, asking him or her to name an IC. An information packet was then sent to the IC. Each IC was then asked to complete a Coordinator Response Form to confirm that the institution could supply the faculty list within stated schedule constraints. ICs who indicated that a formal review process was needed before their institution would participate were forwarded additional project materials as appropriate.

A binder containing complete instructions for NSOPF:04, as well as a request for a faculty/instructional staff list, was sent to ICs in September 2003. ICs were asked to complete the Institution Questionnaire using the study’s website. Data collection for the Institution Questionnaire ended in October 2004.

In the NSOPF:04 full-scale study, the faculty data collection began with introductory materials being sent to sample members via first-class mail as well as e-mail. The letter included instructions for completing the self-administered questionnaire on the Internet or by calling a toll-free number to complete a telephone interview. After an initial 4-week period, telephone interviewers began calling sample members. An early-response incentive, designed to encourage sample members to complete the self-administered questionnaire prior to outgoing CATI calls, was offered to sample members who completed the questionnaire within 4 weeks of the initial mailing. Incentives were also offered to selected sample members as necessary (i.e., those who refused to complete the questionnaire and other nonrespondents).

The NSOPF:99 data collection offered both a paper and a web version of the Institution and Faculty questionnaires, with telephone (including CATI) and e-mail follow-up. The data collection procedure started with a prenotification letter to the institution’s CA to introduce him or her to the study and secure the name of an appropriate individual to serve as the IC. The data collection packet was then mailed directly to the IC.

The packet contained both the Institution Questionnaire and the faculty list collection packet. The IC was asked to complete and return all materials at the same time. The mailing was timed to immediately precede the November 1, 1998, reference date for the fall term.

The field period for the NSOPF:99 faculty data collection extended from February 1999 through March 2000. Questionnaires were mailed to faculty in waves, as lists of faculty and instructional staff were received, processed, and sampled. Questionnaires were accompanied by a letter that provided the web address and a unique access code to be used to access the web questionnaire. The first wave of questionnaires was mailed on February 4, 1999; the seventh and final wave was mailed on December 1, 1999. Faculty sample members in each wave received a coordinated series of mail, e-mail, and telephone follow-ups. Mail follow-up for nonrespondents included a postcard and up to four questionnaire re-mailings; these were mailed to the home address of the faculty member if provided by the institution. E-mail prompts were sent to all faculty for whom an e-mail address was provided; faculty received as many as six e-mail prompts. Telephone follow-up consisted of initial prompts to complete the mail or web questionnaire. A CATI was scheduled for nonrespondents to the mail, e-mail, and telephone prompts.

The following efforts were made for the NSOPF:93 institution data collection: initial questionnaire mailing, postcard prompting, second questionnaire mailing, second postcard prompting, telephone prompting, third questionnaire mailing, and telephone interviewing. Similarly, the NSOPF:93 faculty data collection used an initial questionnaire mailing, postcard prompting, second questionnaire mailing, third questionnaire mailing, telephone prompting, and CATI. In both collections, institutions and faculty who missed critical items and/or had inconsistent or out-of-range responses were identified for data retrieval. Extra telephone calls were made to retrieve these data.

Data collection procedures for NSOPF:88 involved three mailouts for both the Institution Questionnaire and the Department Chairperson Questionnaire, and two mailouts and one CATI interview for the Faculty Questionnaire.


Data Processing. The NSoFaS:04 website was used for both NSOPF:04 and NPSAS:04. For institutions, it was a central repository for all study documents and instructions. It allowed for the uploading of electronic lists of faculty and instructional staff. In addition, it housed the Institution Questionnaire for the IC to complete online.

For NSOPF:04, institutions were asked to provide a single, unduplicated (i.e., with duplicate entries removed) electronic list of faculty in any commonly used and easily processed format (e.g., ASCII fixed field, comma delimited, spreadsheet format). However, as in previous cycles, paper lists were accepted, as were multiple files (e.g., separate files of full- and part-time faculty) and lists in electronic formats that did not lend themselves to electronic processing (such as word processing formats). For the first time, institutions were given the option of transmitting their electronic faculty lists via a secure upload to the NSoFaS:04 website and were encouraged to do so. (In previous cycles, direct upload was available only by file-transfer protocols, an option that few institutions utilized.) Institutions were also given the option of sending a CD-ROM or diskette containing the list data or sending the list via e-mail (as an encrypted file, if necessary).

Follow-up with ICs was conducted by telephone, mail, and e-mail. As faculty lists were received, they were reviewed for completeness, readability, and accuracy. Additional follow-up to clarify the information provided or retrieve missing information was conducted by the institution contactors as necessary. For institutions lacking the resources to provide a complete list of full- and part-time faculty and instructional staff, list information was, if possible, abstracted from course catalogs, faculty directories, and other publicly available sources. Faculty lists abstracted in this fashion were reviewed for completeness against IPEDS before being approved for sampling.

Institution Questionnaire follow-up was conducted simultaneously with follow-up for lists of faculty. If an institution was unable to complete the questionnaire online, efforts were made to collect the information by telephone. To expedite data collection, missing questionnaire data were, in some instances, abstracted directly from benefits and policy documentation supplied by the institution or from information publicly available on the institution’s website.

For the faculty data collection, NSOPF:04 also utilized a mixed-mode data collection methodology that allowed sample members to participate via a web-based self-administered questionnaire or via CATI. The NSOPF:04 faculty instrument was designed to minimize potential mode effects by using a single instrument for both self-administration and CATI interviews. Four weeks after the release of the web-based questionnaire, nonrespondents were followed up to conduct a CATI interview.

Faculty lists and questionnaire data were evaluated by the project staff for quality, item nonresponse, item mode effects, break-offs, coding, quality control monitoring of interviewers, and interviewer feedback.

In NSOPF:99, each of the three modes of questionnaire administration required separate systems for data capture. All self-administered paper questionnaires were optically scanned. The system was programmed so that each character was read and assigned a confidence level. All characters with less than a 100 percent confidence level were automatically sent to an operator for manual verification. The contractor verified the work of each operator and the recognition engines on each batch of questionnaires to ensure that the quality assurance system was working properly. Also, 100 percent of written-out responses (as opposed to check marks) were manually verified.

Each web respondent was assigned a unique access code, and respondents without a valid access code were not permitted to enter the website. A respondent could return to the survey website at a later time to complete a survey that was left unfinished in an earlier session. When respondents entered the website using the access code, they were immediately taken to the same point in the survey item sequence that they had reached during their previous session. If respondents, re-using an access code, returned to the website at a later time after completing the survey in a previous session, they were not allowed access to the completed web survey data record. Responses to all web-administered questionnaires underwent data editing, imputation, and analysis.

All telephone interviews used CATI technology. The CATI program was altered from the paper questionnaire to ensure valid codes, perform skip patterns automatically, and make inter-item consistency checks where appropriate. The quality control program for CATI interviewing included project-specific training of interviewers, regular evaluation of interviewers by interviewing supervisors, and regular monitoring of interviewers.

NSOPF:93 used both computer-assisted data entry (CADE) and CATI. The CADE/CATI systems were designed to ensure that all entries conformed to valid ranges of codes; enforce skip patterns automatically; conduct inter-item consistency checks, where appropriate; and display the full question-and-answer texts for verbatim responses. As part of the statistical quality control program, 100 percent verification was conducted on a randomly selected subsample of 10 percent of all Institution and Faculty questionnaires entered in CADE. The error rate was less than 0.5 percent for all items keyed. Quality assurance for CATI faculty interviews consisted of random online monitoring by supervisors.


Editing and Coding. For the study in general, a large part of the data editing and coding was performed in the data collection instruments, including range edits; across-item consistency edits; and coding of fields of teaching, scholarly activities, and highest degree. During and following data collection, the data were reviewed to confirm that the data collected reflected the intended skip-pattern relationships. At the conclusion of the data collection, special codes were inserted in the database to reflect the different types of missing data.

The data cleaning and editing process in NSOPF:04 consisted of the following steps:

  • Review of one-way frequencies for every variable to confirm that there were no missing or blank values and to check for reasonableness of values. This involved replacing blank or missing data with -9 for all variables in the instrument database and examining frequencies for reasonableness of data values.
  • Review of two-way cross-tabulations between each gate-nest combination of variables to check data consistency. Gate variables are items that determine subsequent instrument routing. Nest variables are items that are asked or not asked, depending on the response to the gate question. Legitimate skips were identified using the interview programming code as specifications to define all gate-nest relationships and replace -9 (missing values that were blank because of legitimate skips) with -3 (legitimate skip code). Additional checks ensured that the legitimate skip code was not overwriting valid data and that no skip logic was missed. In addition, if a gate variable was missing (-9), the -9 was carried through the nested items.
  • Identify and code items that were not administered due to a partial or abbreviated interview. This code replaced -9 values with -7 (item not administered) based on the section completion and abbreviated interview indicators.
  • Recode “don’t know” responses to missing. This code replaced -1 (don’t know) values with -9 (missing) for later stochastic imputation. For selected items for which “don’t know” seemed like a reasonable response, variables were created both with and without the “don’t know” category.
  • Identify items requiring recoding. During this stage, previously uncodable values (e.g., text strings) collected in the various coding systems were upcoded, if possible.
  • Identify items requiring range edits, logical imputations, and data corrections. Descriptive statistics for all continuous variables were examined. Values determined to be out-of-range were either coded to the maximum (or minimum) reasonable value or set to missing for later imputation. Logical imputations were implemented to assign values to legitimately skipped items whose values could be implicitly determined from other information provided. Data corrections were performed where there were inconsistencies between responses given by the sample member.   
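The gate-nest and reserved-code logic in the steps above can be sketched in a few lines. This is a simplified illustration: the variable names (teaches, credit_hours, num_courses) and the convention that a gate response of 0 means the nested items were skipped are hypothetical, not taken from the NSOPF instrument.

```python
# Reserved codes as described above: -9 missing, -3 legitimate skip,
# -7 not administered, -1 "don't know"
MISSING, LEGIT_SKIP, NOT_ADMIN, DONT_KNOW = -9, -3, -7, -1

def apply_gate_nest_edits(record, gate, nested):
    """If the gate item routed the respondent around its nested items,
    turn blank (-9) nested values into legitimate skips (-3); if the
    gate itself is missing, -9 is simply carried through the nest."""
    for item in nested:
        if record[item] == MISSING and record[gate] != MISSING:
            if record[gate] == 0:          # hypothetical "no" response
                record[item] = LEGIT_SKIP
    # recode "don't know" to missing for later stochastic imputation
    for item in record:
        if record[item] == DONT_KNOW:
            record[item] = MISSING
    return record

# hypothetical gate ("teaches") with two nested items
rec = apply_gate_nest_edits(
    {"teaches": 0, "credit_hours": -9, "num_courses": -1},
    "teaches", ["credit_hours", "num_courses"])
```

Here the blank nested item becomes a legitimate skip (-3) because the gate was answered, while the “don’t know” response is recoded to -9 for imputation.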


Estimation Methods

Weighting was used in NSOPF to adjust for sampling and unit nonresponse at both the institution and faculty levels. Imputation was performed to compensate for item nonresponse.

Weighting. In NSOPF:04, three weights were computed: full-sample institution weights, full-sample faculty weights, and a contextual weight (to be used in “contextual” analyses that simultaneously include variables drawn from the Faculty and Institution questionnaires). The formulas representing the construction of each of these weights are provided in the 2004 National Study of Postsecondary Faculty (NSOPF:04) Methodology Report (Heuer et al. 2005).

NSOPF:99 used weighting procedures similar to those used in NSOPF:04. For details on these procedures, see the 1999 National Study of Postsecondary Faculty (NSOPF:99) Methodology Report (Abraham et al. 2002).

The weighting procedures used in NSOPF:93 and NSOPF:88 are described below.

NSOPF:93. Three weights were computed for the NSOPF:93 sample—first-stage institution weights, final institution weights, and final faculty weights. The first-stage institution weights accounted for the institutions that participated in the study by submitting a faculty list that allowed faculty members to be sampled. The two final weights—weights for the sample faculty and for institutions that returned the Institution Questionnaire—were adjusted for nonresponse. The final faculty weights were poststratified to the “best” estimates of the number of faculty. The “best” estimates were derived following reconciliation and verification through recontact with a subset of institutions that had discrepancies of 10 percent or more between the total number enumerated in their faculty list and Institution Questionnaire. For more information on the reconciliation effort, see “Measurement Error” (in section 5). For more information on the calculation of the “best” estimates of faculty, see the 1993 National Study of Postsecondary Faculty Methodology Report (Selfa et al. 1997).

NSOPF:88. The NSOPF:88 sample was weighted to produce national estimates of institutions, faculty, and department chairpersons by using weights designed to adjust for differential probabilities of selection and nonresponse. The sample weights for institutions were calculated as the inverse of the probability of selection, based on the number of institutions in each size substratum. Sample weights were adjusted to account for nonresponse by multiplying the sample weights by the reciprocal of the response rate. Sample weights for faculty in NSOPF:88 summed to the total number of faculty in the IPEDS universe of institutions, as projected from the faculty lists provided by participating institutions, and accounted for two levels of nonresponse: one for nonparticipating institutions and one for nonresponding faculty. Sample weights for department chairpersons in NSOPF:88 summed to the estimated total number of department chairpersons in the IPEDS universe of institutions and accounted for nonresponse of nonparticipating institutions and nonresponding department chairpersons.
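The NSOPF:88 weighting arithmetic described above (inverse of the selection probability, then multiplication by the reciprocal of the response rate) reduces to a short calculation. The sampling rate and response rate below are hypothetical values for a single size substratum, chosen only to make the sketch concrete.

```python
def base_weight(p_select):
    """Design weight: the inverse of the probability of selection."""
    return 1.0 / p_select

def nonresponse_adjusted(weight, response_rate):
    """Multiply by the reciprocal of the response rate, as in NSOPF:88."""
    return weight / response_rate

# hypothetical size substratum: 1-in-20 sampling rate, 94% response
w = nonresponse_adjusted(base_weight(1 / 20), 0.94)
```

Each respondent in this substratum would then represent roughly 21 institutions: 20 from the design weight, inflated slightly to account for the 6 percent of sampled institutions that did not respond.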


Imputation. Data imputation for the NSOPF:04 Faculty Questionnaire was performed in four steps:

  • Logical imputation. The logical imputation was conducted during the data cleaning steps (as explained under “Editing and Coding” above).
  • Cold deck. Missing responses were filled in with data from the sample frame or institution record data whenever the relevant data were available.
  • Sequential hot deck. Nonmissing values were selected from “sequential nearest neighbors” within the imputation class. All questions that were categorical and had more than 16 categories were imputed with this method.
  • Consistency checks. After all variables were imputed, consistency checks were applied to the entire faculty data file to ensure that the imputed values did not conflict with other questionnaire items, observed or imputed. This process involved reviewing all of the logical imputation and editing rules as well.
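A minimal sketch of the sequential hot-deck step follows, assuming records have already been sorted within an imputation class and -9 marks item nonresponse. This is a simplified stand-in for the “sequential nearest neighbor” procedure, not the production imputation code.

```python
def sequential_hot_deck(values, missing=-9):
    """Fill each missing value from the most recent nonmissing 'donor'
    seen in sort order -- a simplified 'sequential nearest neighbor'."""
    out, donor = [], None
    for v in values:
        if v == missing and donor is not None:
            out.append(donor)        # impute from the preceding donor
        else:
            if v != missing:
                donor = v            # this case becomes the new donor
            out.append(v)
    return out

# hypothetical responses, sorted within one imputation class
imputed = sequential_hot_deck([3, -9, 5, -9, -9, 2])
```

Because donors are taken in sort order, sorting the class on covariates related to the item (as the stratified design does implicitly) makes each donor a near neighbor of the case it fills.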

Data imputation for the Institution Questionnaire used three methods: within-class mean, within-class random frequency, and hot deck. The imputation method for each variable is specified in the labels for the imputation flags in the institution dataset. Logical imputation was also performed in the cleaning steps described previously in the “Editing and Coding” section.

Imputation for the NSOPF:99 Faculty Questionnaire was performed in four steps:

  • Logical imputation. The logical imputation was conducted during the data cleaning steps (as explained under “Editing and Coding” above).

  • Cold deck. Missing responses were filled in with data from the sample frame whenever the relevant data were available.

  • Sequential hot deck. Nonmissing values were selected from “sequential nearest neighbors” within the imputation class. All questions that were categorical and had more than 16 categories were imputed with this method.
  • Regression type. This procedure employed SAS PROC IMPUTE. All items that were still missing after the logical, cold-deck, and hot-deck imputation procedures were imputed with this method. Project staff selected the independent variables by first looking through the questionnaire for logically related items and then by conducting a correlation analysis of the questions against each other to find the top correlates for each item.

Data imputation for the NSOPF:99 Institution Questionnaire used three methods. Logical imputation was also performed in the cleaning steps described under “Editing and Coding.”

  • Within-class mean. The missing value was replaced with the mean of all nonmissing cases within the imputation class. Continuous variables with less than 5 percent missing data were imputed with this method.
  • Within-class random frequency. The missing value was replaced by a random draw from the possible responses based on the observed frequency of nonmissing responses within the imputation class. All categorical questions were imputed with this method, since all categorical items had less than 5 percent missing data.
  • Hot deck. As with the faculty imputation, this method selected nonmissing values from the “sequential nearest neighbor” within the imputation class. Any questions that were continuous variables and had more than 5 percent missing cases were imputed with this method.
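The within-class mean and random-frequency methods can be sketched as follows. The salary and rank values are hypothetical, -9 again stands in for missing data, and each list represents the cases in a single imputation class.

```python
import random
from statistics import mean

def within_class_mean(values, missing=-9):
    """Continuous items: replace missing values with the mean of the
    nonmissing cases in the imputation class."""
    m = mean(v for v in values if v != missing)
    return [m if v == missing else v for v in values]

def within_class_random_frequency(values, missing=-9, rng=random):
    """Categorical items: replace missing values with a random draw that
    mirrors the observed distribution of nonmissing responses."""
    donors = [v for v in values if v != missing]
    return [rng.choice(donors) if v == missing else v for v in values]

# hypothetical imputation class; -9 marks missing data
salaries = within_class_mean([50, 60, -9, 70])
ranks = within_class_random_frequency([1, 2, 2, -9, 1])
```

Drawing donors with replacement from the observed responses is what makes the random-frequency method mirror the class distribution: a category held by half the nonmissing cases fills roughly half the missing ones.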

For a small number of items, special procedures were used. See the 1999 National Study of Postsecondary Faculty (NSOPF:99) Methodology Report (Abraham et al. 2002).

In NSOPF:93, two imputation methods were used for the Faculty Questionnaire: PROC IMPUTE and the “sequential nearest neighbor” hot-deck method. PROC IMPUTE alone was used for the NSOPF:93 Institution Questionnaire. All imputation was followed by a final series of cleaning passes that resulted in generally clean and logically consistent data. Some residual inconsistencies between different data elements remained in situations where it was impossible to resolve the ambiguity as reported by the respondent.

Although NSOPF:88 consisted of three questionnaires, imputations were only performed for faculty item nonresponse. The within-cell random imputation method was used to fill in most Faculty Questionnaire items that had missing data.  


Recent Changes

NSOPF:04 was, in one respect, unlike any previous cycle of NSOPF, as it was conducted in tandem with another major study, NPSAS:04, under one overarching contract: NSoFaS:04. NCES recognized that, historically, there had been considerable overlap in the institutions selected for participation in NSOPF:04 and NPSAS:04. By combining the two independent studies under one contract, NCES sought to minimize the response burden on institutions and to realize data collection efficiencies. Nevertheless, NSOPF:04 and NPSAS:04 retain their separate identities. The purpose of this chapter is to summarize the methodology of NSOPF:04; sampling and data collection procedures for NPSAS:04 are referred to only as they are combined with, or impact, the parallel procedures for NSOPF:04.

The combination of NSOPF:04 and NPSAS:04 into NSoFaS:04 had important implications for the NSOPF:04 institution sample design and institution contacting procedures. Institutions for the NSOPF:04 sample were selected as a subsample of the NPSAS:04 sample. This combination resulted in a somewhat larger sample of institutions for the full-scale study than in previous NSOPF cycles (1,070 eligible institutions in NSOPF:04 compared to 960 in NSOPF:99) and created a need to balance the design requirements of both studies in all institution-related study procedures.

Future Plans

A specific date has not yet been selected for the next administration of NSOPF.