National Study of Postsecondary Faculty (NSOPF)



5. Data Quality and Comparability

NSOPF:04 included procedures for both minimizing and measuring nonsampling errors. A field test was performed before NSOPF:04, and quality control activities continued during interviewer training, data collection, and data processing.

Sampling Error

Standard errors for all NSOPF data can be computed using a technique known as Taylor Series approximation. Analysts calculating variances with the Taylor Series approximation method should use a "with replacement" variance formula. Specialized computer programs, such as SUDAAN, calculate variances with the Taylor Series approximation method. The NCES Data Analysis System (DAS) available on CD-ROM calculates variances using the Taylor Series method, while the DAS available online uses the balanced repeated replication method.
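
To make the "with replacement" formula concrete, the sketch below computes a Taylor Series (linearization) variance for a weighted mean from a stratified, clustered sample. It is a minimal illustration, not NCES or SUDAAN code, and the record fields (stratum, psu, weight, y) are hypothetical placeholders for whatever design variables a given NSOPF file carries.

```python
from collections import defaultdict

def taylor_variance_of_mean(records):
    """records: iterable of dicts with keys 'stratum', 'psu', 'weight', 'y'."""
    total_w = sum(r["weight"] for r in records)
    mean = sum(r["weight"] * r["y"] for r in records) / total_w

    # Linearized value for each unit, accumulated into (stratum, PSU) totals.
    psu_totals = defaultdict(float)
    for r in records:
        z = r["weight"] * (r["y"] - mean) / total_w
        psu_totals[(r["stratum"], r["psu"])] += z

    # Group PSU totals by stratum.
    strata = defaultdict(list)
    for (h, _), z in psu_totals.items():
        strata[h].append(z)

    # "With replacement" formula: sum over strata of n_h/(n_h - 1) times the
    # sum of squared deviations of PSU totals from their stratum mean.
    var = 0.0
    for z_list in strata.values():
        n_h = len(z_list)
        if n_h < 2:
            continue  # single-PSU strata need special handling (e.g., collapsing)
        zbar = sum(z_list) / n_h
        var += n_h / (n_h - 1) * sum((z - zbar) ** 2 for z in z_list)
    return mean, var
```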

Replicate weights are provided in the NSOPF data files (64 replicate weights in NSOPF:99 and NSOPF:04; 32 in NSOPF:93). These weights implement the balanced half-sample (BHS) method of variance estimation. They were created to handle the certainty strata and to incorporate finite population correction factors for each of the noncertainty strata. Two widely available software packages, WesVar and PC CARP, can use replicate weights to estimate variances.
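
The replicate-weight calculation itself is simple: re-estimate the statistic once per replicate weight, then average the squared deviations from the full-sample estimate. The sketch below shows the basic balanced half-sample formula (without a Fay adjustment) for a weighted mean; the 64-replicate assumption follows the text, but the function and array names are illustrative rather than WesVar or PC CARP conventions.

```python
import numpy as np

def brr_variance(y, full_weight, replicate_weights):
    """Balanced half-sample (BRR) variance for a weighted mean.
    y: (n,) responses; full_weight: (n,) final analysis weights;
    replicate_weights: (n, R) matrix, e.g., R = 64 for NSOPF:99 and NSOPF:04."""
    theta = np.average(y, weights=full_weight)      # full-sample estimate
    R = replicate_weights.shape[1]
    # One re-estimate per replicate, each using that replicate's weights.
    thetas = np.array([np.average(y, weights=replicate_weights[:, r])
                       for r in range(R)])
    variance = np.mean((thetas - theta) ** 2)       # (1/R) * sum of squared deviations
    return theta, variance, np.sqrt(variance)       # estimate, variance, standard error
```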

Analysts should be cautious about using BHS-estimated variances that relate to one stratum or to a group of two or three strata. Such variance estimates may rest on far fewer replicates than the full set; thus, the variance of the variance estimator may be large. Analysts who use either the restricted-use faculty file or the institution file should also be cautious about cross-classifying data so deeply that the resulting estimates are based on a very small number of observations. Analysts should interpret the accuracy of NSOPF statistics in light of the estimated standard errors and the small sample sizes.

Nonsampling Error

To minimize the potential for nonsampling errors, the NSOPF:04 Institution and Faculty questionnaires (as well as the sample design, data collection, and data processing procedures) were field-tested with a national probability sample of 150 postsecondary institutions (though only 80 of these were used for the full second-stage sampling of faculty and instructional staff) and 1,200 faculty members. A major focus of the field test was the effect of combining NSOPF and NPSAS. The field test also included an incentive experiment, which tested the use of incentives for increasing early responses and for obtaining interviews from nonrespondents. Other aspects of data quality were also examined.

The NSOPF:99 Institution and Faculty questionnaires (as well as the sample design, data collection, and data processing procedures) were field-tested with a national probability sample of 160 postsecondary institutions and 510 faculty members. Four methodological experiments were conducted as part of the field test to increase unit response rates, speed the return of mail questionnaires, improve data quality, and improve the overall efficiency of the data collection process. The experiments involved the use of prenotification, prioritized mail, a streamlined instrument, and the timing of CATI attempts. Another focus of the field test was the effort to reduce discrepancies between the faculty counts derived from the list of faculty provided by each institution and those reported in the Institution Questionnaire. Changes introduced to reduce these discrepancies included clearer definitions of faculty eligibility (consistent across forms and questionnaires) and simultaneous collection of list and Institution Questionnaire data (to increase the probability that both forms would be completed by the same individual and thus show fewer inconsistencies).

During the NSOPF:93 field test, a subsample of faculty respondents was reinterviewed to evaluate reliability. In addition, an extensive item nonresponse analysis of the field-tested questionnaires was conducted, followed by additional evaluation of the NSOPF:93 instruments and survey procedures. An item nonresponse analysis was also conducted for the full-scale data collection. Later, in 1996, NCES analyzed discrepancies in the NSOPF:93 faculty counts, conducting a retrieval, verification, and reconciliation effort to resolve problems.

Coverage Error. Because the IPEDS universe is the institutional frame for NSOPF, coverage of institutions is complete. However, there are concerns about the coverage of faculty and instructional staff. In NSOPF:04, prior to sampling, faculty counts from all lists provided by participating institutions were checked against both IPEDS and the counts that institutions provided in their Institution Questionnaire. (In NSOPF:99, the IPEDS comparison was used as a quality control check only when Institution Questionnaire counts were absent.) In NSOPF:04, as in NSOPF:99, institutions were contacted to resolve any discrepancies between data sources.

In NSOPF:99, in an effort to decrease the discrepancies in faculty counts observed in NSOPF:93, institutional coordinators (ICs) were asked to provide counts of full- and part-time faculty and instructional staff at their institutions as of November 1, 1998 (the same reference date used for the 1997-98 IPEDS Fall Staff Survey); to return the faculty list and the Institution Questionnaire at the same time; and, after explicit warnings about potential undercounts of faculty, to ensure that the counts provided in the list and the questionnaire were consistent. These efforts appear to have worked: 73 percent of institutions in NSOPF:99 provided questionnaire and list data with discrepancies of less than 10 percent, an improvement of 31 percentage points over NSOPF:93.

In NSOPF:93, a discrepancy between the faculty counts reported in the Institution Questionnaires and those provided in faculty lists by institutions at the beginning of the sampling process necessitated the “best estimates” correction to the NSOPF:93 faculty population estimates, as described earlier (in “Weighting,” section 4).

Nonresponse Error. Unit Nonresponse. Unit response rates have been similar over NSOPF administrations, though they decreased slightly in NSOPF:04 (see table NSOPF-1). Note that the overall faculty response rates are the percentage of faculty responding in institutions that provided faculty lists for sampling.
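
Consistent with that note, the overall faculty rate in table NSOPF-1 is approximately the product of the list participation rate and the conditional questionnaire response rate; because the rates are weighted and rounded, the product does not always match exactly. A one-line illustration with the NSOPF:04 faculty figures:

```python
# NSOPF:04 faculty rates from table NSOPF-1 (weighted, so the product is approximate).
list_rate, questionnaire_rate = 0.91, 0.76
overall = list_rate * questionnaire_rate
print(f"{overall:.2f}")  # 0.69, matching the table
```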

Item Nonresponse. For the NSOPF:04 Institution Questionnaire, 2 of the 90 items had more than 15 percent of the data missing. For the Faculty Questionnaire, 34 of the 162 items had more than 15 percent of the data missing. For further details on item nonresponse, see the 2004 National Study of Postsecondary Faculty (NSOPF:04) Methodology Report (Heuer et al. 2005).

For the NSOPF:99 Institution Questionnaire, the mean item nonresponse rate was 3.4 percent (weighted). Overall, the item nonresponse rate for the Faculty Questionnaire was 6.2 percent. More than half of the items in the Faculty Questionnaire (55 percent) had an item nonresponse rate of less than 5 percent, 25 percent had rates between 5 and 10 percent, and 20 percent had rates greater than 10 percent. For further details on item nonresponse, see the 1999 National Study of Postsecondary Faculty (NSOPF:99) Methodology Report (Abraham et al. 2002).
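
A distribution of item nonresponse rates like the one reported above can be reproduced from a missing-data matrix with a few lines of code. This is a minimal sketch assuming a boolean matrix and analysis weights; the bucket boundaries mirror the text, and nothing here corresponds to an actual NSOPF file layout.

```python
import numpy as np

def item_nonresponse_profile(missing, weights):
    """missing: (n_respondents, n_items) boolean array, True where an item
    is unanswered; weights: (n_respondents,) analysis weights.
    Returns per-item weighted nonresponse rates and the share of items
    falling into the buckets used in the text."""
    rates = (weights[:, None] * missing).sum(axis=0) / weights.sum()
    buckets = {
        "less than 5%":     np.mean(rates < 0.05),
        "5 to 10%":         np.mean((rates >= 0.05) & (rates <= 0.10)),
        "greater than 10%": np.mean(rates > 0.10),
    }
    return rates, buckets
```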

For the NSOPF:93 Institution Questionnaire, the mean item nonresponse rate was 10.1 percent, with the level of nonresponse increasing in the latter parts of the questionnaire. For the Faculty Questionnaire, the mean item nonresponse rate was 10.3 percent.

Measurement Error. In NSOPF:04, as in prior administrations of this study, secured faculty lists were evaluated for accuracy and completeness of information before being processed for sampling. To facilitate quality control, faculty list counts were compared against counts obtained from the following supplementary sources:

  • the Institution Questionnaire (or the file layout form, if a questionnaire was not completed but an overall faculty count was supplied); 
  • the 2001 IPEDS Fall Staff Survey; 
  • the Contact Information and File Layout (CIFL) form (which included faculty counts and was used when questionnaire data were unavailable); and 
  • NSOPF:99 frame data.

Discrepancies in counts of full- and part-time faculty between the faculty list and other sources that fell outside the expected range were investigated, and all institutions whose faculty lists failed any checks were recontacted to resolve the observed discrepancies. Because of timing and definitional differences between NSOPF and IPEDS, some discrepancy between the faculty counts obtained from institutions and those in IPEDS was expected; consequently, quality control checks against IPEDS were less stringent than those against the Institution Questionnaire. Even so, list count comparisons against IPEDS and NSOPF:99 data were useful in identifying systematic errors, particularly those related to miscoding of the employment status of faculty members.
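
A simplified version of such a consistency check appears below. The 10 percent tolerance mirrors the threshold discussed elsewhere in this section; the actual NSOPF edit rules (separate full- and part-time checks, looser IPEDS tolerances) were more elaborate than this sketch, and the function name is illustrative only.

```python
def needs_followup(list_count, questionnaire_count, tolerance=0.10):
    """Flag an institution for recontact when its faculty-list count differs
    from its Institution Questionnaire count by more than the tolerance."""
    if questionnaire_count == 0:
        return True  # no usable questionnaire count: refer for follow-up
    relative_diff = abs(list_count - questionnaire_count) / questionnaire_count
    return relative_diff > tolerance

# Example: a 48-person gap on a base of 460 is a 10.4 percent discrepancy.
print(needs_followup(412, 460))  # True
```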

Results of the data quality evaluations showed that 82 percent of faculty list counts were within 10 percent of the corresponding Institution Questionnaire counts. Discrepancies between list counts and IPEDS counts were larger, as IPEDS uses a narrower definition of faculty; these discrepancies followed the expected pattern, with list counts exceeding IPEDS counts. For more information, see the 2004 National Study of Postsecondary Faculty (NSOPF:04) Methodology Report (Heuer et al., 2005).

For NSOPF:99, NCES conducted an intensive follow-up with 230 institutions (29 percent of those participating) whose reports exhibited a discrepancy of 5 percent or more between the list and questionnaire counts overall or between the two part-time counts. NSOPF has experienced discrepancies in faculty counts among IPEDS, the Institution Questionnaire, and faculty lists across all cycles of the study. Even though identical information is requested in the questionnaire and in the list (e.g., in NSOPF:99, a count of all full- and part-time faculty and instructional staff as of November 1, 1998), institutions have continued to provide discrepant faculty data. In NSOPF:99, as in NSOPF:93, large discrepancies tended to be concentrated among smaller institutions and 2-year institutions. Undercounting of part-time faculty and instructional staff without faculty status in the list remained the primary reason for the majority of these discrepancies.

However, procedures implemented in NSOPF:99 improved the consistency of the list and questionnaire counts relative to previous cycles of NSOPF. The percentage of institutions providing list and questionnaire data with less than a 10 percent discrepancy increased from 42 percent in NSOPF:93 to 73 percent in NSOPF:99, and 43 percent provided identical data in the list and questionnaire in NSOPF:99 (compared with only 2.4 percent in NSOPF:93). Moreover, institutions providing identical list and questionnaire data were shown to have provided more accurate and complete data in both. These findings suggest that the procedural changes introduced in the 1998 field test and NSOPF:99 produced more accurate counts of faculty and instructional staff. Institutions may also be in a better position to respond to these requests for data: their accumulated experience in handling NSOPF, IPEDS, and other survey requests; their adoption of better reporting systems; more flexible computing systems and staff; and a general willingness to provide the information were probably also factors in their ability to provide more consistent faculty counts, although data to support these assertions are not available. For more detail, see the 1999 National Study of Postsecondary Faculty (NSOPF:99) Methodology Report (Abraham et al., 2002).

NCES conducted three studies to examine possible measurement errors in NSOPF:93: (1) a reinterview study of selected Faculty Questionnaire items, conducted after the field test; (2) a discrepancy and trends analysis of faculty counts in the full-scale data collection; and (3) a retrieval, verification, and reconciliation effort involving recontact of institutions. For detail on these studies, see Measurement Error Studies at the National Center for Education Statistics (Salvucci et al., 1997) and the 1993 National Study of Postsecondary Faculty Methodology Report (Selfa et al., 1997).

Reinterview Study. A reliability reinterview study was conducted after the NSOPF:93 field test to identify Faculty Questionnaire items that yielded low-quality data and the item characteristics that caused problems, thus providing a basis for revising the questionnaire items before the full-scale data collection. The analysis of the reinterview items was presented by item type (categorical or continuous variables) rather than by subject area. The level of consistency between the field-test responses and the reinterview responses was relatively high: 70 percent consistency for most of the categorical variables and a correlation of 0.7 for most of the continuous variables. A detailed analysis of the question on employment sector of last main job was conducted because it showed the highest percentage of inconsistent responses (28 percent) and the highest inconsistency index (36.0); the large number of response categories and the involvement of some faculty in more than one job sector were judged plausible reasons for the high inconsistency rate. The items with the lowest correlations were those asking for retrospective reporting of numbers that were small fractions of dollars or hours and those asking for summary statistics on activities likely to fluctuate over time, the types of questions shown to be unreliable in past studies.
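
The two consistency measures reported above are straightforward to compute from matched original and reinterview responses, as the sketch below illustrates. The inconsistency index additionally scales observed disagreement by the disagreement expected by chance and is omitted here; the function names are illustrative only.

```python
import numpy as np

def percent_agreement(original, reinterview):
    """Share of identical answers across the two interviews (categorical items)."""
    return np.mean(np.asarray(original) == np.asarray(reinterview))

def reinterview_correlation(original, reinterview):
    """Pearson correlation between the two reports (continuous items)."""
    return np.corrcoef(original, reinterview)[0, 1]
```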

Discrepancy and Trends Analysis of Faculty Counts. This analysis compared discrepancies across different types of institutions to identify systematic sources of disagreement between the list counts provided by institutions and the counts they reported in the Institution Questionnaire. The investigation found that list estimates tended to exceed questionnaire estimates in large institutions, in institutions with medical components, and in private institutions; questionnaire estimates tended to be higher in smaller institutions, in institutions without medical components, and in public institutions. Institutions supplied much higher questionnaire estimates than list estimates for part-time faculty. The magnitude of questionnaire/list discrepancies differed little between faculty lists submitted early in the list collection process and those submitted later.

Retrieval, Verification, and Reconciliation. This effort involved recontacting 509 institutions: 450 institutions (more than half of all institutions) whose questionnaire estimate of total faculty differed from their list estimate by 10 percent or more, plus an additional 59 institutions that NCES designated as operating medical schools or hospitals. All institutions employing health sciences faculty and participating in NSOPF:93 were thus selected for recontact.

NCES accepted the reconciled estimates obtained in this study as the true number of faculty. More than half (57 percent) of the recontacted institutions identified the questionnaire estimate as the most accurate response, while 25 percent identified the list estimate as the most accurate. Another 11 percent of the institutions provided a new estimate; 1 percent indicated that their IPEDS estimate was the most accurate response; and 6 percent could not verify any of the estimates and thus accepted the original list estimate.

The majority of discrepancies in faculty counts resulted from the exclusion of some full- or part-time faculty from the list or questionnaire. Another factor was the time interval between the date the list was compiled and the date the questionnaire was completed. Downsizing also affected faculty counts at several institutions. Some of the reasons for the discrepancies were unexpected. For example, some institutions provided “full-time equivalents” (FTEs) on the Institution Questionnaire instead of an actual headcount of part-time faculty.

Sometimes part-time faculty were overreported—often as a result of confusion over the pool of part-time and temporary staff employed by, or available to, the institution during the course of the academic year versus the number actually employed during the fall semester. Another reason for overreporting part-time faculty was an inability to distinguish honorary/unpaid part-time faculty from paid faculty and teaching staff. This study also confirmed that a small number of institutions, those that considered their medical schools separate from their main campuses, excluded medical school faculty from their lists of faculty.

While these results indicate that there may have been some bias in the NSOPF:93 sample, no measure of the potential bias, such as the net difference rate, was computed. Instead, the reconciliation prompted NCES to apply a poststratification adjustment to the estimates based entirely on the "best" estimates obtained during the retrieval, verification, and reconciliation effort described above. Problems with health science estimates, however, could be only partly rectified by the creation of new "best" estimates. For more information on the calculation of the "best" estimates and further discussion of the health science estimates, see the 1993 National Study of Postsecondary Faculty Methodology Report (Selfa et al. 1997).
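
Mechanically, a poststratification adjustment of this kind scales the weights in each poststratum so that they sum to the reconciled "best" count. The sketch below is a generic illustration under that assumption, not the NSOPF:93 weighting specification; all names are hypothetical.

```python
from collections import defaultdict

def poststratify(weights, strata, best_totals):
    """weights, strata: parallel sequences of analysis weights and poststratum
    labels; best_totals: dict mapping poststratum -> reconciled 'best' count.
    Returns weights rescaled so each poststratum sums to its best count."""
    current = defaultdict(float)
    for w, h in zip(weights, strata):
        current[h] += w
    factors = {h: best_totals[h] / current[h] for h in current}
    return [w * factors[h] for w, h in zip(weights, strata)]
```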

Data Comparability

Design Changes. Each succeeding cycle of NSOPF has expanded the information base about faculty. NSOPF:04 was designed both to facilitate comparisons over time and to examine new faculty-related issues that had emerged since NSOPF:99. The NSOPF:04 sample was designed to allow detailed comparisons and high levels of precision at both the institution and faculty levels. Merging NSOPF with NPSAS for the 2003–04 administration allowed a larger number of institutions to be included in NSOPF while reducing respondent burden. Since NSOPF:93, the operational definition of "faculty" for NSOPF has included instructional faculty, noninstructional faculty, and instructional personnel without faculty status.

NSOPF:04, NSOPF:99, and NSOPF:93 consisted of two questionnaires: an Institution Questionnaire and a Faculty Questionnaire. NSOPF:88 included, in addition, a Department Chairperson Questionnaire.

Definitional Differences. Comparisons among the cycles must be made cautiously because the respondents in each cycle were different. At the institution level, the NSOPF:04 sample consisted of all public and private, not-for-profit Title IV-participating, 2- and 4-year degree-granting institutions in the 50 states and the District of Columbia. The sample was first constituted in this way in NSOPF:99 so that the NSOPF sampling universe would conform with that of IPEDS. In the two previous rounds of the study (NSOPF:93 and NSOPF:88), the sample consisted of public and private, not-for-profit 2- and 4-year (and above) higher education institutions.

The definition of faculty and instructional staff for each NSOPF cycle is given above (see Section 3, "Key Concepts"). At the design level, note that NSOPF:04, NSOPF:99, and NSOPF:93 requested a listing of all faculty (instructional and noninstructional) and instructional staff from institutions for the purpose of sampling. For NSOPF:88, institutions were asked to provide only the names of instructional faculty; although it was not specifically stated, NCES expected that institutions would provide information on instructional staff as well, with the term "faculty" used generically. However, there is no way of knowing how many institutions that had instructional staff as well as instructional faculty provided the names of both. Each institution was allowed to decide which faculty members belonged in the sample, so researchers cannot precisely reconstruct the de facto sample definition that institutions applied in NSOPF:88.

Content Changes. Major goals for NSOPF:04 included making the questionnaires shorter and easier to complete. Other changes were implemented to bring NSOPF up to date with current issues in the field. As a result, 9 items from the NSOPF:99 Institution Questionnaire were eliminated from the NSOPF:04 Institution Questionnaire, 14 items were revised, and 3 items were repeated without change. For the NSOPF:04 Faculty Questionnaire, 39 items from the NSOPF:99 Faculty Questionnaire were eliminated, 51 items were simplified or otherwise revised, 1 item was added, and 3 items were unchanged.

Comparisons with Other Surveys. Comparisons of NSOPF:93 salary estimates with salary estimates from IPEDS and from the American Association of University Professors indicate that NSOPF data are consistent with these other sources. Most differences are relatively small and can be explained by methodological differences between the studies: the NSOPF estimates are based on individuals' self-reports, whereas the other two studies rely on institutional reports of salary means for the entire institution.

However, the reader should be aware of differences in faculty definitions between NSOPF and IPEDS. In IPEDS, individuals have to be categorized according to their primary responsibility (administrator, faculty, or other professional); in NSOPF, it is possible to categorize individuals according to any of their responsibilities.

Because NSOPF includes all faculty and instructional staff, it is possible for an “other professional” to have instructional responsibilities and/or be a faculty member, and it is also possible for an administrator to have instructional responsibilities and/or be a faculty member. Therefore, NSOPF includes all faculty under IPEDS, some of the administrators under IPEDS, and some of the other professionals under IPEDS.

Table NSOPF-1. Summary of weighted response rates for selected NSOPF surveys
Questionnaire      List participation rate    Questionnaire response rate    Overall
NSOPF:93
  Institution                 †                           94                    94
  Faculty                    84                           83                    70
NSOPF:99
  Institution                 †                           93                    93
  Faculty                    88                           83                    74
NSOPF:04
  Institution                 †                           84                    84
  Faculty                    91                           76                    69
† Not applicable.
SOURCE: NSOPF methodology reports; available at https://nces.ed.gov/pubsearch/getpubcats.asp?sid=011.
