Campus Crime and Security at Postsecondary Education Institutions
NCES 97-402
February 1997

Survey Methodology and Data Reliability

The Postsecondary Education Quick Information System (PEQIS) was established in 1991 by the National Center for Education Statistics, U.S. Department of Education. PEQIS is designed to conduct brief surveys of postsecondary institutions or state higher education agencies on postsecondary education topics of national importance. Surveys are generally limited to two or three pages of questions, with a response burden of about 30 minutes per respondent. Most PEQIS institutional surveys use a previously recruited, nationally representative panel of institutions. The sampling frame for the PEQIS panel recruited in 1992 was constructed from the 1990-91 Integrated Postsecondary Education Data System (IPEDS) Institutional Characteristics file. Institutions eligible for the PEQIS frame for the panel recruited in 1992 included 2-year and 4-year (including graduate-level) institutions (both institutions of higher education and other postsecondary institutions) and less-than-2-year institutions of higher education located in the 50 states, the District of Columbia, and Puerto Rico: a total of 5,317 institutions.

The PEQIS sampling frame for the panel recruited in 1992 was stratified by instructional level (4-year, 2-year, less-than-2-year), control (public, private nonprofit, private for-profit), highest level of offering (doctor's/first professional, master's, bachelor's, less than bachelor's), total enrollment, and status as either an institution of higher education or other postsecondary institution. Within each of the strata, institutions were sorted by region (Northeast, Southeast, Central, West), whether the institution had a relatively high minority enrollment, and whether the institution had research expenditures exceeding $1 million. The sample of 1,665 institutions was allocated to the strata in proportion to the aggregate square root of full-time equivalent enrollment. Institutions within a stratum were sampled with equal probabilities of selection. During panel recruitment, 50 institutions were found to be ineligible for PEQIS, primarily because they had closed or offered just correspondence courses. The final unweighted response rate at the end of PEQIS panel recruitment in spring 1992 was 98 percent (1,576 of the 1,615 eligible institutions). The weighted response rate for panel recruitment was 96 percent.
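The square-root allocation described above can be sketched as follows. This is an illustrative sketch only, not the actual NCES procedure: the function and data names are hypothetical, and since the report does not specify how fractional allocations were rounded, simple rounding is used.

```python
import math

# Hypothetical sketch: allocate a fixed sample across strata in proportion
# to the aggregate square root of full-time-equivalent (FTE) enrollment.
def allocate(sample_size, stratum_fte):
    """stratum_fte: one list of per-institution FTE enrollments per stratum."""
    # Aggregate square root of FTE enrollment within each stratum.
    weights = [sum(math.sqrt(fte) for fte in stratum) for stratum in stratum_fte]
    total = sum(weights)
    # Simple rounding; allocations may not sum exactly to sample_size.
    return [round(sample_size * w / total) for w in weights]
```

For example, a stratum of ten institutions with 400 FTE students each receives twice the allocation of a stratum of ten institutions with 100 FTE students each, since its aggregate square root is twice as large.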

Each institution in the PEQIS panel was asked to identify a campus representative to serve as survey coordinator. The campus representative facilitates data collection by identifying the appropriate respondent for each survey and forwarding the questionnaire to that person.

Sample and Response Rates

The sample for this survey consisted of 1,017 2-year and 4-year (including graduate-level) postsecondary institutions in the PEQIS panel (two-thirds of the panel institutions at these levels), plus a supplementary sample of 505 less-than-2-year postsecondary institutions, for a total sample of 1,522 institutions. In April 1996, questionnaires (see appendix C) were mailed to the PEQIS coordinators at the panel institutions and to the chief executive officer (CEO) at the institutions in the supplementary sample. Coordinators and CEOs were told that the survey was designed to be completed by the person at the institution most knowledgeable about the institution's security procedures and crime statistics.

Of the 1,522 institutions in the total sample, 219 were found to be out of the scope of the survey. Of these institutions, 140 were ineligible because they indicated on the survey form that they did not participate in federal Title IV financial aid programs, and 79 were ineligible because they were closed or were not postsecondary institutions. This left 1,303 eligible institutions. These 1,303 institutions represent the universe of approximately 6,310 postsecondary education institutions in the 50 states, the District of Columbia, and Puerto Rico that participate in federal Title IV financial aid programs. Telephone follow-up of nonrespondents was initiated in May 1996; data collection and clarification were completed in July 1996. For the eligible institutions that received surveys, an unweighted response rate of 93 percent (1,218 responding institutions divided by the 1,303 eligible institutions in the sample) was obtained. The weighted response rate for this survey was 94 percent. The unweighted overall response rate was 91 percent (97.6 percent panel recruitment participation rate multiplied by the 93.5 percent survey response rate). The weighted overall response rate was 90 percent (96.1 percent weighted panel recruitment participation rate multiplied by the 93.8 percent weighted survey response rate).
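The overall response-rate arithmetic above can be reproduced directly from the counts reported in the text:

```python
# Overall unweighted response rate: panel recruitment rate times survey
# response rate, using the counts reported in the text
# (1,576 of 1,615 institutions recruited; 1,218 of 1,303 responding).
panel_rate = 1576 / 1615      # about 97.6 percent
survey_rate = 1218 / 1303     # about 93.5 percent
overall_rate = panel_rate * survey_rate
print(round(100 * overall_rate))  # prints 91
```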

Weighted item nonresponse rates ranged from 0 percent to 4.4 percent. Item nonresponse rates for most items were less than 1 percent. The item nonresponse for the crime statistics was about 2 percent for 1993 and 1994, and about 4 percent for 1992. Because the item nonresponse rates were so low, imputation for item nonresponse was not implemented.

Sampling and Nonsampling Errors

The response data were weighted to produce national estimates (see Table 22). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability.

The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in data collection. These errors can sometimes bias the data. Nonsampling errors may include such problems as misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.

To minimize the potential for nonsampling errors, the questionnaire was pretested with respondents at institutions like those that completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics; the Office of Postsecondary Education; and the National Institute on Postsecondary Education, Libraries, and Lifelong Learning, U.S. Department of Education. Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Data were keyed with 100 percent verification.

Variances

The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of institutions reporting that the institution uses the FBI Uniform Crime Reporting definitions is 39.7 percent, and the estimated standard error is 1.7 percent. The 95 percent confidence interval for the statistic extends from [39.7 - (1.7 times 1.96)] to [39.7 + (1.7 times 1.96)], or from 36.4 to 43.0 percent. Tables of standard errors for each table and figure in the report are provided in appendix B.
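The interval arithmetic in the example above is a simple function of the estimate and its standard error; a minimal sketch (the function name is ours, not from the report):

```python
# 95 percent confidence interval: point estimate plus or minus 1.96 standard
# errors. Values below are from the example in the text (39.7 percent
# estimate, 1.7 percent standard error).
def confidence_interval_95(estimate, standard_error):
    margin = 1.96 * standard_error
    return (round(estimate - margin, 1), round(estimate + margin, 1))

low, high = confidence_interval_95(39.7, 1.7)
# -> (36.4, 43.0), matching the interval reported in the text
```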

Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variances of the statistics.14 To construct the replications, 51 stratified subsamples of the full sample were created and then dropped one at a time to define 51 jackknife replicates.15 A computer program (WesVarPC), distributed free of charge by Westat, Inc., through the Internet, was used to calculate the estimates of standard errors. WesVarPC is a stand-alone Windows application that computes sampling errors for a wide variety of statistics (totals, percents, ratios, log-odds ratios, general functions of estimates in tables, linear regression parameters, and logistic regression parameters).
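The jackknife procedure described above can be illustrated in miniature. This is a generic sketch of grouped (delete-one) jackknife variance estimation, not the WesVarPC implementation; the (k - 1)/k scaling shown is the standard factor for k replicates and is assumed here rather than taken from the report.

```python
def jackknife_variance(groups, statistic):
    """Grouped (delete-one) jackknife variance estimate for `statistic`.

    groups: list of subsamples (lists of observations);
    statistic: function mapping a flat list of observations to a number.
    """
    full = [x for g in groups for x in g]
    theta_full = statistic(full)
    k = len(groups)
    replicates = []
    for i in range(k):
        # Drop group i and recompute the statistic on the remaining data.
        remaining = [x for j, g in enumerate(groups) if j != i for x in g]
        replicates.append(statistic(remaining))
    # (k - 1)/k times the sum of squared deviations of the replicate
    # estimates from the full-sample estimate.
    return (k - 1) / k * sum((r - theta_full) ** 2 for r in replicates)

mean = lambda data: sum(data) / len(data)
var_of_mean = jackknife_variance([[1, 2], [3, 4], [5, 6]], mean)
```

The survey's 51 stratified subsamples play the role of `groups` here, with each replicate formed by dropping one subsample at a time.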

The test statistics used in the analysis were calculated using the jackknife variances and thus appropriately reflected the complex nature of the sample design. In particular, an adjusted chi-square test using Satterthwaite's approximation to the design effect was used in the analysis of the two-way tables.16 Finally, Bonferroni adjustments were made to control for multiple comparisons where appropriate. For example, for an "experiment-wise" comparison involving g pairwise comparisons, each difference was tested at the 0.05/g significance level to control for the fact that g differences were simultaneously tested.
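The Bonferroni adjustment described above simply divides the nominal significance level by the number of simultaneous comparisons; a minimal sketch (the count of 15 comparisons is a hypothetical example for six categories, not a figure from the report):

```python
# Bonferroni adjustment: with g simultaneous pairwise comparisons, test each
# difference at alpha/g to hold the experiment-wise error rate at alpha.
def bonferroni_level(alpha, g):
    return alpha / g

# Hypothetical example: six institution types yield g = 6 * 5 // 2 = 15 pairs,
# so each pairwise difference is tested at 0.05 / 15.
g = 6 * 5 // 2
per_test_level = bonferroni_level(0.05, g)
```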

Background Information

The survey was performed under contract with Westat, Inc., using the Postsecondary Education Quick Information System (PEQIS). This is the seventh PEQIS survey to be conducted. Westat's Project Director was Elizabeth Farris, and the Survey Manager was Laurie Lewis. Bernie Greene was the NCES Project Officer. The data were requested by the Office of Postsecondary Education and the National Institute on Postsecondary Education, Libraries, and Lifelong Learning, U.S. Department of Education.

This report was reviewed by the following individuals:

Outside NCES

  • Gregory Henschel, National Institute on Postsecondary Education, Libraries, and Lifelong Learning, U.S. Department of Education
  • Charles Masten, Office of Postsecondary Education, U.S. Department of Education
  • Dorothy Siegel, Executive Director, Campus Violence Prevention Center
  • Douglas Tuttle, Director of Public Safety at the University of Delaware and Immediate Past President of the International Association of Campus Law Enforcement Administrators (IACLEA)

Inside NCES

  • Nabeel Alsalam, Data Development and Longitudinal Studies Group
  • Michael Cohen, Statistical Standards and Services Group
  • Mary Frase, Data Development and Longitudinal Studies Group
  • William Freund, Surveys and Cooperative Systems Group
  • Roslyn Korb, Surveys and Cooperative Systems Group
  • Edith McArthur, Data Development and Longitudinal Studies Group

For more information about the Postsecondary Education Quick Information System (PEQIS) or the PEQIS Survey on Campus Crime and Security at Postsecondary Education Institutions, contact Bernie Greene, Data Development and Longitudinal Studies Group, National Center for Education Statistics, Office of Educational Research and Improvement, 555 New Jersey Avenue, NW, Washington, DC 20208-5651, telephone (202) 219-1366. Institutions that have questions about the Campus Security Act can call the Department of Education's Customer Support Branch at 1-800-433-7327. Additional information about the Act is also available on the World Wide Web at http://www.ed.gov/offices, where the Act and the implementing regulations can be found.

Definitions of Analysis Variables

  • Type of institution: for-profit less-than-2-year, other less-than-2-year, public 2-year, private 2-year, public 4-year, private 4-year. Type was created from a combination of level (less-than-2-year, 2-year, 4-year) and control (public, private nonprofit, private for-profit). Less-than-2-year institutions are defined as institutions at which the highest level of offering is of less than 2 years duration; 2-year institutions are those at which the highest level of offering is at least 2 but less than 4 years (below the baccalaureate degree); 4-year institutions are those at which the highest level of offering is 4 or more years (baccalaureate or higher degree).17 For 2-year and 4-year institutions, private comprises private nonprofit and private for-profit institutions; these private institutions are reported together because there are too few 2-year and 4-year private for-profit institutions in the sample for this survey to report them as separate categories. For less-than-2-year institutions, "other" comprises public and private nonprofit institutions; these institutions are reported together because there are too few institutions in the sample in either of these categories to report them separately, and these institutions are very different from the for-profit less-than-2-year institutions.
  • Percent of students in campus housing: no campus housing, less than 25 percent, 25 percent or more. The percent of students in campus housing is based on the percent of all students (full and part time, undergraduate and graduate) at the institution in campus housing (including dormitories, on-campus fraternities and sororities, and institution-provided apartments) as reported on this PEQIS questionnaire.
  • Metropolitan status: large city, mid-size city, urban fringe, town or rural. Metropolitan status is based on the locale codes assigned to institutions by the U.S. Bureau of the Census. Large city is defined as the central city of a metropolitan statistical area (MSA) with a population greater than or equal to 400,000 or a population density greater than or equal to 6,000 persons per square mile. Mid-size city is defined as the central city of an MSA but not designated "large central city." Urban fringe is defined as a place within an MSA of a large or mid-size central city and defined as urban by the U.S. Bureau of the Census. Urban fringe for this PEQIS survey comprises institutions in the urban fringe of large cities and mid-size cities. Town is defined as a place not within an MSA, but with a population greater than or equal to 2,500 and defined as urban by the U.S. Bureau of the Census. Rural is defined as a place with a population less than 2,500 and defined as rural by the U.S. Bureau of the Census. The category of town or rural for this PEQIS survey comprises institutions in large towns, small towns, and rural areas. Institutions are reported in these collapsed categories because there are too few institutions in the sample in some of the individual categories to report them separately. Analyses by metropolitan status exclude institutions in Puerto Rico, since the U.S. Bureau of the Census does not assign locale codes for Puerto Rico.
  • Institutional size (enrollment): less than 200 students, 200 to 999 students, 1,000 to 2,999 students, 3,000 to 9,999 students, 10,000 or more students. Institutional enrollment size is based on the total enrollment of the institution (undergraduate and graduate, full and part time) in fall 1994.

Table 23 shows how the analysis variables of institutional type and size, and the percent of students in campus housing are related to each other. For example, most for-profit less-than-2-year institutions do not have campus housing and have less than 200 students; most public 4-year institutions have campus housing and have 3,000 or more students. Because of these relationships, differences on survey items tend to covary by these analysis variables.


14 K. Wolter. Introduction to Variance Estimation, Springer-Verlag, 1985.

15 Ibid., 183.

16 For example, see J.N.K. Rao and A. Scott. "On Chi-square Tests for Multi-way Contingency Tables with Cell Proportions Estimated from Survey Data," Annals of Statistics 12 (1984): 46-60.

17 Definitions for level are from the data file documentation for the Integrated Postsecondary Education Data System (IPEDS) Institutional Characteristics file, U.S. Department of Education, National Center for Education Statistics.
