The Postsecondary Education Quick Information System (PEQIS) was established in 1991 by the National Center for Education Statistics, U.S. Department of Education. PEQIS is designed to conduct brief surveys of postsecondary institutions or state higher education agencies on postsecondary education topics of national importance. Surveys are generally limited to two to three pages of questions, with a response burden of about 30 minutes per respondent. Most PEQIS institutional surveys use a previously recruited nationally representative panel of institutions. The sampling frame for the PEQIS panel recruited in 1992 was constructed from the 1990-91 Integrated Postsecondary Education Data System (IPEDS) Institutional Characteristics file. Institutions eligible for the PEQIS frame for the panel recruited in 1992 included 2-year and 4-year (including graduate-level) institutions (both institutions of higher education and other postsecondary institutions), and less-than-2-year institutions of higher education located in the 50 states, the District of Columbia, and Puerto Rico: a total of 5,317 institutions.
The PEQIS sampling frame for the panel recruited in 1992 was stratified by instructional level (4-year, 2-year, less-than-2-year), control (public, private nonprofit, private for-profit), highest level of offering (doctor's/first professional, master's, bachelor's, less than bachelor's), total enrollment, and status as either an institution of higher education or other postsecondary institution. Within each of the strata, institutions were sorted by region (Northeast, Southeast, Central, West), whether the institution had a relatively high minority enrollment, and whether the institution had research expenditures exceeding $1 million. The sample of 1,665 institutions was allocated to the strata in proportion to the aggregate square root of full-time-equivalent enrollment. Institutions within a stratum were sampled with equal probabilities of selection. During panel recruitment, 50 institutions were found to be ineligible for PEQIS, primarily because they had closed or offered only correspondence courses. The final unweighted response rate at the end of PEQIS panel recruitment in spring 1992 was 98 percent (1,576 of the 1,615 eligible institutions). The weighted response rate for panel recruitment was 96 percent.
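The allocation rule described above -- sample shares proportional to the aggregate square root of full-time-equivalent (FTE) enrollment within each stratum -- can be sketched as follows. The strata, enrollment figures, and tiny sample size here are hypothetical illustrations, not the actual PEQIS frame.

```python
import math
from collections import defaultdict

# Hypothetical frame: (stratum, FTE enrollment) for each institution.
institutions = [("4yr-public", 12000), ("4yr-public", 3000),
                ("2yr-public", 8000), ("2yr-private", 500),
                ("4yr-private", 2000), ("2yr-public", 1500)]
total_sample = 4  # illustrative; PEQIS allocated 1,665

# Aggregate square root of FTE enrollment within each stratum.
agg_sqrt = defaultdict(float)
for stratum, fte in institutions:
    agg_sqrt[stratum] += math.sqrt(fte)

grand_total = sum(agg_sqrt.values())

# Allocate the sample in proportion to each stratum's aggregate sqrt(FTE).
# Simple rounding can leave the total off by a unit or two; a production
# design uses a controlled rounding procedure.
allocation = {s: round(total_sample * v / grand_total)
              for s, v in agg_sqrt.items()}
```

Within each stratum, institutions would then be drawn with equal probability, as the text describes.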
Each institution in the PEQIS panel was asked to identify a campus representative to serve as survey coordinator. The campus representative facilitates data collection by identifying the appropriate respondent for each survey and forwarding the questionnaire to that person.
The sample for this survey consisted of two-thirds of the 2-year and 4-year (including graduate-level) postsecondary education institutions in the PEQIS panel, for a sample of 1,036 institutions. In early March 1993, questionnaires (see Appendix B) were mailed to the PEQIS coordinators at the institutions. Coordinators were told that the survey was designed to be completed by the person or office at the institution that has the most information about deaf and hard of hearing students. Eleven institutions were found to be out of the scope of the survey (primarily because they were closed), leaving 1,025 eligible institutions. These 1,025 institutions represent the universe of approximately 5,000 2-year and 4-year (including graduate-level) postsecondary education institutions in the United States. Telephone followup of nonrespondents was initiated in late March; data collection was completed in mid-May. For the eligible institutions that received surveys, an unweighted response rate of 96 percent (982 responding institutions divided by the 1,025 eligible institutions in the sample) was obtained. The weighted response rate for this survey was 97 percent. The unweighted overall response rate was 94 percent (98 percent panel recruitment participation rate multiplied by the 96 percent survey response rate). The weighted overall response rate was 94 percent (96.1 percent weighted panel recruitment participation rate multiplied by the 97.4 percent weighted survey response rate).
Weighted item nonresponse rates ranged from 0 percent to 3.9 percent. The items with the highest nonresponse rates involved the information for academic year 1989-90 for the first three questions, which requested information about the numbers of students enrolled who identified themselves to the institution as deaf or hard of hearing, and the numbers of deaf and hard of hearing students served at the institution during each of the last 4 academic years. Because one of the major reasons for conducting this survey was to make national estimates of these numbers, imputations for item nonresponse were made for questions 1b, 2b, and 3, which each requested information for academic years 1989-90, 1990-91, 1991-92, and 1992-93. The imputation procedures involved a combination of hot-deck imputation for institutions missing data for all 4 years (1989-90 through 1992-93), and application of subsequent years' data to previous years, adjusted by the average rate of change of similar institutions (based on sampling strata), for institutions that provided data for one or more of the 4 years. Hot-deck imputation selects a donor value from another institution with similar characteristics to use as the imputed value. Thus, the institutions were sorted by strata and within strata by total institution size before beginning imputation. No institution was used as a donor more than once.
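A minimal sketch of this sorted hot-deck procedure is shown below: records are ordered by stratum and size, a missing value borrows from the nearest preceding complete record in the same stratum, and no donor is reused. The field names and values are hypothetical, not the actual PEQIS file layout.

```python
# Hypothetical institution records; None marks a missing response.
records = [
    {"stratum": 1, "size": 500,  "deaf_students": 3},
    {"stratum": 1, "size": 800,  "deaf_students": None},
    {"stratum": 2, "size": 700,  "deaf_students": 4},
    {"stratum": 2, "size": 900,  "deaf_students": None},
]

# Sort by stratum, then by institution size within stratum.
records.sort(key=lambda r: (r["stratum"], r["size"]))

used_donors = set()   # each donor may be used at most once
last_donor = None     # index of the most recent complete record

for i, rec in enumerate(records):
    if rec["deaf_students"] is not None:
        last_donor = i
    elif (last_donor is not None
          and records[last_donor]["stratum"] == rec["stratum"]
          and last_donor not in used_donors):
        # Borrow the donor's reported value as the imputed value.
        rec["deaf_students"] = records[last_donor]["deaf_students"]
        used_donors.add(last_donor)
```

A record with no eligible unused donor in its stratum would be left missing and handled by the other procedure the text describes (carrying adjacent years' data with a strata-based rate-of-change adjustment).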
The response data were weighted to produce national estimates (see Table 13). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability.
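The two weighting adjustments named above -- for variable probabilities of selection and for differential nonresponse -- can be sketched as a base weight (the inverse of the selection probability, constant within a stratum since selection was equal-probability there) multiplied by a nonresponse adjustment within the stratum. The counts are hypothetical, not the actual PEQIS weighting specification.

```python
# Hypothetical stratum counts: (frame size, sampled, responded).
strata = {
    "A": (1200, 300, 290),
    "B": (800,  200, 180),
}

weights = {}
for s, (frame_n, sampled_n, resp_n) in strata.items():
    base_weight = frame_n / sampled_n   # 1 / selection probability
    nr_adjust = sampled_n / resp_n      # inflate for nonrespondents
    weights[s] = base_weight * nr_adjust

# With these weights, the weighted respondent count in each stratum
# reproduces the stratum's frame size.
check = {s: weights[s] * strata[s][2] for s in strata}
```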
The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in collection of the data. These errors can sometimes bias the data. Nonsampling errors may include such problems as misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.
To minimize the potential for nonsampling errors, the questionnaire was pretested with respondents at institutions like those that completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics and the Office of Special Education and Rehabilitative Services (OSERS). Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Data were keyed with 100 percent verification.
The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of institutions reporting that the institution provided support services to deaf or hard of hearing students in 1989-90 through 1992-93 is 37 percent, and the estimated standard error is 1.5 percent. The 95 percent confidence interval for the statistic extends from [37 - (1.5 times 1.96)] to [37 + (1.5 times 1.96)], or from 34.1 to 39.9 percent. Tables of standard errors for each table in the report are provided in Appendix A.
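The worked example above is direct arithmetic on the reported estimate and its standard error:

```python
# 95 percent confidence interval for the example in the text:
# estimate 37 percent, standard error 1.5 percent.
estimate, se, z = 37.0, 1.5, 1.96
lower = estimate - z * se
upper = estimate + z * se
print(round(lower, 1), round(upper, 1))  # 34.1 39.9
```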
Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variance of the statistic (Wolter 1985, Chapter 4). To construct the replications, 52 stratified subsamples of the full sample were created and then dropped one at a time to define 52 jackknife replicates (Wolter 1985, 183). A computer program (WESVAR), available at Westat, Inc., was used to calculate the estimates of standard errors. The software runs under IBM/OS and VAX/VMS systems.
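A minimal delete-one-group jackknife sketch of this idea follows, for the simple case of a sample mean on simulated data. It ignores the stratified replicate construction and weighting that WESVAR applies; it only illustrates the mechanics of dropping each of 52 subsamples in turn and measuring the spread of the replicate estimates around the full-sample estimate.

```python
import random

random.seed(1)
data = [random.gauss(50, 10) for _ in range(104)]  # simulated responses

G = 52
groups = [data[g::G] for g in range(G)]  # 52 interleaved subsamples

def mean(xs):
    return sum(xs) / len(xs)

full_est = mean(data)

# Recompute the statistic with each subsample dropped in turn.
replicates = []
for g in range(G):
    kept = [x for i, grp in enumerate(groups) if i != g for x in grp]
    replicates.append(mean(kept))

# Delete-one-group jackknife variance (cf. Wolter 1985, Chapter 4):
# scaled sum of squared deviations of replicates about the full estimate.
var = (G - 1) / G * sum((r - full_est) ** 2 for r in replicates)
se = var ** 0.5
```

For a sample mean this jackknife standard error is close to the textbook s/sqrt(n); its value is in reproducing the same recipe for statistics whose variance has no simple closed form under a complex design.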
The test statistics used in the analysis were calculated using the jackknife variances and thus appropriately reflected the complex nature of the sample design. In particular, an adjusted chi-square test using Satterthwaite's approximation to the design effect was used in the analysis of the two-way tables (e.g., see Rao and Scott 1984). Finally, Bonferroni adjustments were made to control for multiple comparisons where appropriate. For example, for an "experiment-wise" comparison involving g pairwise comparisons, each difference was tested at the 0.05/g significance level to control for the fact that g differences were simultaneously tested.
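The Bonferroni rule described above is simple to state in code: each of g simultaneous pairwise comparisons is tested at the 0.05/g level. The p-values here are hypothetical.

```python
# Bonferroni adjustment for g simultaneous pairwise comparisons:
# test each difference at alpha / g.
alpha = 0.05
p_values = [0.001, 0.04, 0.012, 0.3]  # hypothetical pairwise p-values
g = len(p_values)

per_test_level = alpha / g  # 0.05 / 4 = 0.0125
significant = [p < per_test_level for p in p_values]
print(per_test_level, significant)  # 0.0125 [True, False, True, False]
```

Note that a comparison with p = 0.04 would pass an unadjusted 0.05 test but fails the adjusted 0.0125 threshold, which is exactly the protection against spurious "experiment-wise" findings that the adjustment provides.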
When OSERS requested this PEQIS survey, they began with a long list of the types of information that they would like to obtain. Included on this list were: information about deaf and hard of hearing students by hearing level, academic level, full-time/part-time status, and race/ethnicity; a question about whether the respondent was aware of any deaf and hard of hearing students enrolled at the institution who did not identify themselves to the institution, and if so, how many and how the respondent became aware of these students; certificates and degrees awarded to deaf and hard of hearing students; and the availability of, requests for, and provision of a long list of support services.
In the early stages of questionnaire development it became clear that the question about the availability and provision of support services to deaf and hard of hearing students was problematic for two reasons. First, some of the services (e.g., personal counseling services, employment placement services) are available to all students on campus, not just to deaf and hard of hearing students. Second, if an institution only rarely enrolls a deaf or hard of hearing student, needed services are located and provided on an as-needed basis -- which is different from the concept of a service being "available" at an institution, since availability implies that the service delivery mechanism is already in place. Because of these issues, the question was changed to ask about the provision of a small number of support services designed for deaf and hard of hearing students (and not about availability and requests for services).
The questionnaire was then sent to representatives at institutions in the PEQIS panel for feedback about the availability of the requested data. All respondents stressed that they only have information about students with disabilities who have voluntarily chosen to identify themselves to the institution as having a disability. Thus, none of the institutions could respond to the questions about deaf and hard of hearing students who did not identify themselves to the institution. Information about certificates and degrees awarded, full-time/part-time status, and race/ethnicity could be provided by many of the institutions, but the time required to do so far exceeded the 30-minute PEQIS response burden. The major reason was that student records would have to be searched (by computer or manually, depending on the school) to locate and compile this information. Based on the feedback received from this review by institutions, the questionnaire was revised, an NCES questionnaire review meeting was held, and a pretest was conducted with institutions in the PEQIS sample. Only minor changes, mostly in the questionnaire format, were needed after the pretest. The final questionnaire is provided in Appendix B.
The number of students who identified themselves to the institution as deaf or hard of hearing as estimated by this PEQIS survey (20,040 in 1992-93) is much lower than the number of students who reported that they had a hearing impairment in a recent student self-report survey. The 1989-90 National Postsecondary Student Aid Study (NPSAS:90) asked almost 70,000 students enrolled in all kinds and levels of postsecondary education to indicate if they had a hearing impairment or any of several other kinds of disabilities. The data were then weighted to provide national estimates. Based on these self-reports, NPSAS:90 estimated that there were 258,197 hearing impaired students enrolled in 2-year and 4-year postsecondary education institutions in 1989-90 (U.S. Department of Education, October 1993). The difference in the numbers of students with hearing impairments in the NPSAS:90 self-report data and the number of deaf and hard of hearing students in the PEQIS institutional level data indicates that there may be many students with some degree of hearing impairment who do not identify themselves to the institution as deaf or hard of hearing.12 Based on these numbers, it appears that only about 8 percent of the students who report that they have a hearing impairment identify themselves to the institution as deaf or hard of hearing.
However, studies of hearing impaired students at the elementary and secondary levels yielded numbers much closer to the PEQIS numbers than to the NPSAS numbers. For example, the Office of Special Education and Rehabilitative Services of the U.S. Department of Education submits an annual report to Congress, as required by the Individuals with Disabilities Education Act (IDEA), about the numbers of children and youth with disabilities receiving special education and related services under IDEA and through Chapter 1 of the Elementary and Secondary Education Act (ESEA). Data about the number of children and youth receiving these services are collected by the U.S. Department of Education from the states. For the 1989-90 school year, reports indicated that 41,003 hearing impaired and 813 deaf-blind students were served under IDEA, and 17,161 hearing impaired and 821 deaf-blind students were served under ESEA (U.S. Department of Education 1991). Another source of information at the elementary and secondary level is the annual survey conducted by the Center for Assessment and Demographic Studies at Gallaudet University. This study, referred to as the CADS survey, collects data from schools, with teachers and administrators asked to identify children with hearing impairments. In 1989-90, the CADS survey identified 46,666 children and youth as hearing impaired (Schildroth and Hotto 1991).
A study conducted by Gallaudet College (now University) in the early 1980s also produced estimates of the number of hearing impaired students in colleges that are much closer to the estimates in the PEQIS survey than to those in the NPSAS studies. The Gallaudet study, which contacted institutions for information, estimated that there were 10,400 hearing impaired students enrolled in American higher education institutions in 1978, including Gallaudet College and the National Technical Institute for the Deaf (NTID), which together enrolled about 2,000 students (Armstrong and Schneidmiller 1983). As discussed by the authors of the Gallaudet study, the National Center for Education Statistics, based on information collected from institutions, estimated that there were 11,256 "acoustically impaired" students attending U.S. colleges and universities in 1978, excluding Gallaudet and NTID.
There are many differences in methodologies and populations of interest in these various studies. In particular, the NPSAS numbers were student self-reports, while the other sources of data were obtained from institutions and states. Since the PEQIS study was designed to obtain estimates from institutions about students who had identified themselves to the institution as deaf or hard of hearing and about the services the institutions provided to these students, and was not designed as a comparative study, the reasons for the differences in the estimates from these various sources cannot be determined from the available data.
The survey was performed under contract with Westat, Inc., using the Postsecondary Education Quick Information System (PEQIS). This is the first PEQIS survey to be conducted. Westat's Project Director was Elizabeth Farris, and the Survey Manager was Laurie Lewis. Bernie Greene was the NCES Project Officer. The data were requested by Robert Davila, then Assistant Secretary of the Office of Special Education and Rehabilitative Services, U.S. Department of Education.
This report was reviewed by the following individuals:
12 NPSAS:87, which asked separately about deafness and being hard of hearing, estimated about the same number of deaf and hard of hearing students as NPSAS:90 estimated for hearing impaired students, indicating that the wording of the questions does not account for the very large differences in the estimates between NPSAS and PEQIS.