Education Statistics Quarterly
Vol 2, Issue 3, Topic: Methodology
Quality Profile for SASS Rounds 1 through 3: 1987 through 1995
By: Graham Kalton, Marianne Winglee, Sheila Krawchuk, and Daniel Levine
 
This article was excerpted from the first and last chapters of the Technical Report of the same name. The report examines the quality of sample survey data from the NCES Schools and Staffing Survey (SASS) system.
 
 

The SASS System

The Schools and Staffing Survey (SASS) is an integrated system of periodic sample surveys providing information about teachers and administrators and the general condition of America's public and private elementary and secondary schools. Sponsored by the National Center for Education Statistics (NCES) of the U.S. Department of Education, SASS offers a source of data for policymakers, educators, education researchers, and the public.

SASS has been conducted three times: round 1 in 1987-88, round 2 in 1990-91, and round 3 in 1993-94. Round 4 is being fielded in the 1999-2000 school year. At each round, NCES reviews the SASS content to expand, retain, or delete topics covered in the previous administration, maintaining the survey's capability for trend analysis while adding new topics to address current concerns. The survey data are collected by mail, with telephone follow-up of nonrespondents.

Each round of SASS includes several core surveys, plus the Teacher Follow-up Survey (TFS), which is conducted the year after the core surveys. In the first two rounds, SASS comprised five components: the "School Survey," the "School Administrator Survey" (now known as the "School Principal Survey"), the "Teacher Demand and Shortage Survey" (TDSS), the "Teacher Survey," and the TFS. In round 3, SASS added the "Library Media Center Survey," the "Library Media Specialist/Librarian Survey," and the "Student Records Survey," resulting in a system of eight surveys in total. Round 4 administers six of these surveys, excluding the "Library Media Specialist/Librarian Survey" and the "Student Records Survey."



Purpose and Content of This Report

This report summarizes what is known about the quality of data from the SASS component surveys and provides information about the survey design and procedures for each survey. More specifically, the report reviews past and ongoing research on the quality of SASS data, with a view to identifying gaps in our knowledge and establishing priorities for future research activities. This information will be of interest to users of SASS data, to persons responsible for various aspects of the design and operation of SASS, and to anyone interested in the quality of survey data, especially data from mail surveys and surveys related to education.

The report draws on a large body of literature and provides references for readers who want more detailed information.

As the second edition of the Quality Profile for SASS, this report updates the first edition (Jabine 1994), which covered rounds 1 and 2. The current report discusses rounds 1 through 3. It also mentions some new features of round 4 but does not cover that round, because data collection had not yet been completed when the report was prepared.

Each component survey is examined in a separate chapter of the report. Topics discussed for each of the surveys include potential sources of error and their possible impact on the accuracy of survey estimates. The final chapter looks at SASS as a whole, broadening the discussion of quality to cover issues of relevance, accessibility, timeliness, and periodicity. This chapter also combines key findings from earlier chapters to identify areas where efforts for methodological improvements might be most effectively directed and where further information is needed for the assessment of survey quality.



Relevance, Accessibility, and Timeliness of SASS Data

The ultimate goal of conducting SASS is to provide the data required by policymakers and researchers to understand the characteristics of the U.S. elementary and secondary education system. In order to do this, SASS must collect the relevant data, make the results and the data files readily accessible, and provide data that are as up to date as possible.

Relevance

By maintaining close contacts with the broad user community, NCES attempts to ensure that SASS collects the data needed to inform policy decisions and stimulate research. Before each round, NCES enlisted the help of many experts and specialists in the education research and policy communities to examine SASS and propose changes to its content and methods. In addition, the Advisory Council on Education Statistics (ACES), the advisory panel for NCES, reviewed the plans for SASS at each round, and the SASS Technical Review Panel met regularly to discuss the recommendations made by other groups and to provide a broad evaluation of the plans for survey content, design, analysis, and reporting.

NCES introduced SASS in 1987 in response to needs for information about critical aspects of teacher supply and demand, the qualifications and working conditions of teachers and principals, and the basic conditions in schools as workplaces and learning environments. Although changes in design and procedures were made for round 2, the basic subject content remained essentially unchanged. For round 3, however, the content was expanded and modified; for example, the Student Records, Library, and Librarian Surveys were added.

The 6-year period between rounds 3 and 4 provided the opportunity for a major review of the content and purposes of SASS, in light of the many changes in education policy and thinking since its inception. The redesign of SASS engaged many segments of the education research and policy communities. Emerging from diverse redesign efforts, round 4 of SASS shifts emphasis from teacher supply and demand issues to the measurement of teacher and school capacity, both objectives of the recent school reform agenda. To measure teacher capacity, the redesigned SASS examines teacher qualifications, teacher career paths (including induction experience), and professional development.

To measure school capacity, SASS concentrates on school organization and decision making, curriculum and instruction, parental involvement, school safety and discipline, and school resources.

Accessibility

The value of a survey depends on the extent to which its data are used, which in turn depends on the accessibility of the survey results and the survey data files. Moreover, the proper use of the survey data requires the availability of good documentation.

Publications. Results from SASS are published in descriptive reports, analytic reports, and issue briefs. The descriptive reports present basic information about schools, principals, and teachers; the analytic reports examine issues of particular interest in more detail; and the issue briefs provide short accounts (about 2 pages) on topics of current concern.

NCES recently conducted a study to explore the satisfaction of key customers with SASS publications by means of individual interviews with 30 selected representatives from state education agencies and 19 other key customers (Rouk, Weiner, and Riley 1999). In general, these customers considered the publications to be easily accessible, the content appropriate for their data needs, and the presentations quite clear. In addition, focus group discussions were held with individuals from federal and state government, research, and data management organizations to obtain reactions concerning the appropriateness, usability, and accessibility of two key SASS publications: A Statistical Profile: 1993-94 (Henke et al. 1996) and SASS by State: 1993-94 (De Mello and Broughman 1996). In general, the comments of the participants on the format and content of SASS tabulations were favorable; they also provided suggestions for additional tabulation detail.

Data files. In addition to the publication of results from SASS, NCES makes microdata available for different or more detailed analyses by users. The public-use data files provided to the public are a subset of the full data set. Because of the need to protect the confidentiality of the respondents, some variables are suppressed and the level of detail for others is reduced. Users who need the full data set for detailed analyses can apply for a license to access the SASS restricted-use data. These unabridged data are not subject to disclosure avoidance procedures and, hence, provide a richer database to support detailed analyses.

Data documentation. To assist users, NCES provides a wide variety of documentation. For each round, a Data File User's Manual (NCES 1991a, 1991b, 1991c, 1991d; Faupel, Bobbitt, and Friedrichs 1992; Gruber, Rohr, and Fondelier 1993, 1996; Whitener et al. 1998) provides a comprehensive source of information about each of the surveys that constitute SASS. In similar fashion, a Sample Design and Estimation report (Kaufman 1991; Kaufman and Huang 1993; Abramson et al. 1996) provides a detailed description of the sample design and estimation procedures, including variance computation, for each of the surveys in each round. Additional information about SASS procedures and data quality is contained in the many SASS methodological publications.

The current SASS documentation is primarily cross-sectional in nature, providing factual information for each individual round. This form of documentation is well suited to the needs of those who analyze a single round of SASS. Each user's manual contains a brief discussion of changes from the previous round; however, it may not fully satisfy the needs of those who use two or more rounds of SASS data to examine change over time, and of methodologists and others who want to understand and assess the evolution of the SASS methodology. As data from more rounds of SASS become available, interest in documentation that provides a linkage across rounds will increase.

Timeliness and periodicity

Timely production of results. Since the inception of SASS in 1987, significant effort has been devoted to producing the results in a timely fashion. Experience to date indicates that steady strides have been made in improving timeliness. In round 1, for example, information for principals, schools, and teachers became available about 2 years after the completion of data collection; school district information became available in about 3 years. Each succeeding round has improved on this timing: data from round 2 were first published in January 1993, about 1½ years after the end of data collection, and round 3 publication began in June 1995, only about 12 months after the end of data collection. Plans for round 4 call for the data to become available even sooner, in spring 2001, only some 8 months following completion of data collection.

Factors that have contributed to this positive trend of providing the data more quickly include the growing familiarity with the subject matter, which leads to standardization and greater efficiency in data processing; the repetition of data collection, which permits investment in technology (such as computer-assisted telephone interviewing and computer editing); and improvements in the collection procedures and instruments, which result in more complete returns and a shorter time period required to collect the information. Improvements in data processing also have contributed significantly; specifically, capturing data through imaging rather than data keying and using modular software systems throughout the process have led to accelerated processing.

Periodicity of the surveys. SASS was designed to provide an ongoing and consistent source of data on the teaching workforce and school population. Rounds 1 to 3 of SASS were conducted at 3-year intervals, in 1987-88, 1990-91, and 1993-94. The interval between round 3 and round 4, administered in 1999-2000, was extended to 6 years, in part because of budget limitations. The next round is currently planned for 2003-04, and SASS is to be conducted on a 4-year cycle thereafter, as suggested by a survey of users and discussion by the SASS Technical Review Panel and ACES. The following reasons support this conclusion:

  • Because SASS is a unique source of national and state representative data on important topics in education reform, users considered that a 5-year cycle would leave too long a gap for SASS to maintain its currency and provide timely data to support policy planning.
  • A 4-year cycle beginning with 1999-2000 and the next administration in 2003-04 would allow SASS to coincide with the cycle of presidential elections and with the reauthorization schedule for the major elementary and secondary education legislation. This schedule would allow data from SASS to become available around the start of each presidency when the government and policymakers need data to inform the planning of new initiatives.
  • A 4-year cycle for SASS would also allow the possibility of administering SASS and the National Assessment of Educational Progress (NAEP) student assessment at the same time in some of the same schools (Skaggs and Kaufman in press). A SASS-NAEP linkage is being conducted as a research and development project in 1999-2000 to enrich the database for research. If the linkage is successful and the results prove useful, a similar linkage may be sought in future rounds of SASS. A 4-year cycle for SASS would allow the possibility for NAEP and SASS to be synchronized again in 2003-04.


Quality of SASS Data

The main sources of potential error for SASS are sampling error, coverage error, nonresponse, and measurement error. The following discussion reviews these potential error sources in order to identify areas for methodological improvement and for further methodological study.

Sampling error

Each of the individual surveys in SASS is designed to produce certain key estimates, often for many different domains, with specified levels of precision. Sample sizes are chosen to satisfy these precision requirements. Given this situation, a key issue with regard to sampling error is the efficiency of the sample design.
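As a rough illustration of how a precision requirement translates into a sample size, consider the standard proportion-based calculation inflated by a design effect. This is a sketch only; the proportion, margin, and design effect below are invented and are not SASS specifications.

```python
import math

def required_sample_size(p, margin, deff=1.0, z=1.96):
    """Sample size needed to estimate a proportion p to within +/- margin
    at roughly 95 percent confidence, inflated by a design effect (deff)
    to reflect the precision lost to a complex, clustered design."""
    n_srs = (z ** 2) * p * (1 - p) / margin ** 2  # simple-random-sampling size
    return math.ceil(n_srs * deff)

# Invented figures: a 50 percent characteristic estimated to within
# +/- 3 percentage points in one domain, assuming a design effect of 1.5.
print(required_sample_size(p=0.5, margin=0.03, deff=1.5))  # -> 1601
```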

The assessment of sampling efficiency is complex because all the component surveys are interrelated, with the School Survey serving as the sampling frame for the other surveys. There are two advantages of the interrelated sample design: first, data from the different surveys can be linked for analysis (for example, data from the principal and teachers in the same school can be analyzed together); and, second, there are some cost savings in sample selection. However, the interrelated design places a high response burden on sampled schools, which may harm response rates, and it involves compromises in sample design.

Compromises in the current sample design. Because the sample of schools selected for the School Survey is the starting point for the samples for all the other surveys, its design places constraints on the sample designs for the other surveys. The sample design for the School Survey is a compromise design that takes account of the needs of both that survey and the Teacher Survey. Schools are sampled with probability proportional to a measure of size that is the square root of the number of teachers, as a compromise between equal probability, which would be appropriate for the School Survey, and probability proportional to the number of teachers, which would be appropriate for the Teacher Survey.
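The compromise measure of size can be made concrete with a small sketch of systematic probability-proportional-to-size selection. This is illustrative only, not the production SASS algorithm; the frame below is invented, and the sketch assumes that no school's measure of size exceeds the sampling interval.

```python
import random

def pps_systematic_sample(frame, n, size_of):
    """Systematic probability-proportional-to-size selection of n schools.
    size_of maps a school to its measure of size. Assumes no single measure
    of size exceeds the sampling interval (illustrative sketch only)."""
    sizes = [size_of(s) for s in frame]
    interval = sum(sizes) / n
    point = random.uniform(0, interval)  # first random selection point
    sample, cumulative = [], 0.0
    for school, size in zip(frame, sizes):
        cumulative += size
        if point < cumulative:           # selection point falls in this school
            sample.append(school)
            point += interval            # step to the next selection point
    return sample

# Hypothetical frame: the square-root measure damps the oversampling of
# large schools relative to selection proportional to the teacher count.
frame = [{"id": i, "teachers": t} for i, t in enumerate([5, 12, 40, 90, 250])]
print(pps_systematic_sample(frame, 2, lambda s: s["teachers"] ** 0.5))
```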

Use of the square root of the number of teachers as a compromise measure of size also has implications for the other SASS components. For example, sampling schools with equal probability would be more appropriate for the Principal, Library, and Librarian Surveys, whereas sampling with probability proportional to the number of teachers (which may be roughly proportional to the number of students) may be more suitable for the Student Records Survey.

The choice of a measure of size for sampling schools is related to the form of the estimates to be produced. Thus, an equal probability sample of schools is appropriate for the School, Principal, Library, and Librarian Surveys if the estimates produced are expressed in terms of numbers or percents of schools, principals, libraries, or librarians with given characteristics. However, it is often more relevant to express the estimates in terms of numbers or percents of students. In discussing this issue, Kish (1965, p. 418) gives the example that, around 1957, one-half of American high schools offered no physics, but that these schools accounted for only 2 percent of high school students. For most purposes, the 2 percent figure is the more meaningful one. An efficient design for student-based estimates would sample schools with probability proportional to the number of students, as distinct from the equal probability sampling that is appropriate for school-, principal-, library-, or librarian-based estimates.
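A miniature version of Kish's example makes the point numerically; the school counts and enrollments below are invented for illustration.

```python
# Kish's illustration in miniature: 50 small schools (40 students each)
# offering no physics and 50 large schools (2,000 students each) offering it.
schools = ([{"students": 40, "physics": False}] * 50 +
           [{"students": 2000, "physics": True}] * 50)

pct_schools = 100 * sum(not s["physics"] for s in schools) / len(schools)
students_without = sum(s["students"] for s in schools if not s["physics"])
pct_students = 100 * students_without / sum(s["students"] for s in schools)

print(f"{pct_schools:.0f}% of schools offer no physics")       # 50%
print(f"{pct_students:.0f}% of students attend such schools")  # 2%
```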

Another example of compromise in sample design relates to the sample allocation used in the School Survey to provide domain estimates of specified precision. The smaller Library, Librarian, and Student Records Surveys are not designed to provide all these domain estimates, and therefore they subsample in a manner that attempts to redress the unequal allocation of the sample across domains. However, this subsampling cannot fully compensate for the domain oversampling.

Evaluation of the sample design. No extensive evaluation of the interrelated sample design for the surveys in SASS has been conducted. Since SASS is itself evolving and since circumstances are changing, a broad-ranging review of the interrelated design would be advisable periodically. Such a review could determine whether all the survey components should remain interrelated as at present or whether some of the surveys should be conducted separately.

Assuming that the interrelated design is retained, research could usefully be conducted based on data collected in the first four rounds to determine whether any improvements in sampling efficiency can be obtained. For example, early research led to the decision to sample schools first and then select the local education agencies (LEAs) of sampled schools. This decision could be reviewed using the data now available. The suitability of the current measure of size for sampling schools could also be assessed. A full review of the SASS interrelated sample design would be a complex undertaking since many design choices affect different survey components in different ways, but even some limited evaluations may lead to useful gains in sampling efficiency.

Coverage error

The ideal sampling frame for a survey would include every element in the survey's target population with a single listing for each element. In practice, this ideal is rarely achieved, and there is clear evidence that it is not achieved in the component surveys of SASS.

Sampling frames. The issue of school coverage is particularly important in SASS because of the nested structure of the surveys. In recent rounds, the Common Core of Data (CCD), supplemented by lists of schools from the Bureau of Indian Affairs (BIA) and the Department of Defense (DoD), has served as the sampling frame for public schools, and the Private School Survey (PSS), supplemented by updated lists of affiliation members, has been used for private schools. In round 4, an additional frame has been included for charter schools. Since the CCD and PSS are used for several NCES surveys, their coverage is the subject of broad interest. Several recent studies have evaluated their coverage, and continuous efforts will be made to improve them.

An issue of concern to SASS is that inevitably the universe frames are out of date for the school year of the SASS surveys (e.g., the public school sample for round 3 was selected from the CCD for school year 1991-92, whereas the reference period for that round of SASS was school year 1993-94). As a result, new schools and recent school splits and mergers are not reflected on the frames. It would be useful to determine the magnitude of the coverage problem from this source and also to evaluate the quality of the list of charter schools.

Definitional usage. A significant problem with coverage is that a survey's definition of the units to be covered may not conform to the structure and terminology used in different parts of the population. Thus, for example, some states consider certain administrative groups of schools to be single schools, whereas SASS defines each individual administrative unit to be a school. This kind of problem affects both the frame listings and the data reported for a sampled "school." The definitional problem arises particularly with students and teachers, since the sampled schools provide the listings of these individuals. It is a particularly severe problem in the teacher listing operation, since defining who is to be included as a teacher is not straightforward. In this situation, there is the risk that the person completing the form will use the school terminology for a teacher rather than the SASS definition.

Teacher Survey procedures. A particular concern with coverage in the Teacher Survey relates to the operational procedures that define the sample, since schools are asked to provide the listings of their teachers about 2 or 3 months before the Teacher Survey questionnaires are mailed out. Teachers who are sampled from the teacher listing forms but have left the school by the time of the Teacher Survey data collection are treated as out of scope, while teachers joining the school in the interim have no chance of selection. Thus, the survey's coverage is neither teachers at the beginning of the school year nor teachers at the time of data collection. No study of teacher mobility within a school year has been conducted to date to assess the magnitude of the problem.

Self-classification as out of scope. Additional noncoverage problems may also occur in the School Principal, Library, and Librarian Surveys, where some schools classify themselves as out of scope (having no principal, library, or librarian). A study to determine the extent of self-classification errors would be useful.

Nonresponse

Rates of response to the surveys. The response rates to the various SASS surveys have generally been high for public schools. For example, in round 3 the public school response rates for all the surveys that were conducted in a single phase of data collection were over 90 percent. The Teacher Survey and the Student Records Survey had lower response rates, at just over 80 percent, as a result of two opportunities for nonresponse. In the Teacher Survey, nonresponse could have occurred either because the school did not provide the teacher list for sampling teachers or because a sampled teacher failed to respond. In the Student Records Survey, nonresponse could have occurred either because a school did not provide a teacher list and class rosters for sampled teachers or because the school failed to return the completed questionnaires for sampled students. The lowest public school response rate has been that for the TFS. Although a high proportion of teachers responding to the Teacher Survey also respond to the TFS, the additional phase of data collection leads to some further losses; for round 3, the overall TFS response rate was 77 percent.
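The arithmetic behind the two-phase rates is simple compounding. The component rates below are illustrative assumptions, not published SASS figures; they show only how two phases of about 90 percent each yield an overall rate just over 80 percent.

```python
# Overall response rates for the two-phase surveys compound multiplicatively.
list_rate = 0.90     # assumed: school supplies the teacher list for sampling
teacher_rate = 0.90  # assumed: sampled teacher returns the questionnaire
print(f"Overall Teacher Survey response rate: {list_rate * teacher_rate:.0%}")
# -> Overall Teacher Survey response rate: 81%
```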

For private schools, the response rates for all the surveys in SASS have been markedly lower than those for public schools. In round 3, only the School and Principal Surveys had response rates of over 80 percent. The response rates for the other surveys were 70 percent or somewhat higher, except for the TFS, where the overall response rate was only 64 percent.

As with any repeated survey, continuing attention needs to be given in SASS to maintaining and, if possible, increasing response rates. Experimental studies could usefully be conducted to test methods of improving response rates, particularly for the private school components of SASS. A range of possible methods could be considered, including the use of endorsements by different sponsoring organizations targeted at different types of schools, the use of incentives, and the use of shorter questionnaires that are easier to complete.

To achieve its final response rates, SASS employs a combination of mail questionnaires followed by telephone interviews with mail nonrespondents and field follow-up, if necessary. The per-unit cost for telephone data collection is much higher than for mail data collection. Also, there are indications that mail responses are of higher quality than telephone responses (although this finding is based on nonexperimental data). For both these reasons, it is desirable to maximize the mail response rates. Using reminder postcards and allowing a longer interval for mail returns in round 3 may have contributed to higher mail response rates in that round. Continued efforts to make the questionnaires and the accompanying material more user-friendly may also help to increase mail response rates.

Rates of response to individual items. Item nonresponse rates vary greatly. Many items have high response rates, but there are others with low response rates. Some low response rates are likely to result from the difficulty or, in a few cases, the sensitivity of the information requested. Others appear to be caused by respondents' failure to navigate correctly through a questionnaire's skip instructions. It may be possible to reduce some of those problems by revising the content and wording of questions and by changing the format and layout of the questionnaires.

Recent research on the design of self-completion questionnaires deals with the principles of design for navigating the respondent through the questionnaire, as well as more generally for obtaining responses of high quality. In addition, advances in printing methods facilitate the use of tools, such as color, shading, and different font sizes, that increase the available design options. Attention to ensuring that the SASS questionnaires are as user-friendly as possible not only addresses the item nonresponse problem; it may also reduce total nonresponse, obtain more valid responses, and reduce the number of changes made in editing.

Measurement error

A variety of methods have been used to investigate measurement errors in SASS, including reinterviews, a record check study, in-depth interviews using cognitive research techniques, methodological experiments, reviews of completed questionnaires, analysis of errors and inconsistencies detected during data processing, and aggregate comparisons of survey estimates with estimates from external sources (which deal with all types of error in combination). A variety of methods are needed since all methods have their limitations.

Reinterviews. The reinterview program is a core component of the measurement error research in SASS, being applied in most of the surveys at each round. This program has been valuable in identifying items with high response variance, and many of these items have been revised in later rounds in an attempt to reduce the response variance. However, reinterviews have two main limitations. First, they measure only inconsistency of response, and thus fail to identify cases where a respondent consistently gives a wrong answer. Second, by themselves, reinterviews fail to indicate the reasons for inconsistency.
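For a yes/no item, reinterview results are conventionally summarized by the gross difference rate and an index of inconsistency. The sketch below applies the standard form of these statistics to invented data; it is not the SASS reinterview methodology itself.

```python
def index_of_inconsistency(orig, reint):
    """Response-variance summary for a yes/no item, in the spirit of
    reinterview analyses (a sketch, not the SASS formulas). orig and
    reint are parallel lists of 0/1 responses from the original
    interview and the reinterview."""
    n = len(orig)
    gross_diff_rate = sum(a != b for a, b in zip(orig, reint)) / n
    p1, p2 = sum(orig) / n, sum(reint) / n
    # Expected disagreement if the two responses were independent draws:
    expected = p1 * (1 - p2) + p2 * (1 - p1)
    return gross_diff_rate / expected if expected else 0.0

# Hypothetical data: 2 disagreements in 10 cases.
orig  = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
reint = [1, 0, 0, 0, 1, 0, 1, 1, 1, 1]
print(round(index_of_inconsistency(orig, reint), 2))  # -> 0.42
```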

A common finding from the reinterview program across all the SASS surveys has been the low level of reliability for opinion questions. This finding is consistent with the results for opinion questions in other surveys. Such unreliability may be acceptable for some limited forms of analysis, but is problematic for more detailed analysis. For the latter type of analysis it may be necessary to improve the reliability with which a construct is measured by creating an index from the responses to several questions relating to that construct.
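One conventional measure of the reliability gained by combining several related questions into an index is Cronbach's alpha. The sketch below applies the standard alpha formula to invented opinion-item responses; it is offered only to illustrate the index-building idea mentioned above.

```python
def cronbach_alpha(items):
    """Internal-consistency reliability of an index built from several
    related questions. items: a list of equal-length response lists,
    one per question (standard alpha formula; illustrative sketch)."""
    k, n = len(items), len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 5-point opinion items measuring one construct: the combined
# index is more reliable than any single item alone.
q1 = [4, 5, 2, 3, 4, 1, 5, 2]
q2 = [5, 4, 2, 2, 4, 2, 5, 1]
q3 = [4, 4, 1, 3, 5, 1, 4, 2]
print(round(cronbach_alpha([q1, q2, q3]), 2))
```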

Record check studies. Record check studies are often valuable for examining measurement errors, but they also have their limitations. Most importantly, they can be used only when the relevant information is available on records and access can be obtained. Even when this is the case, there remain problems of erroneous matches and failures to match, incorrect information on the records, and differences between the definitions of the variable for the records and for the survey. For these various reasons, the only record check study conducted in SASS to date has been the teacher transcript record check study.

In-depth interviews. The attraction of a record check study is that it seeks to determine "true values" with which the survey responses can be compared (subject to the limitations indicated above). Another approach for obtaining true values is to conduct in-depth follow-up interviews for a subset of key items, such as the number of full-time-equivalent (FTE) teachers in the school, with extensive questioning and encouragement to respondents to consult records. Not only can this approach give true values (with some error), it can also sometimes identify the sources of error (e.g., counting a part-time teacher as a full-time teacher). This approach may be useful in a future pilot study and/or a future round of SASS.

Comparisons with other sources. Comparisons of SASS estimates with estimates from other sources provide an overall evaluation of the SASS estimates. However, the opportunities for such comparisons are very limited, and even when they can be made, they tend to be of limited value. Any differences observed may reflect definitional differences, differences in the time reference, errors in the other sources, or errors in SASS arising from any combination of noncoverage, nonresponse, sampling, measurement, or processing. As a result, the aggregate comparisons that have been made in rounds 1 to 3 of SASS have been useful in drawing attention to some major discrepancies, but have generally not been able to identify the causes of the discrepancies.

An extension of the aggregate comparison approach is to perform micro-level matching of SASS responses and similar data in record sources. This type of match may provide an understanding of the discrepancies and, hence, indicate whether changes should be made in SASS. For example, such a match conducted at the school level to compare SASS and CCD data on the numbers of FTE teachers found that schools often appeared to report headcounts, rather than FTE counts. Application of micro-level matches in other areas could prove equally useful.
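In essence, a micro-level match of this kind reduces to joining the two files on a school identifier and flagging large discrepancies for review. The sketch below is hypothetical throughout: the identifiers, field names, values, and the 15 percent tolerance are all assumptions, not the actual SASS-CCD matching procedure.

```python
# Hypothetical SASS and CCD extracts keyed by a common school identifier.
sass = {"A1": {"fte_teachers": 24.0}, "B2": {"fte_teachers": 31.0}}
ccd  = {"A1": {"fte_teachers": 18.5}, "B2": {"fte_teachers": 30.5}}

for school_id in sorted(sass.keys() & ccd.keys()):
    s = sass[school_id]["fte_teachers"]
    c = ccd[school_id]["fte_teachers"]
    ratio = s / c
    # A SASS count well above the CCD FTE count suggests the school may
    # have reported a headcount rather than an FTE count.
    flag = "possible headcount reported" if ratio > 1.15 else "consistent"
    print(f"{school_id}: SASS={s}, CCD={c}, ratio={ratio:.2f} ({flag})")
```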



Concluding Remarks

This report reviews a variety of error sources, providing quantitative measures of error where possible. However, in general, the effects that an error source may have on a survey estimate cannot be easily quantified. For instance, the lower the response rate, the greater the likelihood of a significant nonresponse bias, even after nonresponse adjustments have been made, but the magnitude of the bias in a particular estimate is unknown. Furthermore, it has not been feasible to combine all the indications of quality into an overall index of total survey error for a given survey estimate. Nevertheless, the information on quality presented in the report should help users to decide how much confidence to place in the estimates of interest to them and to determine how best to use the survey data in their analyses.

The report also suggests a number of possible research projects that may guide future methodological developments using the current approach to data collection. In a broader context, SASS will also need to keep in touch with technological advances in communications. In particular, the rapid advances taking place in the use of the Internet suggest that by round 5 or 6 of SASS the preferred mode of data collection may shift from a mail questionnaire to a Web-based questionnaire for several of the surveys. A number of special research studies will be needed to develop the new methods before such a change can be implemented in SASS data collection operations.



References

Abramson, R., Cole, C., Fondelier, S., Jackson, B., Parmer, R., and Kaufman, S. (1996). 1993-94 Schools and Staffing Survey: Sample Design and Estimation (NCES 96-089). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

De Mello, V.B., and Broughman, S.P. (1996). SASS by State: 1993-94 Schools and Staffing Survey: Selected State Results (NCES 96-312). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Faupel, E., Bobbitt, S., and Friedrichs, K. (1992). 1988-89 Teacher Followup Survey: Data File User's Manual (NCES 92-058). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Gruber, K., Rohr, C., and Fondelier, S. (1993). 1990-91 Schools and Staffing Survey: Data File User's Manual: Vol. I Survey Documentation (NCES 93-144-I). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Gruber, K.J., Rohr, C.L., and Fondelier, S.E. (1996). 1993-94 Schools and Staffing Survey: Data File User's Manual: Vol. I Survey Documentation (NCES 96-142). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Henke, R.R., Choy, S.P., Geis, S., and Broughman S.P. (1996). Schools and Staffing in the United States: A Statistical Profile: 1993-94 (NCES 96-124). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Jabine, T. (1994). Quality Profile for SASS: Aspects of the Quality of Data in the Schools and Staffing Surveys (SASS) (NCES 94-340). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Kaufman, S. (1991). 1988 Schools and Staffing Survey Sample Design and Estimation (NCES 91-127). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Kaufman, S., and Huang, H. (1993). 1990-91 Schools and Staffing Survey: Sample Design and Estimation (NCES 93-449). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Kish, L. (1965). Survey Sampling. New York: John Wiley & Sons, Inc.

National Center for Education Statistics. (1991a). 1987-88 Schools and Staffing Survey, Public and Private School Questionnaires, Base Year: Data File User's Manual (NCES 91-136g). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

National Center for Education Statistics. (1991b). 1987-88 Schools and Staffing Survey, Public and Private School Administrator Questionnaires, Base Year: Data File User's Manual (NCES 91-137g). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

National Center for Education Statistics. (1991c). 1987-88 Schools and Staffing Survey, Public and Private Teacher Demand and Shortage Questionnaires, Base Year: Data File User's Manual (NCES 91-021g). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

National Center for Education Statistics. (1991d). 1987-88 Schools and Staffing Survey, Public and Private School Teacher Questionnaires, Base Year: Data File User's Manual (NCES 91-139g). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Rouk, Ü., Weiner, L., and Riley, D. (1999). What Users Say About Schools and Staffing Survey Publications (NCES Working Paper 1999-10). U.S. Department of Education. Washington, DC: National Center for Education Statistics.

Skaggs, G., and Kaufman, S. (in press). The SASS-NAEP Linkage. Proceedings of the Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Whitener, S.D., Gruber, K.J., Rohr, C., and Fondelier, S. (1998). Schools and Staffing Survey: 1994-95 Teacher Followup Survey, Data File User's Manual Public-Use Version (NCES 98-232). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.


Data sources: The NCES Schools and Staffing Survey (SASS), 1987-88, 1990-91, and 1993-94; and Teacher Follow-up Survey (TFS), 1988-89, 1991-92, and 1994-95.

For technical information, see the complete report:

Kalton, G., Winglee, M., Krawchuk, S., and Levine, D. (2000). Quality Profile for SASS Rounds 1-3: 1987-1995 (NCES 2000-308).

Author affiliations: G. Kalton, M. Winglee, S. Krawchuk, and D. Levine, Westat.

For questions about content: contact Steve Kaufman (steve.kaufman@ed.gov).

To obtain the complete report (NCES 2000-308), call the toll-free ED Pubs number (877-433-7827) or visit the NCES web site (http://nces.ed.gov).
