Teacher Quality: A Report on The Preparation and Qualifications of Public School Teachers
NCES: 1999080
January 1999

Appendix A—Sample Methodology and Data Reliability

Table of Contents

List of Appendix Tables
List of Appendix Exhibits
Sample Selection
Respondent and Response Rates
Sampling and Nonsampling Errors
Variances
Definitions of Analysis Variables
Comparisons to the 1993-94 Schools and Staffing Survey
Calculations of Major Field of Study for a Bachelor's or Graduate Degree
Calculations of In-Field Teaching
Background Information

List of Appendix Tables

  • Table A-1: Number and percent of responding full-time public school teachers in the study sample and estimated number and percent of full-time public school teachers the sample represents, by selected school and teacher characteristics: 1998

  • Table A-2: Percent of full-time public school teachers with any undergraduate or graduate major in various fields of study, by selected school and teacher characteristics: 1998

  • Table A-3: Percent of full-time public school teachers with any undergraduate or graduate major in various fields of study, by selected school and teacher characteristics: 1993-94

List of Appendix Exhibits

  • Exhibit A-1: Match of main teaching assignment field with major and minor fields of study: FRSS 1998

  • Exhibit A-2: Match of main teaching assignment field with major and minor fields of study: SASS 1993-94

Sample Selection

The sample for the FRSS Teacher Survey on Professional Development and Training consisted of 4,049 full-time teachers in regular public elementary, middle, and high schools in the 50 states and the District of Columbia. To select the sample of teachers, a sample of 1,999 public schools was first selected from the 1994-95 NCES Common Core of Data (CCD) Public School Universe File. The sampling frame constructed from the 1994-95 CCD file contained 79,250 regular public schools. Excluded from the sampling frame were special education, vocational, and alternative/other schools, schools in the territories, overseas Department of Defense schools, and schools that were ungraded, had a highest grade lower than grade 1, or taught only adult education. The frame contained 49,955 regular elementary schools, 14,510 regular middle schools, and 15,785 regular high/combined schools. A school was defined as an elementary school if its lowest grade was less than or equal to grade 3 and its highest grade was less than or equal to grade 8. A middle school was defined as having a lowest grade greater than or equal to grade 4 and a highest grade less than or equal to grade 9. A school was considered a high school if its lowest grade was greater than or equal to grade 9 and its highest grade was less than or equal to grade 12. Combined schools were defined as having a lowest grade less than or equal to grade 3 and a highest grade greater than or equal to grade 9. High schools and combined schools were combined into one category for sampling.
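The grade-span rules above amount to a simple classification. As an illustrative sketch only (the function name and labels are hypothetical, not code used in the actual frame construction):

```python
def classify_school(lowest_grade: int, highest_grade: int) -> str:
    """Classify a school by its CCD grade span, following the rules
    described in the text (illustrative sketch, not NCES code)."""
    if lowest_grade <= 3 and highest_grade <= 8:
        return "elementary"
    if lowest_grade >= 4 and highest_grade <= 9:
        return "middle"
    if lowest_grade >= 9 and highest_grade <= 12:
        return "high"
    if lowest_grade <= 3 and highest_grade >= 9:
        return "combined"  # combined with high schools for sampling
    return "other"

# classify_school(1, 5)  -> "elementary"
# classify_school(6, 8)  -> "middle"
# classify_school(9, 12) -> "high"
# classify_school(1, 12) -> "combined"
```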

The public school sampling frame was stratified by instructional level (elementary, middle, and high school/combined), locale (city, urban fringe, town, and rural), and school size (less than 300, 300 to 499, 500 to 999, 1,000 to 1,499, and 1,500 or more). Within the primary strata, schools were also sorted by geographic region and percent minority enrollment in the school to produce additional implicit stratification. A sample of 1,999 schools was then selected from the sorted frame with probabilities proportionate to size, where the measure of size was the estimated number of full-time-equivalent (FTE) teachers in the school. The sample contained 665 elementary schools, 553 middle schools, and 781 high/combined schools.

Each sampled school was asked to send a list of its teachers, from which a teacher sampling frame was prepared. The teacher sampling frame was designed to represent full-time teachers who taught in any of grades 1 through 12, and whose main teaching assignment was in English/language arts, social studies/social sciences, foreign language, mathematics, or science, or who taught a self-contained classroom. To prepare the teacher lists, schools were asked to start with a list of all the teachers in the school, and then to cross off the following types of teachers: part-time, itinerant, and substitute teachers; teachers' aides; unpaid volunteers; principals (even those who teach); kindergarten or preschool teachers; and anyone on the list who was not a classroom teacher (e.g., librarians, secretaries, or custodians). Next, schools were instructed to cross off the list any teachers whose primary teaching assignments were any of the following: art, bilingual education/English as a second language, business, computer science, health, home economics, industrial arts, music, physical education, remedial or resource, special education, or any other teachers who did not primarily teach a core academic subject or a self-contained class. Then, schools were asked to code all teachers remaining on the list to indicate the primary subject taught, using the general categories of (1) math and science teachers; (2) other academic teachers (English/language arts, social studies/social sciences, or foreign language); or (3) self-contained, for teachers who teach all or most academic subjects in a self-contained classroom setting (including most elementary school teachers). Schools were then asked to code the total years of teaching experience for all teachers remaining on the list, using the categories of 3 or fewer years, or 4 or more years of teaching experience, counting the current academic year as one full year.
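The screening steps above amount to a simple eligibility filter. The sketch below is purely illustrative of the paper list-preparation process; the field names and assignment labels are hypothetical:

```python
# Primary assignments crossed off the list (labels are hypothetical
# stand-ins for the categories named in the instructions to schools).
EXCLUDED_ASSIGNMENTS = {
    "art", "bilingual education/ESL", "business", "computer science",
    "health", "home economics", "industrial arts", "music",
    "physical education", "remedial or resource", "special education",
}

def keep_on_list(teacher: dict) -> bool:
    """Keep only full-time classroom teachers whose primary assignment
    is a core academic subject or a self-contained class."""
    return (teacher["full_time"]
            and teacher["classroom_teacher"]
            and not teacher["kindergarten_or_preschool"]
            and teacher["assignment"] not in EXCLUDED_ASSIGNMENTS)
```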

Within selected schools, eligible teachers were stratified by years of teaching experience (3 or fewer, or 4 or more) and primary teaching assignment (mathematics/science or other academic/self-contained for middle and high schools; all elementary school teachers were treated for sampling as self-contained classroom teachers, because too few teachers at this level teach in departmentalized settings). Teacher sampling rates were designed to select at least one but no more than four teachers per school, with an average of about two, and to yield self-weighting (equal probability) samples within strata. A total of 4,049 teachers were selected. The sample contained 1,350 elementary school, 1,130 middle school, and 1,569 high school/combined teachers.

Respondent and Response Rates
A letter and instruction sheet for preparing the list of teachers was sent to the principal of each sampled school in September 1997. The letter introduced the study, requested the principal's cooperation to sample teachers, and asked the principal to prepare a list of teachers that included only full-time teachers of self-contained classes or core academic subjects. Telephone followup was conducted from October 1997 through March 1998 with principals who did not respond to the initial request for teacher lists. Of the 1,999 schools in the sample, 14 were found to be out of the scope of the survey (no longer in existence), for a total of 1,985 eligible schools. Teacher lists were provided by 1,818 schools, or 92 percent of the eligible schools. The weighted response rate1 to the teacher list collection was 93 percent.

Questionnaires were mailed to the teachers in two phases, so that data collection on the teacher questionnaire would not be delayed while the list collection phase was being completed. The first phase of questionnaires was mailed in mid-February 1998, and the second in mid-March 1998. Telephone followup was conducted from March through June 1998 with teachers who did not respond to the initial questionnaire mailing. In addition, a postcard prompt was sent to nonresponding teachers in April 1998. Of the 4,049 teachers selected for the sample, 183 were found to be out of the scope of the survey, usually because they were not regular full-time classroom teachers, or because their main teaching assignment was not in a core academic subject or as a self-contained classroom teacher. This left a total of 3,866 eligible teachers in the sample. Completed questionnaires were received from 3,560 teachers, or 92 percent of the eligible teachers. The weighted teacher response rate was also 92 percent. The unweighted overall response rate was 84 percent (91.6 percent for the list collection multiplied by 92.1 percent for the teacher questionnaire). The weighted overall response rate was 86 percent (93.1 percent for the list collection multiplied by 92.1 percent for the teacher questionnaire). Weighted item nonresponse rates ranged from 0 percent to 1.9 percent. Because the item nonresponse was so low, imputation for item nonresponse was not implemented.
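The overall rates quoted above are simply the products of the two stage-level rates (figures taken from the text):

```python
# Overall response rate = list-collection rate x teacher-questionnaire rate.
unweighted_overall = 0.916 * 0.921   # unweighted list x questionnaire
weighted_overall = 0.931 * 0.921     # weighted list x questionnaire

print(f"unweighted: {unweighted_overall:.0%}")  # 84%
print(f"weighted:   {weighted_overall:.0%}")    # 86%
```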

Sampling and Nonsampling Errors
The responses were weighted to produce national estimates (see Table A-1). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability.

The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in data collection. These errors can sometimes bias the data. Nonsampling errors may include such problems as misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.

To minimize the potential for nonsampling errors, the questionnaire was pretested with respondents like those who completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics and the Office of the Secretary, U.S. Department of Education. Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Data were keyed with 100 percent verification.

Variances
The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of teachers who have a master's degree is 45.3 percent, and the estimated standard error is 1.1 percent. The 95 percent confidence interval for the statistic extends from [45.3 − (1.1 × 1.96)] to [45.3 + (1.1 × 1.96)], or from 43.1 to 47.5 percent. Tables of standard errors for each table and figure in the report are provided in the appendices.
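The confidence-interval arithmetic in the example works out as follows (values taken from the text; this is just the standard normal-approximation interval):

```python
estimate = 45.3  # percent of teachers with a master's degree
se = 1.1         # estimated standard error, in percentage points

lower = estimate - 1.96 * se
upper = estimate + 1.96 * se
print(f"95% CI: {lower:.1f} to {upper:.1f} percent")  # 43.1 to 47.5
```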

Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variances of the statistics. To construct the replications, 50 stratified subsamples of the full sample were created and then dropped one at a time to define 50 jackknife replicates. A computer program (WesVarPC) was used to calculate the estimates of standard errors. WesVarPC is a stand-alone Windows application that computes sampling errors for a wide variety of statistics (totals, percents, ratios, log-odds ratios, general functions of estimates in tables, linear regression parameters, and logistic regression parameters).
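The replication idea can be sketched as follows. This is a simplified delete-one-group (JK1-style) variance formula, not WesVarPC's actual implementation, and it omits the stratification and reweighting details:

```python
import math

def jackknife_se(full_estimate, replicate_estimates):
    """Jackknife standard error: (R - 1)/R times the sum of squared
    deviations of the R replicate estimates around the full-sample
    estimate gives the variance; its square root is the standard error.
    Simplified sketch of a delete-one-group (JK1) estimator."""
    r = len(replicate_estimates)
    variance = (r - 1) / r * sum(
        (t - full_estimate) ** 2 for t in replicate_estimates)
    return math.sqrt(variance)
```

In this survey, R = 50 replicate estimates were formed by dropping each of the 50 stratified subsamples in turn and recomputing the statistic.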

The test statistics used in the analysis were calculated using the jackknife variances and thus appropriately reflected the complex nature of the sample design. In particular, an adjusted chi-square test using Satterthwaite's approximation to the design effect was used in the analysis of the two-way tables. Finally, Bonferroni adjustments were made to control for multiple comparisons where appropriate. For example, for an "experiment-wise" comparison involving g pairwise comparisons, each difference was tested at the 0.05/g significance level to control for the fact that g differences were simultaneously tested.
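For instance, under this rule a family of g = 10 pairwise comparisons would each be tested at the 0.005 level:

```python
def bonferroni_alpha(family_alpha: float, g: int) -> float:
    """Per-comparison significance level for g simultaneous pairwise
    comparisons under a Bonferroni adjustment."""
    return family_alpha / g

print(bonferroni_alpha(0.05, 10))  # 0.005
```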

Definitions of Analysis Variables

School instructional level – Schools were classified according to their grade span in the Common Core of Data (CCD).

Elementary school – lowest grade less than or equal to grade 3 and highest grade less than or equal to grade 8.
Middle school – lowest grade greater than or equal to grade 4 and highest grade less than or equal to grade 9.
High school – lowest grade greater than or equal to grade 9 and highest grade less than or equal to grade 12.
Combined school – lowest grade less than or equal to grade 3 and highest grade greater than or equal to grade 9.

School enrollment size – total number of students enrolled, as defined by the Common Core of Data (CCD).

Less than 300 students
300 to 499 students
500 to 999 students
1,000 or more students

Locale – as defined in the Common Core of Data (CCD).

Central city – a large or mid-size central city of a Metropolitan Statistical Area (MSA).
Urban fringe/large town – urban fringe is a place within an MSA of a central city, but not primarily its central city; large town is an incorporated place not within an MSA, with a population greater than or equal to 25,000.
Small town/rural – small town is an incorporated place not within an MSA, with a population less than 25,000 and greater than or equal to 2,500; rural is a place with a population less than 2,500 and/or a population density of less than 1,000 per square mile, and defined as rural by the U.S. Bureau of the Census.

Geographic region

Northeast – Maine, New Hampshire, Vermont, Massachusetts, Rhode Island, Connecticut, New York, New Jersey, Pennsylvania
Midwest – Ohio, Indiana, Illinois, Michigan, Wisconsin, Minnesota, Iowa, Missouri, North Dakota, South Dakota, Nebraska, Kansas
South – Delaware, Maryland, District of Columbia, Virginia, West Virginia, North Carolina, South Carolina, Georgia, Florida, Kentucky, Tennessee, Alabama, Mississippi, Arkansas, Louisiana, Oklahoma, Texas
West – Montana, Idaho, Wyoming, Colorado, New Mexico, Arizona, Utah, Nevada, Washington, Oregon, California, Alaska, Hawaii

Percent minority enrollment in the school – The percent of students enrolled in the school whose race or ethnicity is classified as one of the following: American Indian or Alaskan Native, Asian or Pacific Islander, black, or Hispanic, based on data in the 1995-96 CCD file. Data on this variable were missing for 0.4 percent of the teachers. The break points used for analysis were based on empirically developed quartiles from the weighted survey data.

5 percent or less
6 to 20 percent
21 to 50 percent
More than 50 percent

Percent of students at the school eligible for free or reduced-price lunch – This was based on information collected from the school during the teacher list collection phase; if it was missing from the list collection, it was obtained from the CCD file, if possible. Data on this variable were missing for 0.2 percent of the teachers. This item served as the measurement of the concentration of poverty at the school. The break points used for analysis were based on empirically developed quartiles from the weighted survey data.

Less than 15 percent
15 to 32 percent
33 to 59 percent
60 percent or more

Main teaching assignment – based on responses to the survey questionnaire.

Self-contained classroom – The teacher teaches all or most academic subjects to the same group of students all or most of the day (Q1=1).
Math/science – The teacher teaches mathematics or science in a departmentalized setting, teaching the subject to several classes of different students all or most of the day (Q1=2 and Q4A1=43 or 44).
Other targeted academic subject – The teacher teaches English/language arts, social studies/social science, or foreign language in a departmentalized setting, teaching the subject to several classes of different students all or most of the day (Q1=2 and Q4A1=41, 42, or 45).

Teaching experience – total years of teaching experience, based on responses to question 14 on the survey questionnaire.

3 or fewer years
4 to 9 years
10 to 19 years
20 or more years

Teacher race/ethnicity – based on responses to questions 12 (Hispanic or Latino origin) and 13 (race) on the survey questionnaire. Question 13 specified that teachers should circle one or more racial categories to describe themselves. Data on this variable were missing for 0.5 percent of the teachers.

White, non-Hispanic – white only, and not Hispanic.
Black, non-Hispanic – black or African American only, and not Hispanic.
Other – Hispanic or Latino, American Indian or Alaska Native, Asian, Native Hawaiian or other Pacific Islander, and multi-racial (i.e., anyone who selected more than one race to identify themselves).

Sex – The sex of the teacher, based on question 11 on the survey questionnaire.

Male
Female

It is important to note that many of the school and teacher characteristics used for independent analyses may also be related to each other. For example, enrollment size and instructional level of schools are related, with middle and high schools typically being larger than elementary schools. Similarly, poverty concentration and minority enrollment are related, with schools with a high minority enrollment also more likely to have a high concentration of poverty. Other relationships between analysis variables may exist. Because of the relatively small sample size used in this study, it is difficult to separate the independent effects of these variables. Their existence, however, should be considered in the interpretation of the data presented in this report.

Comparisons to the 1993-94 Schools and Staffing Survey
Data from the 1993-94 Schools and Staffing Survey (SASS) teacher questionnaire were reanalyzed for questionnaire items that are the same or similar to items on the FRSS questionnaire. The questionnaire items from the SASS teacher survey are shown in appendix F, and the detailed tables from the analyses are shown in appendix C. As a first step in the reanalysis process, a subset of teachers and schools was selected from SASS that was approximately the same as the teachers and schools sampled for FRSS. Regular full-time teachers who taught in grades 1 through 12 in regular public schools (i.e., excluding special education, vocational, and alternative/other schools) in the 50 states and the District of Columbia defined the overall eligible group of teachers. Within that group, teachers were selected for inclusion in the subset for these analyses if their main teaching assignment was either general elementary or a core academic subject area (defined here as English/language arts, social studies/social science, foreign language, mathematics, or science), based on question 21 in the SASS teacher questionnaire.

For comparability to the FRSS survey, a teacher was considered to be a self-contained classroom teacher if the main teaching assignment was specified as general elementary (code 03).2 A teacher was considered to be a math/science teacher if the main assignment was specified as mathematics (33), or one of the sciences (57 through 61 and 09). A teacher was considered to be a teacher of one of the other targeted academic subjects if the main teaching assignment was specified as English/language arts (21), journalism (16), reading (43), social studies/social science (47), or one of the foreign languages (51 through 56).

Teachers were classified for instructional level of the school based on the categorization used for the FRSS survey (see above). In addition, the category splits for the percent minority enrollment in the school and the percent of students eligible for free or reduced-price lunch were based on the empirically developed quartiles from the weighted FRSS survey data. Information about the race of the teacher was collected in a slightly different way on the SASS questionnaire. Teachers were only allowed to select one racial category to describe themselves, and the categories were American Indian or Alaska Native, Asian or Pacific Islander, black, and white. The weighted distributions of the SASS teachers by the various classification variables are shown in Table C-1. Teachers were assigned as departmentalized or general elementary for the average class size calculations based on their main teaching assignment, with math/science and other targeted academic teachers considered departmentalized.

Approximately 5 percent of the teachers were excluded from the SASS class size analyses, either because they taught "pull-out" classes, where they provided instruction to students who were released from their regular classes (2 percent), or because of reporting problems in their class size information (3 percent).

When there are differences between the FRSS and SASS data, there are a number of possible reasons for such differences that should be considered. One possible reason, of course, is that the differences show actual change between 1993-94 and 1998. However, it is also important to consider other possibilities. While the subset of schools and teachers from SASS was selected to be as comparable as possible to the FRSS sample of schools and teachers, there may still be some differences in the samples for the two surveys. In addition, the questionnaires that the teachers completed were very different. The FRSS questionnaire was very short, consisting of three pages of questions and one page of codes.

Information was collected in a very compact format, and at a fairly aggregated level. For example, teachers in departmentalized settings were asked about their main and secondary teaching assignments, rather than about all the courses they taught, and were asked about their teaching assignments and about major and minor fields of study for degrees held at an aggregated level (i.e., whether they taught courses or had degrees in science, rather than in chemistry or physics). The SASS questionnaire, on the other hand, was 35 pages long and asked teachers for very detailed information about courses taught and degrees held, as well as a lot of other information about the teacher and his or her job. Thus, the questionnaires provided very different response contexts for the teachers.

It is also important to be aware that some of the questions asked on the two questionnaires appear more similar at first glance than they actually are. For example, the FRSS questionnaire asked teachers whether they had participated in professional development activities in the last 12 months that focused on "new methods of teaching (e.g., cooperative learning)." The SASS questionnaire asked teachers whether they had participated in professional development programs since the end of the last school year that focused on "methods of teaching your subject field," and "cooperative learning in the classroom" as two separate questions. Another example is the item on parent support for teachers. The FRSS survey asked whether teachers agreed or disagreed with the statement, "parents support me in my efforts to educate their children." The SASS questionnaire asked whether teachers agreed or disagreed with the statement, "I receive a great deal of support from parents for the work I do." In addition, the FRSS survey had four statements about parent and school support, compared with 25 statements about school climate in the SASS survey, again creating a very different response context for the teachers. Thus, while differences between the FRSS and SASS data may reflect actual change, measurement issues must also be considered as possible explanations.

Calculations of Major Field of Study for a Bachelor's or Graduate Degree
A variable was constructed that combined information about all the major fields of study for the bachelor's, master's, and doctorate degrees into the categories of academic field, subject area education (i.e., the teaching of an academic field, such as mathematics education), general education, and other education fields (e.g., special education, curriculum and instruction, or educational administration). For the analyses presented in the text (see Tables 1 and 2), each teacher was counted only once, even if he or she had more than one major or more than one degree. Major fields of study were selected in the order of academic field, subject area education, other education, and general education. For example, if a teacher had a bachelor's degree in general education and a master's degree in English, he or she was considered for these analyses to have majored in an academic field.

Similarly, if a teacher had a bachelor's degree in mathematics and a master's degree in curriculum and instruction, he or she was also considered for these analyses to have majored in an academic field. Tables A-2 and A-3 provide information about duplicated degree counts. In these tables, teachers with more than one major or more than one degree are counted for each field of study in which they have a major or degree. Thus, a teacher with a bachelor's degree in general education and a master's degree in English would be counted once under academic field and once under general education in Tables A-2 or A-3.

However, a teacher with a bachelor's degree in English and a master's degree in history would be counted only once in Tables A-2 or A-3, since both degrees were in an academic field.
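The priority ordering described above (academic field first, then subject area education, then other education, then general education) can be sketched as follows; the category strings are illustrative labels, not the actual survey codes:

```python
# Priority order used to assign each teacher a single, unduplicated
# field-of-study category (labels follow the text above).
PRIORITY = ["academic field", "subject area education",
            "other education", "general education"]

def assign_major_category(majors):
    """Return the highest-priority category among all of a teacher's
    majors across bachelor's, master's, and doctorate degrees."""
    for category in PRIORITY:
        if category in majors:
            return category
    return None

# A teacher with a bachelor's in general education and a master's in
# English (an academic field) is counted under "academic field":
# assign_major_category({"general education", "academic field"})
```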

Calculations of In-Field Teaching
A measure of in-field teaching was constructed that compared the fields in which teachers had undergraduate or graduate majors or minors with the fields in which they had their main teaching assignments (i.e., the field in which they reported that they taught the most courses). A major or minor was considered in field if it was in either the academic field (e.g., mathematics) or subject area education (e.g., mathematics education) that matched the main teaching assignment. This measure was constructed for any teacher who taught English/language arts, foreign language, social studies/social science, mathematics, or science in a departmentalized setting in any of grades 7 through 12. Teachers were defined as teaching in field if they had an undergraduate or graduate major or minor in the field of their main teaching assignment. Details of how this measure was constructed are provided below.
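Under simplifying assumptions (aggregate field names as strings, with subject-area education fields named by appending "education"), the match can be sketched as:

```python
def teaches_in_field(main_assignment, majors_and_minors):
    """A teacher teaches in field if any undergraduate or graduate major
    or minor is in the academic field of the main teaching assignment or
    in the matching subject-area education field.  Illustrative only;
    the actual field-code matches are given in Exhibits A-1 and A-2."""
    matching = {main_assignment, main_assignment + " education"}
    return bool(matching & set(majors_and_minors))

# teaches_in_field("mathematics", ["mathematics education"]) -> True
# teaches_in_field("science", ["history", "general education"]) -> False
```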

The in-field teaching analyses were based on teacher level (grades taught) rather than on the instructional level of the school. Any teacher who provided departmentalized instruction and who taught in grade 7 or above (for the first set of analyses) or grade 9 or above (for the second set of analyses) was included, regardless of whether he or she also taught any lower grades. Teachers of self-contained classrooms at all levels were excluded, as were teachers who taught only in grade 6 or below, even if they provided departmentalized instruction. The in-field teaching measure was constructed only for the main teaching assignment, because there were too few teachers in the FRSS sample with a secondary teaching assignment to provide meaningful estimates for in-field teaching in the secondary assignment.

In-field teaching was defined as having a major or minor at the bachelor's, master's, or doctorate level in the field of the main teaching assignment. The in-field teaching measure was constructed at the aggregate level of English/language arts, social studies/social science, foreign language, math, and science. It was constructed at this level because, owing to space limitations, the FRSS questionnaire collected information about degrees and teaching assignments at this aggregated level rather than at a lower level of aggregation (e.g., whether a teacher had degrees or taught courses in chemistry or physics). The main teaching assignment field was matched against the major and minor fields of study for the FRSS data as shown in Exhibit A-1, using the categorization approach from SASS. The numbers in parentheses indicate the code numbers on the FRSS questionnaire.

The main teaching assignment field was matched against the major and minor fields of study for the SASS data as shown in Exhibit A-2.

Background Information
The survey was performed under contract with Westat, using the Fast Response Survey System (FRSS). Westat's Project Director was Elizabeth Farris, and the Survey Manager was Laurie Lewis. Bernie Greene was the NCES Project Officer. The data were requested by Terry Dozier, Office of the Secretary, U.S. Department of Education.

This report was reviewed by the following individuals:

Outside NCES

  • Susan Choy, MPR Associates


  • Richard Ingersoll, University of Georgia


  • David Mandel, MPR Center for Curriculum and Professional Development


  • Judith Thompson, Connecticut State Department of Education

Inside NCES

  • Shelley Burns, Early Childhood, International, and Crosscutting Studies Division


  • Mary Frase, Early Childhood, International, and Crosscutting Studies Division


  • Kerry Gruber, Elementary/Secondary and Libraries Studies Division


  • Marilyn McMillen, Chief Statistician


  • Martin Orland, Associate Commissioner, Early Childhood, International, and Crosscutting Studies Division


  • John Ralph, Early Childhood, International, and Crosscutting Studies Division

For more information about the Fast Response Survey System (FRSS), contact Bernie Greene, Early Childhood, International, and Crosscutting Studies Division, National Center for Education Statistics, Office of Educational Research and Improvement, U.S. Department of Education, 555 New Jersey Avenue, NW, Washington, DC 20208-5651, e-mail: Bernard_Greene@ed.gov, telephone (202) 219-1366.

For more information about the Teacher Survey on Professional Development and Training, contact Edith McArthur, Early Childhood, International, and Crosscutting Studies Division, National Center for Education Statistics, Office of Educational Research and Improvement, U.S. Department of Education, 555 New Jersey Avenue, NW, Washington, DC 20208-5651, e-mail: Edith_McArthur@ed.gov, telephone (202) 219-1442.

__________________________
1 All weighted response rates were calculated using the base weight.

2 For clarity, these teachers are referred to throughout the report as general elementary teachers for both the 1998 FRSS and 1993-94 SASS studies.
