Dual Credit and Exam-Based Courses in U.S. Public High Schools: 2002-03
NCES: 2005009
April 2005

Appendix A: Technical Notes

The Fast Response Survey System (FRSS) was established in 1975 by the National Center for Education Statistics (NCES), U.S. Department of Education. FRSS is designed to collect issue-oriented data within a relatively short time frame. FRSS collects data from state education agencies, local education agencies, public and private elementary and secondary schools, public school teachers, and public libraries. To ensure minimal burden on respondents, the surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Sample sizes are relatively small (usually about 1,000 to 1,500 respondents per survey) so that data collection can be completed quickly. Data are weighted to produce national estimates of the sampled education sector. The sample size permits limited breakouts by classification variables. However, as the number of categories within the classification variables increases, the sample size within categories decreases, which results in larger sampling errors for the breakouts by classification variables.

Sample Design

The sample for the FRSS survey on dual credit and exam-based courses consisted of 1,499 regular public secondary schools in the 50 states and the District of Columbia. It was selected from the 2001–02 NCES Common Core of Data (CCD) Public School Universe file, which was the most current file available at the time of selection. The sampling frame included 17,059 regular secondary schools. For the purposes of the study, a secondary school was defined as a school with a grade 11 or 12. Excluded from the sampling frame were schools with a highest grade lower than 11, along with special education, vocational, and alternative/other schools, schools outside the 50 states and the District of Columbia, and schools with zero or missing enrollment.

The public school sampling frame was stratified by enrollment size (less than 300, 300 to 499, 500 to 999, 1,000 to 1,499, and 1,500 or more) and minority enrollment of the school (less than 6 percent, 6 to 20 percent, 21 to 49 percent, and 50 percent or more). Schools in the frame were then sorted by type of locale (city, urban fringe, town, rural) and region (Northeast, Southeast, Central, West) to induce additional implicit stratification. These variables are defined in more detail in the "Definitions of Analysis Variables" section of this report.

Data Collection and Response Rates

Questionnaires and cover letters for the study were mailed to the principal of each sampled school in mid-September 2003. The letter introduced the study and requested that the questionnaire be completed by the school's director of guidance counseling or other staff member who is most knowledgeable about the school's dual credit, Advanced Placement, and International Baccalaureate courses. Respondents were offered the option of completing the survey via the web or by mail. Telephone followup for survey nonresponse and data clarification was initiated in early October 2003 and completed in early January 2004.

To calculate response rates, NCES uses standard formulas established by the American Association for Public Opinion Research.1 Thus, unit response rates (RRU) are calculated as the ratio of the weighted number of completed interviews (I) to the weighted number of in-scope sample cases. There are a number of different categories of cases that compose the total number of in-scope cases:

I = weighted number of completed interviews;
R = weighted number of refused interview cases;
O = weighted number of eligible sample units not responding for reasons other than refusal;
NC = weighted number of noncontacted sample units known to be eligible;
U = weighted number of sample units of unknown eligibility, with no interview; and
e = estimated proportion of sample units of unknown eligibility that are eligible.
The unit response rate combines these components as follows:

RRU = I / [I + R + O + NC + (e x U)]
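
As an illustration, the short Python sketch below computes RRU from the weighted component counts defined above; the input values in the example are hypothetical and are not taken from the survey files.

    def unit_response_rate(I, R, O, NC, U, e):
        """Weighted unit response rate from the components defined above."""
        # I, R, O, and NC are weighted counts of eligible cases; U is the weighted
        # count of cases with unknown eligibility, of which a proportion e is
        # assumed to be eligible.
        return I / (I + R + O + NC + e * U)

    # Hypothetical weighted counts; for this survey U = 0, so e has no effect.
    print(unit_response_rate(I=1300.0, R=60.0, O=30.0, NC=20.0, U=0.0, e=0.0))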

Of the 1,499 schools in the sample, 11 were found to be ineligible for the survey because they did not have an 11th or 12th grade. Another 21 were found to be ineligible because the school was closed or did not meet some other criterion for inclusion in the sample (e.g., it was an alternative school). This left a total of 1,467 eligible schools in the sample. Completed questionnaires were received from 1,353 schools, or 92 percent of the eligible schools (table A-1). The weighted response rate was also 92 percent. The weighted number of eligible institutions in the survey represents the estimated universe of regular secondary schools in the 50 states and the District of Columbia (table A-1). The estimated number of schools in the survey universe decreased from the 17,059 schools in the CCD sampling frame to an estimated 16,483 because some of the schools were determined to be ineligible for the FRSS survey during data collection.

Imputation for Item Nonresponse

Although item nonresponse for key items was very low, missing data were imputed for the 39 items listed in table A-2.2 The missing items included both numerical data, such as counts of enrollments in Advanced Placement courses, and categorical data, such as whether there were any requirements that students must meet in order to enroll in courses for dual credit. The missing data were imputed using a "hot-deck" approach to obtain a "donor" school from which the imputed values were derived. Under the hot-deck approach, a donor school that matched selected characteristics of the school with missing data (the recipient school) was identified. The matching characteristics included enrollment size class and type of locale. Once a donor was found, it was used to derive the imputed values for the school with missing data. For categorical items, the imputed value was simply the corresponding value from the donor school. For numerical items, the imputed value was calculated by taking the donor's response for that item (e.g., enrollment in Advanced Placement courses) and dividing that number by the total number of students enrolled in the donor school. This ratio was then multiplied by the total number of students enrolled in the recipient school to provide an imputed value. All missing items for a given school were imputed from the same donor whenever possible.
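
The Python sketch below illustrates this ratio-adjustment logic with hypothetical record dictionaries and item names; the donor matching on enrollment size class and locale that precedes this step is omitted.

    def impute_from_donor(recipient, donor, categorical_items, numerical_items):
        """Hot-deck sketch: copy categorical answers from the donor and scale
        numerical answers by the ratio of recipient to donor enrollment."""
        for item in categorical_items:
            if recipient.get(item) is None:
                recipient[item] = donor[item]  # direct copy from the donor
        for item in numerical_items:
            if recipient.get(item) is None:
                per_student = donor[item] / donor["total_enrollment"]
                recipient[item] = round(per_student * recipient["total_enrollment"])
        return recipient

    # Hypothetical example: impute AP enrollment for a school that left it blank.
    donor = {"total_enrollment": 800, "ap_enrollment": 120, "offers_dual_credit": "yes"}
    recipient = {"total_enrollment": 400, "ap_enrollment": None, "offers_dual_credit": None}
    print(impute_from_donor(recipient, donor, ["offers_dual_credit"], ["ap_enrollment"]))
    # ap_enrollment is imputed as 60 (120/800 of 400 students); offers_dual_credit is copied.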

Data Reliability

While the "Dual Credit and Exam-Based Courses" survey was designed to account for sampling error and to minimize nonsampling error, estimates produced from the data collected are subject to both types of error. Sampling error occurs because the data are collected from a sample rather than a census of the population and nonsampling errors are errors made during the collection and processing of the data.

Sampling Errors

The responses were weighted to produce national estimates (see table A-1). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability. General sampling theory was used to estimate the sampling variability of the estimates and to test for statistically significant differences between estimates.

The standard error is a measure of the variability of an estimate due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals from 1.96 standard errors below a particular statistic to 1.96 standard errors above it would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of public high schools offering courses for dual credit is 71.3 percent, and the standard error is 1.4 percent (see tables 1 and 1a). The 95 percent confidence interval for the statistic extends from [71.3 - (1.4 x 1.96)] to [71.3 + (1.4 x 1.96)], or from 68.6 to 74.0 percent. The 1.96 is the critical value for a statistical test at the 0.05 significance level (where 0.05 indicates the 5 percent of all possible samples that would be outside the range of the confidence interval).
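
The arithmetic in this example can be reproduced with a few lines of Python, using the estimate (71.3 percent) and standard error (1.4) cited above.

    estimate, std_error = 71.3, 1.4  # percent offering dual credit and its standard error
    z = 1.96                         # critical value for a 95 percent confidence interval
    lower, upper = estimate - z * std_error, estimate + z * std_error
    print(f"95 percent confidence interval: {lower:.1f} to {upper:.1f} percent")  # 68.6 to 74.0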

Because the data from the FRSS dual credit and exam-based courses survey were collected using a complex sampling design, the variances of the estimates from this survey (e.g., estimates of proportions) are typically different from what would be expected from data collected with a simple random sample. Not taking the complex sample design into account can lead to an underestimation of the standard errors associated with such estimates. To generate accurate standard errors for the estimates in this report, standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full sample estimate provides an estimate of the variance of the statistic. To construct the replications, 50 stratified subsamples of the full sample were created and then dropped one at a time to define 50 jackknife replicates. A computer program (WesVar) was used to calculate the estimates of standard errors. WesVar is a stand-alone Windows application that computes sampling errors from complex samples for a wide variety of statistics (totals, percents, ratios, log-odds ratios, general functions of estimates in tables, linear regression parameters, and logistic regression parameters).
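
The simplified Python sketch below illustrates delete-one-group jackknife variance estimation for an unweighted mean; the published standard errors were computed in WesVar from 50 stratified replicates with the survey weights applied, which this toy example omits.

    import numpy as np

    def jackknife_std_error(values, n_groups=50):
        """Delete-one-group jackknife: recompute the statistic with each group
        dropped and measure the spread of the replicate estimates around the
        full-sample estimate."""
        values = np.asarray(values, dtype=float)
        groups = np.array_split(values, n_groups)
        full_estimate = values.mean()
        replicates = np.array([
            np.concatenate([g for i, g in enumerate(groups) if i != j]).mean()
            for j in range(n_groups)
        ])
        variance = (n_groups - 1) / n_groups * np.sum((replicates - full_estimate) ** 2)
        return np.sqrt(variance)

    # Hypothetical 0/1 indicator for "offers courses for dual credit" across sampled schools.
    rng = np.random.default_rng(0)
    sample = rng.binomial(1, 0.71, size=1353)
    print(jackknife_std_error(sample))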

For non-ordered variables (e.g., region), t-tests were used to test comparisons among the categories of the variable. However, when comparing percentage or ratio estimates across a family of three or more ordered categories (e.g., categories defined by school enrollment size), regression analyses were used to test for trends rather than a series of paired comparisons. For percentages, the analyses involved fitting models in WesVar with the ordered categories as the independent variable and the (dichotomous) outcome of interest (e.g., whether or not the school offered courses for dual credit) as the dependent variable. For testing the overall significance, an analysis of variance (ANOVA) model was fitted by treating the categories of the independent variables as nominal categories. For the trend test, a simple linear regression model was used with the categories of the independent variable as an ordinal quantitative variable. In both cases, tests of significance were performed using an adjusted Wald F-test.3 The test is applicable to data collected through complex sample surveys and is analogous to F tests in standard regression analysis. A test was considered significant if the p-value associated with the statistic was less than 0.05.
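
The Python sketch below illustrates the distinction between the two models on hypothetical, unweighted data; ordinary least squares F-tests stand in here for the design-adjusted Wald F-tests that WesVar computes from the replicate weights.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical school-level data: enrollment size category (1 = small, 2 = medium,
    # 3 = large) and a 0/1 indicator for offering courses for dual credit.
    rng = np.random.default_rng(1)
    size_cat = rng.integers(1, 4, size=1353)
    offers_dual = rng.binomial(1, 0.55 + 0.08 * size_cat)
    df = pd.DataFrame({"size_cat": size_cat, "offers_dual": offers_dual})

    # Overall test: treat the size categories as nominal (ANOVA-style model).
    overall = smf.ols("offers_dual ~ C(size_cat)", data=df).fit()
    # Trend test: treat the size categories as an ordinal quantitative predictor.
    trend = smf.ols("offers_dual ~ size_cat", data=df).fit()
    print(overall.f_pvalue, trend.f_pvalue)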

Nonsampling Errors

Nonsampling error is the term used to describe variations in the estimates that may be caused by population coverage limitations and data collection, processing, and reporting procedures. The sources of nonsampling errors are typically problems like unit and item nonresponse, differences in respondents' interpretations of the meaning of questions, response differences related to the particular time the survey was conducted, and mistakes made during data preparation. It is difficult to identify and estimate either the amount of nonsampling error or the bias caused by this error. To minimize the potential for nonsampling error, this study used a variety of procedures, including a pretest of the questionnaire with directors of guidance counseling or other school staff deemed to be the most knowledgeable about the school's dual credit, AP, and IB courses. The pretest provided the opportunity to check for consistency of interpretation of questions and definitions and to eliminate ambiguous items. The questionnaire and instructions were also extensively reviewed by NCES and the data requester at the Office of Vocational and Adult Education. In addition, manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone to resolve problems. Data were keyed with 100 percent verification for surveys received by mail, fax, or telephone.

Definitions of Analysis Variables

Enrollment Size - This variable indicates the total number of students enrolled in the school based on data from the 2001–02 CCD. The variable was collapsed into the following three categories:

Less than 500 students (small)
500 to 1,199 students (medium)
1,200 or more students (large)

School Locale - This variable indicates the type of community in which the school is located, as defined in the 2001–02 CCD (which uses definitions based on U.S. Census Bureau classifications). This variable was based on the eight-category locale variable from CCD, recoded into a four-category analysis variable for this report. Large and midsize cities were coded as city, the urban fringes of large and midsize cities were coded as urban fringe, large and small towns were coded as town, and rural areas outside and inside Metropolitan Statistical Areas (MSAs) were coded as rural. The categories are described in more detail below.

City - A large or midsize central city of a Consolidated Metropolitan Statistical Area (CMSA) or Metropolitan Statistical Area (MSA).
Urban fringe - Any incorporated place, Census-designated place, or non-place territory within a CMSA or MSA of a large or midsize city, and defined as urban by the Census Bureau.
Town - Any incorporated place or Census-designated place with a population greater than or equal to 2,500 and located outside a CMSA or MSA.
Rural - Any incorporated place, Census-designated place, or non-place territory defined as rural by the Census Bureau.
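
In code, this recode amounts to a simple mapping from the eight CCD locale codes to the four analysis categories. The numeric codes in the Python sketch below are an assumption used only for illustration; the groupings follow the description above.

    # Assumed CCD locale codes (1-8); the groupings mirror the text above.
    LOCALE_RECODE = {
        1: "city",          # large city
        2: "city",          # midsize city
        3: "urban fringe",  # urban fringe of a large city
        4: "urban fringe",  # urban fringe of a midsize city
        5: "town",          # large town
        6: "town",          # small town
        7: "rural",         # rural, outside an MSA
        8: "rural",         # rural, inside an MSA
    }

    def recode_locale(ccd_code):
        """Collapse the eight-category CCD locale code into the four-category analysis variable."""
        return LOCALE_RECODE[ccd_code]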

Region - This variable classifies schools into one of the four geographic regions used by the Bureau of Economic Analysis of the U.S. Department of Commerce, the National Assessment of Educational Progress, and the National Education Association. Data were obtained from the 2001–02 CCD School Universe file. The geographic regions are:

Northeast - Connecticut, Delaware, District of Columbia, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont
Southeast - Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, and West Virginia
Central - Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin
West - Alaska, Arizona, California, Colorado, Hawaii, Idaho, Montana, Nevada, New Mexico, Oklahoma, Oregon, Texas, Utah, Washington, and Wyoming

Percent Minority Enrollment - This variable indicates the percentage of students enrolled in the school whose race or ethnicity is classified as one of the following: American Indian or Alaska Native, Asian or Pacific Islander, non-Hispanic Black, or Hispanic, based on data in the 2001–02 CCD School Universe file. Data on this variable were missing for 29 schools; schools with missing data were excluded from all analyses by percent minority enrollment. The percent minority enrollment variable was collapsed into the following four categories:

Less than 6 percent minority
6 to 20 percent minority
21 to 49 percent minority
50 percent or more minority

It is important to note that many of these school characteristics may be related to each other. For example, school enrollment size and locale are related, with city schools typically being larger than rural schools. Other relationships between these analysis variables may exist. However, this E.D. TAB report focuses on bivariate relationships between the analysis variables and questionnaire variables rather than more complex analyses.

Contact Information

For more information about the survey, contact Bernie Greene, Early Childhood, International, and Crosscutting Studies Division, National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, 1990 K Street NW, Washington, DC 20006, e-mail: Bernard.Greene@ed.gov; telephone (202) 502-7348.


1 See American Association for Public Opinion Research (AAPOR), Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys (Ann Arbor, MI: AAPOR, 2000). Note that for this survey, there were no sampled units with unknown eligibility.
2 Per NCES standards, all missing questionnaire data are imputed.
3 Westat, WesVar 4.0 User's Guide (Rockville, MD: Author, 2000), C-21.
