In the fall of 2006, about 72.7 million persons were enrolled in American schools and colleges (table 1). About 4.5 million persons were employed as elementary and secondary school teachers and as college faculty, in full-time equivalents (FTE). Other professional, administrative, and support staff at educational institutions numbered 5.0 million. All data for 2006 in this Introduction are projected. Some data for other years are projected or estimated as noted.
Enrollment
Enrollment in public elementary and secondary schools rose 24 percent between 1985 and 2006 (table 2). The fastest public school growth occurred in the elementary grades (prekindergarten through grade 8), where enrollment rose 25 percent over this period, from 27.0 million to 33.9 million. Public secondary school enrollment declined 8 percent from 1985 to 1990, but then rose 33 percent from 1990 to 2006, for a net increase of 21 percent. Private school enrollment grew more slowly than public school enrollment from 1985 to 2006, rising 10 percent, from 5.6 million to 6.1 million. As a result, the proportion of students enrolled in private schools declined from 12.4 percent in 1985 to 11.1 percent in 2006. Since the enrollment rates of kindergarten, elementary, and secondary school-age children did not change substantially between 1985 and 2005 (table 7), increases in public and private elementary and secondary school enrollment have been driven primarily by increases in the number of children in this age group.
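The growth figures above are simple percent changes computed from the underlying enrollment counts. As a worked illustration using the rounded counts cited here (the published percentages are based on unrounded figures, so small discrepancies are expected):

\[
\text{percent change} = \frac{E_{2006} - E_{1985}}{E_{1985}} \times 100 = \frac{33.9 - 27.0}{27.0} \times 100 \approx 26\ \text{percent},
\]

which is consistent, up to rounding, with the 25 percent reported for public elementary enrollment.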
The National Center for Education Statistics (NCES) forecasts record levels of total elementary and secondary enrollment through at least 2015, as the school-age population continues to rise. Public school enrollment in fall 2006 is projected to set a new record, and new records are expected every year through 2015, the last year for which NCES enrollment projections have been developed (table 3). Public elementary school enrollment (prekindergarten through grade 8) is projected to show a slight decline of 1 percent between 2003 and 2005 and then increase, so that the projected fall 2015 enrollment is 7 percent higher than the projected 2006 enrollment. Public secondary school enrollment (grades 9 through 12) is expected to show a net decline of 2 percent between 2006 and 2015.
Teachers
A projected 3.6 million elementary and secondary school teachers were engaged in classroom instruction in the fall of 2006 (table 4). This number has risen 19 percent since 1996. The 2006 projected number of teachers includes 3.2 million public school teachers and 0.5 million private school teachers.
The number of public school teachers has risen faster than the number of public school students over the past 10 years, resulting in declines in the pupil/teacher ratio (table 61). In the fall of 2006, the projected public school pupil/teacher ratio was 15.4, compared with 17.1 ten years earlier.
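The pupil/teacher ratio is fall FTE enrollment divided by fall FTE teachers, so the figures above can be cross-checked, roughly, against the teacher count cited in the preceding paragraph (the published ratio uses unrounded FTE counts, so this is only an approximation):

\[
\text{pupil/teacher ratio} = \frac{\text{FTE enrollment}}{\text{FTE teachers}}
\quad\Longrightarrow\quad
\text{enrollment} \approx 15.4 \times 3.2\ \text{million} \approx 49\ \text{million public school pupils}.
\]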
The salaries of public school teachers lost purchasing power in the 1970s due to inflation, but increased at a greater rate than inflation in the 1980s, and since 1990–91 salaries have generally kept pace with inflation (table 75). The average salary for teachers in 2004–05 was $47,750, about 2 percent higher than in 1994–95, after adjustment for inflation.
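Adjustment for inflation here means restating earlier salaries in a common year's dollars using a price index (the indexes actually used are listed in table 31); the notation below is an illustrative sketch rather than a specific NCES series:

\[
\text{salary}_{y}\ \text{in 2004–05 dollars} = \text{salary}_{y} \times \frac{\text{index}_{2004\text{–}05}}{\text{index}_{y}}.
\]

On that basis, the comparison above implies that the 1994–95 average salary, expressed in 2004–05 dollars, was roughly $47,750 / 1.02, or about $46,800.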
Student Performance
Most of the student performance data in the Digest are drawn from the National Assessment of Educational Progress (NAEP). NAEP conducts assessments using three basic designs: long-term trend NAEP, national NAEP, and state NAEP. Each design is described in the paragraphs that follow.
NAEP long-term trend assessments provide information on changes in the basic achievement of America's youth since the early 1970s. They are administered nationally and report student performance at ages 9, 13, and 17 in reading and mathematics. Measuring trends of student achievement or change over time requires the precise replication of past procedures. For example, students of specific ages are sampled in order to maintain consistency with the original sample design. Similarly, the long-term trend instrument does not evolve based on changes in curricula or in educational practices.
The main NAEP assessments provide current information for the nation and specific geographic regions. They include students drawn from both public and private schools and report results for student achievement at grades 4, 8, and 12. The main NAEP assessments follow the frameworks developed by the National Assessment Governing Board and use the latest advances in assessment methodology. The NAEP frameworks are designed to reflect changes in educational objectives and curricula. Because the assessment items reflect curricula associated with specific grade levels, the main NAEP uses samples of students at those grade levels. The differences in procedures between the main NAEP and the long-term trend NAEP mean that their results cannot be compared directly.
Since 1990, NAEP assessments have also been conducted at the state level. Participating states receive assessment results that report on the performance of students in that state. The state assessment is identical in content to the assessment conducted nationally. However, because the national NAEP samples prior to 2002 were not designed to support the reporting of accurate and representative state-level results, separate representative samples of students were selected for each participating jurisdiction/state. From 1990 through 2001, the national sample was a subset of the combined sample of students assessed in each participating state, plus an additional sample from the states that did not participate in the state assessment. Since 2002, a combined sample of public schools has been selected for both state and national NAEP.
Reading
Overall results on the NAEP long-term trend reading assessment for the country's 9-, 13-, and 17-year-old students are mixed. The average reading scores at ages 9 and 13 were higher in 2004 than in 1971 (table 110). The average score for 17-year-olds in 2004 was similar to that in 1971.
For Black 9-, 13-, and 17-year-olds, average reading scores in 2004 were higher than in 1971. At age 9, Black students scored higher on average in 2004 than in any previous assessment year. For Black students at ages 13 and 17, scores increased between 1971 and 2004 (table 110). For White students, the average scores for 9- and 13-year-olds were also higher in 2004 than in 1971. Separate data for Hispanic students were not gathered in 1971, but as with the other racial/ethnic groups, the average reading score for Hispanic students at age 9 was higher in 2004 than in any other assessment year. The average score for Hispanic students at age 13 increased between 1975 and 2004. The scores for 17-year-old Hispanic students also increased between 1975 and 2004, but no measurable changes were seen between 1999 and 2004.
The 2005 main NAEP reading assessment found that reading proficiency varied among public school fourth-graders in the 53 participating jurisdictions (50 states, Department of Defense overseas and domestic schools, and the District of Columbia) (table 114). The U.S. average score was 217. Scores for the participating jurisdictions ranged from a low of 191 in the District of Columbia (204 in Mississippi was the lowest score among the states) to a high of 231 in Massachusetts.
Mathematics
Results from NAEP long-term trend assessments of mathematics proficiency indicate that the scores of 9- and 13-year-old students were higher in 2004 than in 1973 (table 121). For White, Black, and Hispanic 9-, 13-, and 17-year-olds, average mathematics scale scores were higher in 2004 than in 1973.
The 2005 main NAEP assessment of states found that mathematics proficiency varied among public school eighth-graders in the 53 participating jurisdictions (50 states, Department of Defense overseas and domestic schools, and the District of Columbia) (table 125). Overall, 68 percent of these eighth-grade students performed at or above the Basic level in mathematics, and 29 percent performed at or above the Proficient level.
International Comparisons
In 2003, the performance of U.S. 15-year-olds in mathematics literacy and problem solving, as measured by the Program for International Student Assessment (PISA), was lower than the average performance for most Organization for Economic Cooperation and Development (OECD) countries (table 397). In addition to scale scores, PISA used seven proficiency levels (below level 1 and levels 1 through 6, with level 6 being the highest) to describe student performance in mathematics literacy (table 398). In mathematics literacy, the United States had greater percentages of students below level 1 and at levels 1 and 2 than the OECD average, and lower percentages of students at levels 4, 5, and 6.
High School Graduates and Dropouts
The projected number of high school graduates in 2006–07 was 3,232,000 (table 99), including 2,912,000 public school graduates and 321,000 private school graduates. High school graduates include only recipients of diplomas, not recipients of equivalency credentials. The projected 2006–07 total is a record, exceeding the previous high points of 3,176,000 graduates projected for 2005–06 and 3,152,000 graduates in 1976–77. In 2003–04, an estimated 74.3 percent of public high school students graduated on time—that is, received a diploma 4 years after beginning their freshman year (table 101). The number of General Educational Development (GED) credentials issued rose from 332,000 in 1977 to 648,000 in 2001, before falling to 406,000 in 2004 (table 103). The status dropout rate—that is, the proportion of 16- to 24-year-olds who are not enrolled in school and have received neither a diploma nor an equivalency credential—declined from 14 percent in 1977 to 9 percent in 2005 (table 104).
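Both rates cited above are ratios of counts. The status dropout rate follows directly from the definition given in the text; the on-time graduation rate divides diplomas awarded by an estimate of the entering freshman class four years earlier (how that denominator is estimated from earlier enrollment counts is a detail of the published method not restated here, so the second expression is only a sketch):

\[
\text{status dropout rate} = \frac{\text{16- to 24-year-olds not enrolled in school and holding no diploma or equivalency credential}}{\text{all 16- to 24-year-olds}}
\]

\[
\text{on-time graduation rate for 2003–04} \approx \frac{\text{diplomas awarded in 2003–04}}{\text{estimated first-time freshmen in fall 2000}}.
\]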
Educational Technology
Computers have been introduced into schools on a wide scale in recent years. In 2003, the average public school contained 136 instructional computers (table 422). An important technological advance that has followed computers into the classroom is the connection to the Internet. The proportion of instructional rooms with access to the Internet increased from 51 percent in 1998 to 93 percent in 2003 (figure 29). Nearly all schools had access to the Internet in 2003 (table 422).
College Enrollment
College enrollment hit a record level of 17.5 million in fall 2005. Another record of 17.6 million is anticipated for fall 2006 (table 3). Enrollment is expected to increase by an additional 13 percent between 2006 and 2015. Despite decreases in the traditional college-age population during the late 1980s and early 1990s, total enrollment increased during that period (tables 7, 15, 177, and 187). The traditional college-age population (18 to 24 years old) rose 15 percent between 1995 and 2005, and this growth was reflected in rising college enrollment. Between 1995 and 2005, the number of full-time students increased by 33 percent, compared with a 9 percent increase in part-time students (table 175). During the same period, the number of men enrolled increased 18 percent, while the number of women enrolled increased 27 percent.
Faculty and Staff
In the fall of 2005, degree-granting institutions—defined as postsecondary institutions that grant an associate's or higher degree and are eligible for Title IV federal financial aid programs—employed 1.3 million faculty members, including 0.7 million full-time and 0.6 million part-time faculty (table 228). About 19 percent of full-time faculty taught 15 or more hours per week, compared with 8 percent of part-time faculty (tables 233 and 234). About 9 percent of full-time faculty taught 150 or more students, compared with 2 percent of part-time faculty.
Postsecondary Degrees
Postsecondary degrees conferred during the 2005–06 school year are projected to number 682,000 associate's degrees; 1,456,000 bachelor's degrees; 584,000 master's degrees; 85,100 first-professional degrees; and 49,500 doctor's degrees (table 251).
The U.S. Census Bureau collects annual statistics on the educational attainment of the population. Between 1996 and 2006, the proportion of the adult population 25 years of age and over who had completed high school rose from 82 percent to 85 percent, and the proportion of adults with a bachelor's degree increased from 24 percent to 28 percent (table 8). High school completers include those persons who graduated from high school with a diploma, as well as those who completed high school through equivalency programs. The proportion of young adults (25- to 29-year-olds) who had completed high school in 2006 (86 percent) was about the same as it was in 1996 (87 percent). Also, the proportion of young adults who had completed a bachelor's degree in 2006 (28 percent) was not substantively different from the proportion in 1996 (27 percent).
Educational Expenditures
Expenditures for public and private education, from kindergarten through graduate school (excluding postsecondary schools not awarding associate's or higher degrees), are estimated at $922 billion for 2005–06 (table 25). Expenditures of elementary and secondary schools are expected to total $558 billion, while those of degree-granting postsecondary institutions are expected to total $364 billion. Total expenditures for education are expected to amount to 7.4 percent of the gross domestic product in 2005–06, about 0.5 percentage points higher than in 1995–96.
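As a rough consistency check using only the figures in this paragraph, the two components sum to the overall estimate, and the GDP share implies the size of the economy underlying the calculation:

\[
558 + 364 = 922\ \text{billion dollars},
\qquad
\frac{\$922\ \text{billion}}{0.074} \approx \$12.5\ \text{trillion (implied 2005–06 GDP)}.
\]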
Readers should be aware of the limitations of statistics. These limitations vary with the exact nature of a particular survey. For example, estimates based on a sample of institutions will differ somewhat from the figures that would have been obtained if a complete census had been taken using the same survey procedures. Standard errors are available for sample survey data appearing in this report. In most cases, standard errors for all items appear in the printed tables; in some cases, only standard errors for key items are shown. Standard errors that do not appear in the tables are available from NCES upon request. Although some of the surveys conducted by NCES are census or universe surveys (which attempt to collect information from all potential respondents), all surveys are subject to design, reporting, and processing errors, as well as errors due to nonresponse. Differences in sampling, data collection procedures, coverage of the target population, timing, phrasing of questions, scope of nonresponse, interviewer training, data processing, coding, and so forth mean that the results from different sources may not be strictly comparable. More information on survey methodologies can be found in the Guide to Sources (appendix A).
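For a sense of what a standard error conveys: under a simple random sample of size n, the standard error of an estimated proportion is approximately the expression below, and an approximate 95 percent confidence interval is the estimate plus or minus 1.96 standard errors. NCES surveys use complex sample designs, so the published standard errors are computed from the actual designs and are generally larger than this simple formula suggests; the formula is an illustration only:

\[
SE(\hat{p}) \approx \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}},
\qquad
\hat{p} \pm 1.96 \times SE(\hat{p}).
\]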
Unless otherwise noted, all data in this report are for the 50 states and the District of Columbia. Unless otherwise noted, all financial data are in current dollars, not adjusted for changes in the purchasing power of the dollar. Price indexes for inflation adjustments can be found in table 31.
Common data elements are collected in different ways in different surveys. Since the Digest relies on a number of data sources, there are discrepancies in definitions and data across tables in the volume. For example, several different surveys collect data on public school enrollment, and while similar, the estimates are not identical. The definitions of racial/ethnic groups also differ across surveys, particularly with respect to whether Hispanic origin is considered an ethnic group regardless of race, or counted separately as a racial/ethnic group. Individual tables note the definitions used in the given studies.
All statements in the text about differences between two or more groups or changes over time were tested for statistical significance and are statistically significant at the .05 level. Various test procedures were used, depending on the nature of the statement tested. The most commonly used procedures were t tests, equivalence tests, and linear trend tests. Equivalence tests were used to determine whether two statistics are substantively equivalent or substantively different. This was done with a hypothesis test of whether the confidence interval for the difference between sample estimates falls within, or outside of, a preset substantively important difference. In most comparisons involving percentages, a difference of 3.0 percentage points was used to determine substantive equivalence or difference. In some comparisons involving only very small percentages, a smaller difference was used; in cases involving only relatively large values, a larger difference was used, such as $1,000 in the case of annual salaries. Linear trend tests were conducted by evaluating the significance of the slope of a simple regression of the data over time, together with a t test comparing the end points.
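To make these procedures concrete, the sketch below shows, in Python, how a difference test and an equivalence test might be run on a pair of published percentage estimates and their standard errors. The input numbers, the 3.0-percentage-point margin applied to percentages, and the large-sample normal approximation are assumptions for illustration; this is not the exact machinery behind any particular comparison in the Digest.

```python
# Illustrative sketch: compare two independent estimates given their standard errors.
# The inputs are hypothetical; a normal (large-sample) approximation stands in for
# whatever degrees of freedom were used in the actual analyses.
import math
from scipy.stats import norm

def difference_test(est1, se1, est2, se2, alpha=0.05):
    """t-style test of whether two independent estimates differ."""
    diff = est1 - est2
    se_diff = math.sqrt(se1**2 + se2**2)      # SE of a difference of independent estimates
    t = diff / se_diff
    p_value = 2 * (1 - norm.cdf(abs(t)))      # two-sided p-value, normal approximation
    return diff, t, p_value, p_value < alpha  # last element: statistically significant?

def equivalence_test(est1, se1, est2, se2, margin=3.0, alpha=0.05):
    """Equivalence check: is the confidence interval for the difference entirely
    inside the +/- margin judged substantively unimportant?"""
    diff = est1 - est2
    se_diff = math.sqrt(se1**2 + se2**2)
    z = norm.ppf(1 - alpha / 2)
    lower, upper = diff - z * se_diff, diff + z * se_diff
    return (-margin < lower) and (upper < margin)  # True -> substantively equivalent

# Hypothetical example: two percentages (e.g., completion rates in two years),
# each with an assumed standard error of 0.4 percentage points.
print(difference_test(86.0, 0.4, 87.0, 0.4))   # small difference, not significant here
print(equivalence_test(86.0, 0.4, 87.0, 0.4))  # True: within the 3.0-point margin
```

A comparison can fail the difference test and still fail the equivalence test (for example, when standard errors are large), which is why the two procedures are reported separately.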