Since the 1960s, the United States has participated actively in international projects that are designed to provide key information about the performance of the U.S. education system relative to education systems in other countries. These projects include the International Indicators of Education Systems (INES) project at the Organization for Economic Cooperation and Development (OECD); the Progress in International Reading Literacy Study (PIRLS); the Trends in International Mathematics and Science Study (TIMSS); the Program for International Student Assessment (PISA); and, more recently, the Program for the International Assessment of Adult Competencies (PIAAC). This report draws on the most current information available to present a set of education indicators that compare the education system in the United States with those in other economically developed countries. Updated information from these various projects will be incorporated in subsequent reports.
Although the international education projects cited above involve many countries worldwide, the comparisons in this report focus on the Group of 20 (G-20) countries: Argentina, Australia, Brazil, Canada, China, France, Germany, India, Indonesia, Italy, Japan, Mexico, the Republic of Korea, the Russian Federation, Saudi Arabia, South Africa, Turkey, the United Kingdom, and the United States.1 These are among the most industrialized countries in the world. The G-20 countries were selected as a comparison group because of the similarities in their economic development and because the group includes many of the United States' major economic partners. The leaders of these countries meet regularly to discuss economic and other policy issues.
This is the sixth report in the Comparative Indicators of Education series published by the National Center for Education Statistics (NCES). Whereas all of the prior reports focused on the G-8 countries, this report expands its focus to include the G-20 countries.
About two-thirds of the indicators use 2011 data from PIRLS or TIMSS or 2012 data from PISA or PIAAC. Using these recent data available from all the international assessments in which the United States participates, the report is able to compare the performance of students across the span of primary and secondary education, as well as of adults, in a variety of key subjects and competency areas, such as reading and mathematics. The section on academic performance, for example, includes indicators providing (a) snapshots of performance (or the percentages of the population reaching different proficiency levels or international benchmarks in a variety of content domains) from fourth grade through adulthood; (b) a closer look at student achievement in content subdomains in reading, mathematics, and science; and (c) an examination of changes in student performance over time in reading, mathematics, and science. The student and teacher questionnaires that accompany the international student assessments are also used to provide data for some updated and some new indicators describing the contexts of learning in the G-20 countries. Notably, all of the assessment data and nearly all the related tests for statistical significance were obtained using the NCES International Data Explorer (IDE), which is an online tool (found at http://nces.ed.gov/surveys/international/ide/) allowing users to create statistical tables and charts using data from international assessments.
Most of the remaining one-third of the indicators draw on the international education data compiled by the OECD in the 2013 edition of Education at a Glance or provided in its online database. These data were largely used to update several indicators that have been presented previously.
Many of the indicators in this report refer to at least one of the following education levels: preprimary education, primary education, secondary education, and higher education. A brief overview of the education levels is presented here to provide the reader with a frame of reference (see appendix A for more detailed descriptions of countries' education systems). To ensure comparability in the indicators, each country aligned its national education data to correspond with the definitions of education levels that were developed for the 1997 revision of the International Standard Classification of Education (ISCED97) (United Nations Educational, Scientific and Cultural Organization 1997). The following descriptions highlight the key features of (1) education programs from preprimary through secondary education and (2) higher education programs.
Preprimary education includes programs of education for children at least 3 years of age that involve organized, center-based instructional activities; in most countries, preprimary education is not compulsory. Primary education includes programs that are designed to give students a sound basic education in reading, writing, and mathematics, along with an elementary understanding of other subjects, such as history, geography, science, art, and music. In the international classification, primary education usually begins at the start of compulsory education (around age 6) and lasts for 6 years. In the United States, this is generally synonymous with elementary education. Secondary education encompasses two stages: lower secondary education and upper secondary education. Lower secondary education includes programs that are designed to complete basic education; the standard duration in the international classification is 3 years. Upper secondary education is designed to provide students with more in-depth knowledge of academic or vocational subjects and to prepare them for higher level academic or vocational studies or entry into the labor market. The standard duration of upper secondary education in the international classification is 3 years. In the United States, lower secondary education and upper secondary education generally correspond to junior high school and high school, respectively.
Higher education includes tertiary programs2 that fall into three main categories:
The international classification also includes an education level that straddles the boundary between upper secondary and higher education: postsecondary nontertiary education. These programs of study—which are primarily vocational in nature—are generally taken after the completion of upper secondary education. They are often not significantly more advanced than upper secondary programs, but they serve to broaden the knowledge of participants who have already completed upper secondary education. In the United States, these programs are often in the form of occupationally specific vocational certificate programs, such as 1-year certification programs offered at technical institutes or community colleges.3
Matching the education levels of individual education systems to the ISCED97 classification can be challenging, because the particulars of individual countries seldom fit the ISCED97 perfectly. Using ISCED97 classifications as a starting point, NCES worked with education professionals in other G-20 countries to create a general overview of each country's education system. As an aid to the reader, schematics of how the ISCED97 applies to each of the G-20 education systems are provided in appendix A, accompanied by text describing each system in greater detail.
Following this introductory section, the report presents 29 indicators, each of which compares a different aspect of the U.S. education system and the education systems of the other G-20 countries. The indicators are organized into the following sections:
The first section, population and school enrollment, presents indicators that suggest the potential demand for education in countries as measured by the size and growth of their school-age population and current and past levels of enrollment in formal education. The section concludes with an indicator that examines the extent to which international or foreign students are enrolled in higher education across the G-20 countries.
The next section, academic performance, has indicators spanning school levels and adulthood, as well as subject areas including reading literacy, mathematics, science, and problem solving in technology-rich environments. The indicators present findings on student performance in the G-20 countries, including the distribution of achievement across proficiency levels, average performance on content subscales, and changes in average performance on overall scales in reading, mathematics, and science.
The third section highlights a range of key policy-relevant issues pertaining to contexts for learning across the G-20 countries. This section presents data on differences between males' and females' attitudes toward learning across the grades, as well as on teachers' reports of their instructional strategies, opportunities for collaboration and professional development, and job satisfaction and morale.
The fourth section provides a comparative look at expenditure for education, including one indicator on public school teachers' salaries in primary and secondary education and one on annual changes in education expenditures.
The final section, education returns: educational attainment and income, focuses on graduation rates, educational attainment and degrees, employment rates, and earnings (including disaggregation by sex and field of study).
Each indicator is presented in a two-part format. The first part presents key findings and highlights how the United States compares with its G-20 peers (for which data are available) on the issue examined in the indicator. A section that defines the terms used in the indicator and describes key features of the methodology used to produce it follows the key findings. The second part presents the tables and figures that support the key findings. These tables and figures also include the specific data source for the indicator and more detailed notes on interpreting the data.
There are five main sources of data for this report:
Data for indicator 1, on the school-age population, are from the International Data Base (IDB) of the U.S. Census Bureau.
Many of the indicators in this report present student data. Some of the indicators show
Other indicators use the student as the unit of analysis, but the data are reported from the perspective of teachers, such as the percentage of students whose teachers reported participating in professional development (e.g., indicators 18–22).
In several other indicators, the unit of analysis is not the student. For example, the unit of analysis may be
When interpreting the data presented in this report, it is important for readers to be aware of limitations based on the source of information and problems that may exist in verifying comparability in reporting.
Except for indicator 23, which explicitly states that the data pertain only to public school teachers, the indicators in this report include data from both public and private schools.
Many of the indicators in this report do not contain data for the complete set of G-20 countries; specific countries are sometimes excluded from, or only partially included in, an indicator. This occurs when source data are not reported, when specific countries or jurisdictions within a country do not participate in a particular survey, or when a country's data do not meet reporting standards. Nevertheless, every G-20 country is featured in at least one indicator. Countries that are only partially included are noted in the respective tables and figures.
One country warrants special mention: the United Kingdom. The United Kingdom—which includes England, Northern Ireland, Scotland, and Wales—participated in the various cycles of PISA as a unified education system; thus, results are reported for the United Kingdom as a whole in indicators drawing on PISA data. However, in TIMSS and PIRLS, England, Northern Ireland, and Scotland participated as individual jurisdictions. Results are reported for these jurisdictions separately—for example, as the United Kingdom (England) or the United Kingdom (Northern Ireland)—in indicators drawing on TIMSS and PIRLS data. In PIAAC, only England and Northern Ireland participated, and their results are reported jointly. Except for starting salary data, all other results not mentioned are reported for the United Kingdom as a whole.4
Additionally, because some of the indicators of academic performance focus on the most recent administration (e.g., 2011 for TIMSS and PIRLS and 2012 for PISA) and some focus on changes across years, the countries included in the indicators may vary even though they draw from the same assessment program's data. For example, Scotland did not participate in PIRLS 2011, but it did participate in 2001 and 2006; thus, it appears in the indicator on changes but not in the indicator on 2011 results. Any country participating in at least two of the years presented in the three-time-point trend indicators was included in those indicators.
In general, the countries shown in exhibit 1-1 below are included in indicators using the identified sources. For example, the countries participating in PIRLS 2011, or at least two other cycles of PIRLS, include Australia, Canada, France, Germany, Indonesia, Italy, the Russian Federation, Saudi Arabia, the United Kingdom (including England, Northern Ireland, and Scotland), and the United States. In indicators using INES data, the reporting G-20 countries vary somewhat; these are shown in each indicator.
While every effort was made to use the most up-to-date data available across the G-20 countries (usually from 2010, 2011, or 2012), data from earlier years were sometimes used if more recent data were not available. To make this clear to the reader, these occurrences are noted in relevant tables and figures.
Exhibit 1-1. G-20 country coverage in indicators, by data source
|Country||PIRLS (2011 or at least two other cycles)||TIMSS (2011 or at least two other cycles)||PISA (2012 or at least two other cycles)||INES|
|Republic of Korea||✓||✓||✓||Varies|
|1 Although Canada did not participate in TIMSS 2011 as a unified education system, several Canadian provinces participated individually; however, their results are not shown in this report.|
Each of the international assessments has established technical standards of data quality, including participation and response rate standards, which countries must meet in order to be included in the comparative results. For the student assessments (PIRLS, TIMSS, and PISA), response rate standards were set using composites of response rates at the school, classroom, student, and teacher levels, and response rates were calculated with and without the inclusion of substitute schools that were selected to replace schools refusing to participate.5 For the adult assessment (PIAAC), response rate standards were based on the participant response rate and were calculated with and without the inclusion of substitutes. These standards are described in detail in the respective technical reports (Martin and Mullis 2013; OECD 2014; OECD in press).
Consistent with NCES statistical standards, item response rates below 85 percent are footnoted in the tables and figures of this report, as are instances where reporting standards are not met because there are too few observations to provide reliable estimates.
Eleven of the indicators presented in this report (indicators 1–4 and 23–29) are derived either from administrative records that are based on universe collections or from national sample surveys for which standard errors were not available. Consequently, for these indicators, no tests of statistical significance were conducted to establish whether observed differences from the U.S. average were statistically significant. However, for the 18 other indicators derived from PIRLS, TIMSS, PISA, and PIAAC data (indicators 5–22), standard t tests were calculated for comparisons of estimates within or between countries (e.g., to test whether a U.S. estimate is statistically different from other G-20 countries' estimates). Differences were reported if they were found to be statistically significant at the .05 level, using two-tailed tests of significance for comparisons of independent samples. No adjustments were made for multiple comparisons. Where feasible, these differences are noted in the figures with an asterisk. The exceptions are the figures for indicators 5 through 8 and 20 through 21, where including asterisks would make the presentation too cluttered.
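The comparison described above—a two-tailed test of the difference between two independent estimates—can be sketched as follows. This is an illustrative sketch only: the function name and the numeric values are hypothetical, a large-sample normal approximation stands in for the t distribution, and this is not the NCES IDE implementation.

```python
import math

def significant_difference(est_a, se_a, est_b, se_b, alpha=0.05):
    """Two-tailed test of the difference between two independent estimates.

    est_a, est_b: point estimates (e.g., country average scores)
    se_a, se_b:   their standard errors
    Returns (difference, p_value, significant_at_alpha).
    """
    diff = est_a - est_b
    # Standard error of a difference between independent estimates
    se_diff = math.sqrt(se_a**2 + se_b**2)
    t = diff / se_diff
    # Two-tailed p value under a large-sample normal approximation
    p = math.erfc(abs(t) / math.sqrt(2))
    return diff, p, p < alpha

# Hypothetical average scores and standard errors, for illustration only
diff, p, sig = significant_difference(496, 3.2, 481, 4.1)
```

With these hypothetical inputs, the 15-point gap is large relative to its standard error, so the difference would be flagged as significant at the .05 level.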
Percentage-point differences presented in the text were computed from unrounded numbers; therefore, they may differ from computations made using the rounded whole numbers that appear in the tables and figures.
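The note above can be illustrated with hypothetical figures: a gap computed from unrounded estimates can differ from the gap a reader would compute from the rounded values printed in a table.

```python
# Hypothetical percentage estimates, for illustration only
us_estimate, peer_estimate = 45.64, 43.37

# Gap computed from unrounded numbers, as in the report text
unrounded_gap = us_estimate - peer_estimate        # about 2.3 points

# Gap a reader might compute from the rounded whole numbers in a table
rounded_gap = round(us_estimate) - round(peer_estimate)  # 46 - 43 = 3
```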
NCES produced five earlier reports in this series—in 2011, 2009, 2006, 2004, and 2002. The earlier reports covered only the G-8 countries:
General information about the International Activities Program at NCES, including work on international comparisons in education, can be found at http://nces.ed.gov/surveys/international.
1 Although the European Union is a member of the G-20, it is not included in the indicators since it is a political entity that represents a number of countries (including some for which data are already included in the indicators).
2 In the international classification, more advanced postsecondary education (such as attending a 4-year college or university) is referred to as "tertiary education." In the current report, the term "higher education" is used because this term is more familiar to American readers.
3 In an indicator on annual education expenditures (indicator 24), postsecondary nontertiary education data are included with secondary education and/or higher education data for one or more countries as specified in the figures. In indicators on the percentage distribution of the population by highest level of education completed (indicator 26), employment rates (indicator 28), and the distribution of the population by education and income (indicator 29), postsecondary nontertiary education data are included with upper secondary education data for all G-20 countries reporting data.
4 Data are available for subnational jurisdictions in three other countries: Canada, China, and the United States. However, in contrast to the United Kingdom, these data have been excluded from the report for several reasons. First, none of the Canadian, Chinese, or U.S. subnational jurisdictions have the level of autonomy of the United Kingdom's subnational jurisdictions, which are autonomous in all areas except foreign affairs. Second, whereas the United Kingdom's subnational jurisdictions represent a large percentage of the total population and vary little economically, this is not the case for the subnational jurisdictions in the other countries.
5 International requirements state that each country must make every effort to obtain cooperation from the sampled schools, but the requirements also recognize that this is not always possible. Thus, it is allowable to use substitute schools as a means to avoid sample size loss associated with school nonresponse. To do this, each sampled school was assigned two substitute schools in the sampling frame. Substitutes for noncooperating sampled schools were identified by assigning as substitute schools the schools that immediately preceded and followed the sampled school in the frame. The sampling frame was sorted by the stratification variables and by a measure of size to ensure that any sampled school's substitute had similar characteristics.
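The neighbor-based assignment described in this note can be sketched as below. The function name and school identifiers are hypothetical; an actual sampling frame also carries the stratification and measure-of-size variables used for sorting.

```python
def assign_substitutes(sorted_frame, sampled_indices):
    """Assign each sampled school its two substitutes: the schools that
    immediately precede and follow it in the sorted sampling frame.

    sorted_frame:    list of school IDs, already sorted by stratification
                     variables and a measure of size
    sampled_indices: positions of the sampled schools in the frame
    Returns a dict mapping each sampled school to (first, second) substitute.
    """
    substitutes = {}
    for i in sampled_indices:
        first = sorted_frame[i - 1] if i > 0 else None
        second = sorted_frame[i + 1] if i + 1 < len(sorted_frame) else None
        substitutes[sorted_frame[i]] = (first, second)
    return substitutes

# Hypothetical five-school frame; the sampled school is at position 2
frame = ["school_A", "school_B", "school_C", "school_D", "school_E"]
subs = assign_substitutes(frame, [2])
# school_C's substitutes are its frame neighbors, school_B and school_D
```

Because the frame is sorted before sampling, a school's neighbors share its stratum and are similar in size, which is what makes them reasonable substitutes.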