
Appendix A: Guide to Data Sources for Indicators


National Center for Education Statistics (NCES)

National Assessment of Educational Progress

The National Assessment of Educational Progress (NAEP) is a series of cross-sectional studies initially implemented in 1969 to assess the educational achievement of U.S. students and monitor changes in those achievements. In the main national NAEP, a nationally representative sample of students is assessed at grades 4, 8, and 12 in various academic subjects. The assessment is based on frameworks developed by the National Assessment Governing Board (NAGB). It includes both multiple-choice items and constructed-response items (those requiring written answers). Results are reported in two ways: by average score and by achievement level. Average scores are reported for the nation, for participating states and jurisdictions, and for subgroups of the population. Percentages of students performing at or above three achievement levels (Basic, Proficient, and Advanced) are also reported for these groups.
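
To make the reporting convention concrete, here is a minimal sketch, in Python, of how an average scale score and the percentages at or above each achievement level could be computed from a set of student scores. The cut scores below are hypothetical placeholders for illustration only, not actual NAEP cut points, and real NAEP estimation also involves sampling weights and plausible values that this sketch omits.

```python
from statistics import mean

# Hypothetical cut scores -- NOT actual NAEP achievement-level cut points.
CUT_SCORES = {"Basic": 214, "Proficient": 249, "Advanced": 282}

def summarize(scores):
    """Average scale score and percent of students at or above each level."""
    summary = {"Average score": round(mean(scores), 1)}
    for level, cut in CUT_SCORES.items():
        pct = 100 * sum(s >= cut for s in scores) / len(scores)
        summary[f"At or above {level}"] = round(pct, 1)
    return summary

print(summarize([195, 220, 238, 255, 267, 290, 301]))
```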

From 1990 until 2001, main NAEP was conducted for states and other jurisdictions that chose to participate. In 2002, under the provisions of the No Child Left Behind Act of 2001, all states began to participate in main NAEP, and an aggregate of all state samples replaced the separate national sample. (School district-level assessments—under the Trial Urban District Assessment [TUDA] program—also began in 2002.)

Results are available for the mathematics assessments administered in 2000, 2003, 2005, 2007, 2009, 2011, 2013, and 2015. In 2005, NAGB called for the development of a new mathematics framework. The revisions made to the mathematics framework for the 2005 assessment were intended to reflect recent curricular emphases and better assess the specific objectives for students at each grade level.

The revised mathematics framework focuses on two dimensions: mathematical content and cognitive demand. By considering these two dimensions for each item in the assessment, the framework ensures that NAEP assesses an appropriate balance of content, as well as a variety of ways of knowing and doing mathematics.

Since the 2005 changes to the mathematics framework were minimal for grades 4 and 8, comparisons over time can be made between assessments conducted before and after the framework's implementation for these grades. The changes that the 2005 framework made to the grade 12 assessment, however, were too drastic to allow grade 12 results from before and after implementation to be directly compared. These changes included adding more questions on algebra, data analysis, and probability to reflect changes in high school mathematics standards and coursework; merging the measurement and geometry content areas; and changing the reporting scale from 0–500 to 0–300. For more information regarding the 2005 mathematics framework revisions, see http://nces.ed.gov/nationsreportcard/mathematics/frameworkcomparison.asp.

Results are available for the reading assessments administered in 2000, 2002, 2003, 2005, 2007, 2009, 2011, 2013, and 2015. In 2009, a new framework was developed for the 4th-, 8th-, and 12th-grade NAEP reading assessments.

Both a content alignment study and a reading trend, or bridge, study were conducted to determine if the new reading assessment was comparable to the prior assessment. Overall, the results of the special analyses suggested that the assessments were similar in terms of their item and scale characteristics and the results they produced for important demographic groups of students. Thus, it was determined that the results of the 2009 reading assessment could still be compared to those from earlier assessment years, thereby maintaining the trend lines first established in 1992. For more information regarding the 2009 reading framework revisions, see http://nces.ed.gov/nationsreportcard/reading/whatmeasure.asp.

In 2014, the first administration of the NAEP Technology and Engineering Literacy (TEL) Assessment asked 8th-graders to respond to questions aimed at assessing their knowledge and skill in understanding technological principles, solving technology and engineering-related problems, and using technology to communicate and collaborate. The online report The Nation's Report Card: Technology and Engineering Literacy (NCES 2016-119) presents national results for 8th-graders on the TEL assessment.

The Nation's Report Card: 2015 Mathematics and Reading Assessments (NCES 2015-136) is an online interactive report that presents national and state results for 4th- and 8th-graders on the NAEP 2015 mathematics and reading assessments. The report also presents TUDA results in mathematics and reading for 4th- and 8th-graders. The online interactive report The Nation's Report Card: 2015 Mathematics and Reading at Grade 12 (NCES 2016-018) presents grade 12 results from the NAEP 2015 mathematics and reading assessments.

Results from the 2015 NAEP science assessment are presented in the online report The Nation's Report Card: 2015 Science at Grades 4, 8, and 12 (NCES 2016-162). The assessment measures 4th-, 8th-, and 12th-graders' knowledge in three science content areas (physical science, life science, and Earth and space sciences) and their understanding of four science practices (identifying science principles, using science principles, using scientific inquiry, and using technological design). National results are reported for grades 4, 8, and 12, and results from 46 participating states and 1 jurisdiction are reported for grades 4 and 8. Since a new NAEP science framework was introduced in 2009, results from the 2015 science assessment can be compared to results from the 2009 and 2011 science assessments, but cannot be compared to the science assessments conducted prior to 2009.

NAEP is in the process of transitioning from paper-based assessments to technology-based assessments; consequently, data are needed regarding students' access to and familiarity with technology, at home and at school. The Computer Access and Familiarity Study (CAFS) is designed to fulfill this need. CAFS was conducted as part of the main administration of the 2015 NAEP. A subset of the grade 4, 8, and 12 students who took the main NAEP were chosen to take the additional CAFS questionnaire. The main 2015 NAEP was administered in a paper-and-pencil format to some students and a digital-based format to others; CAFS participants were given questionnaires in the same format as their NAEP questionnaires.

Further information on NAEP may be obtained from:

Daniel McGrath
Reporting and Dissemination Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
daniel.mcgrath@ed.gov
http://nces.ed.gov/nationsreportcard

For the 2015 National Assessment of Educational Progress (NAEP) questionnaire, please see: https://nces.ed.gov/nationsreportcard/bgquest.aspx and https://www.nationsreportcard.gov/sample_questions.aspx.

For the 2015 Computer Access and Familiarity Study (CAFS) questionnaire, please see: https://nces.ed.gov/nationsreportcard/subject/field_pubs/sqb/pdf/2015_sq_computer_access_familiarity.pdf.


Census Bureau

American Community Survey

The Census Bureau introduced the American Community Survey (ACS) in 1996. Fully implemented in 2005, it provides a large monthly sample of demographic, socioeconomic, and housing data comparable in content to the long form of the decennial census, which was last used in the 2000 Census. Aggregated over time, these data serve as a replacement for the decennial census long form. The survey includes questions mandated by federal law, federal regulations, and court decisions.

Since 2011, the survey has been mailed to approximately 295,000 addresses in the United States and Puerto Rico each month, or about 3.5 million addresses annually. A larger proportion of addresses in small governmental units (e.g., American Indian reservations, small counties, and towns) also receives the survey. The monthly sample size is designed to approximate the sampling ratio used in the 2000 Census, which requires more intensive distribution in these areas. The ACS covers the U.S. resident population, which includes the entire civilian noninstitutionalized population; incarcerated persons; institutionalized persons; and active-duty military personnel in the United States. In 2006, the ACS began interviewing residents of group quarters facilities. Institutionalized group quarters include adult and juvenile correctional facilities, nursing facilities, and other health care facilities. Noninstitutionalized group quarters include college and university housing, military barracks, and other noninstitutional facilities such as workers' and religious group quarters and temporary shelters for the homeless.

National-level data from the ACS are available from 2000 onward. The ACS produces 1-year estimates for jurisdictions with populations of 65,000 and over and 5-year estimates for jurisdictions with smaller populations. The 1-year estimates for 2015 used data collected between January 1, 2015, and December 31, 2015, and the 5-year estimates for 2011–2015 used data collected between January 1, 2011, and December 31, 2015. The ACS produced 3-year estimates (for jurisdictions with populations of 20,000 or over) for the periods 2005–2007, 2006–2008, 2007–2009, 2008–2010, 2009–2011, 2010–2012, and 2011–2013. Three-year estimates for these periods will continue to be available to data users, but no further 3-year estimates will be produced.
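
As a sketch of the publication rules just described, the function below maps a jurisdiction's population to the ACS estimate types available for it. It assumes, as simplifications, that 5-year estimates are published for jurisdictions of every size and that the discontinued 3-year product can be gated by the end year of its estimation period (the last 3-year period produced was 2011–2013).

```python
def acs_products(population, period_end_year=2015):
    """ACS estimate types available for a jurisdiction of a given population."""
    products = ["5-year"]              # produced for jurisdictions of all sizes
    if population >= 20_000 and period_end_year <= 2013:
        products.append("3-year")      # discontinued after the 2011-2013 period
    if population >= 65_000:
        products.append("1-year")
    return products

print(acs_products(80_000))                        # ['5-year', '1-year']
print(acs_products(25_000, period_end_year=2013))  # ['5-year', '3-year']
```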

Further information about the ACS is available at https://www.census.gov/programs-surveys/acs/.
For the 2015 American Community Survey (ACS) questionnaire, please see: https://www.census.gov/programs-surveys/acs/methodology/questionnaire-archive.html.

Current Population Survey

The Current Population Survey (CPS) is a monthly survey of about 60,000 households conducted by the U.S. Census Bureau for the Bureau of Labor Statistics. The CPS is the primary source of labor force statistics for the U.S. civilian noninstitutionalized population (it excludes, for example, military personnel and their families living on bases and inmates of correctional institutions). In addition, supplemental questionnaires are used to provide further information about the U.S. population. The March supplement contains detailed questions regarding income. The October supplement contains detailed questions regarding school enrollment and school characteristics; in some years, this supplement has also contained additional questions about computer and internet use. In the July supplement, questions about computer and internet use are the principal focus.

The current sample design, introduced in July 2001, includes about 72,000 households. Each month about 58,900 of the 72,000 households are eligible for interview, and of those, 7 to 10 percent are not interviewed because of temporary absence or unavailability. Information is obtained each month from those in the household who are 15 years of age and older, and demographic data are collected for children 0–14 years of age. In addition, supplemental questions regarding school enrollment are asked about eligible household members ages 3 and older in the October survey. Prior to July 2001, data were collected in the CPS from about 50,000 dwelling units. The samples are initially selected based on the decennial census files and are periodically updated to reflect new housing construction.

A major redesign of the CPS was implemented in January 1994 to improve the quality of the data collected. Survey questions were revised, new questions were added, and computer-assisted interviewing methods were used for the survey data collection. Further information about the redesign is available in Current Population Survey, October 1995 (School Enrollment Supplement) Technical Documentation at http://www.census.gov/prod/techdoc/cps/cpsoct95.pdf.

Caution should be used when comparing data from 1994 through 2001 with data from 1993 and earlier. Data from 1994 through 2001 reflect 1990 census-based population controls, while data from 1993 and earlier reflect 1980 or earlier census-based population controls. Changes in population controls generally have relatively little impact on summary measures such as means, medians, and percentage distributions. They can have a significant impact on population counts. For example, use of the 1990 census-based population controls resulted in about a 1 percent increase in the civilian noninstitutional population and in the number of families and households. Thus, estimates of levels for data collected in 1994 and later years will differ from those for earlier years by more than what could be attributed to actual changes in the population. These differences could be disproportionately greater for certain subpopulation groups than for the total population.

Beginning in 2003, the race/ethnicity questions were expanded. Information on people of Two or more races was included, and the Asian and Pacific Islander race category was split into two categories—Asian and Native Hawaiian or Other Pacific Islander. In addition, questions were reworded to make it clear that self-reported data on race/ethnicity should reflect the race/ethnicity with which the responder identifies, rather than what may be written in official documentation.

The estimation procedure employed for monthly CPS data involves inflating weighted sample results to independent estimates of characteristics of the civilian noninstitutional population in the United States by age, sex, and race. These independent estimates are based on statistics from decennial censuses; statistics on births, deaths, immigration, and emigration; and statistics on the population in the armed services. Generalized standard error tables are provided in the Current Population Reports; methods for deriving standard errors can be found within the CPS technical documentation at http://www.census.gov/programs-surveys/cps/technical-documentation/complete.html. The CPS data are subject to both nonsampling and sampling errors.
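
The ratio-adjustment step described above can be illustrated with a small sketch: base weights are scaled within each age/sex/race cell so that the weighted totals match the independent population controls for that cell. This is a simplified illustration, not the Census Bureau's production weighting, which involves several additional adjustment stages.

```python
def ratio_adjust(records, controls):
    """records: list of (cell, base_weight) pairs;
    controls: {cell: independent population estimate}."""
    totals = {}
    for cell, weight in records:
        totals[cell] = totals.get(cell, 0.0) + weight
    # Scale each weight so the cell's weighted total equals its control.
    return [(cell, w * controls[cell] / totals[cell]) for cell, w in records]

sample = [("F 25-34", 1500.0), ("F 25-34", 1700.0), ("M 25-34", 1600.0)]
controls = {"F 25-34": 6400.0, "M 25-34": 3500.0}
print(ratio_adjust(sample, controls))
# The two "F 25-34" weights are scaled by 6400/3200 = 2.0, so their
# weighted total matches the control; likewise for "M 25-34".
```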

Prior to 2009, standard errors were estimated using the generalized variance function. The generalized variance function is a simple model that expresses the variance as a function of the expected value of a survey estimate. Beginning with March 2009 CPS data, standard errors were estimated using replicate weight methodology. Those interested in using CPS household-level supplement replicate weights to calculate variances may refer to Estimating Current Population Survey (CPS) Household-Level Supplement Variances Using Replicate Weights at http://thedataweb.rm.census.gov/pub/cps/supps/HH-level_Use_of_the_Public_Use_Replicate_Weight_File.doc.
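
The replicate-weight approach can be sketched briefly: the full-sample estimate is recomputed once per replicate weight, and the squared deviations from the full-sample estimate are combined into a variance. The 4/R factor below reflects the successive difference replication formula documented for CPS public-use files (R = 160 replicates); treat the specific constant as an assumption to verify against the technical documentation.

```python
import math

def replicate_variance(theta_full, theta_replicates, factor=4.0):
    """Variance of an estimate via replicate weights:
    var = (factor / R) * sum over replicates of (theta_r - theta_full)^2."""
    r = len(theta_replicates)
    return (factor / r) * sum((t - theta_full) ** 2 for t in theta_replicates)

# Toy example with 8 replicates instead of the 160 used by the CPS.
replicate_estimates = [50.2, 49.7, 50.5, 49.9, 50.1, 50.4, 49.6, 50.0]
variance = replicate_variance(50.0, replicate_estimates)
print(variance, math.sqrt(variance))  # variance and standard error
```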

Further information on the CPS may be obtained from:

Education and Social Stratification Branch
Population Division
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
https://www.census.gov/programs-surveys/cps.html

School Enrollment

Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population ages 3 years and over. Prior to 2001, the October supplement consisted of approximately 47,000 interviewed households. Beginning with the October 2001 supplement, the sample was expanded by 9,000 to a total of approximately 56,000 interviewed households. The main sources of nonsampling variability in the responses to the supplement are those inherent in the survey instrument. The question of current enrollment may not be answered accurately for various reasons. Some respondents may not know current grade information for every student in the household, a problem especially prevalent for households with members in college or in nursery school. Confusion over college credits or hours taken by a student may make it difficult to determine the year in which the student is enrolled. Problems may occur with the definition of nursery school (a group or class organized to provide educational experiences for children) where respondents' interpretations of "educational experiences" vary.

For the October 2015 basic CPS, the household-level nonresponse rate was 12.9 percent. The person-level nonresponse rate for the school enrollment supplement was an additional 8.9 percent. Since the basic CPS nonresponse rate is a household-level rate and the school enrollment supplement nonresponse rate is a person-level rate, these rates cannot be combined to derive an overall nonresponse rate. Nonresponding households may have fewer persons than interviewed ones, so combining these rates may lead to an overestimate of the true overall nonresponse rate for persons for the school enrollment supplement.

Although the principal focus of the October supplement is school enrollment, in some years the supplement has included additional questions on other topics. In 2009, 2010, and 2012, for example, the October supplement included additional questions on computer and internet use.

Further information on the CPS School Enrollment Supplement may be obtained from:

Education and Social Stratification Branch
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
https://www2.census.gov/programs-surveys/cps/techdocs/cpsoct15.pdf

For the 2012 CPS School Enrollment Supplement questionnaire, please see: https://www2.census.gov/programs-surveys/cps/techdocs/cpsoct12.pdf.

Computer and Internet Use

The Current Population Survey (CPS) has been conducting supplemental data collections regarding computer use since 1984. In 1997, these supplemental data collections were expanded to include data on internet access. More recently, data regarding computer and internet use were collected in October 2010, July 2011, October 2012, July 2013, and July 2015.

In the July 2011, 2013, and 2015 supplements, the sole focus was on computer and internet use. In the October 2010 and 2012 supplements, questions on school enrollment were the principal focus, and questions on computer and internet use were less prominent. Measurable differences in estimates taken from these supplements across years could reflect actual changes in the population; however, they could also reflect seasonal variations in data collection or differences in content between the July and October supplements. Therefore, caution should be used when making year-to-year comparisons of CPS computer and internet use estimates.

The most recent computer and internet use supplement, conducted in July 2015, collected household-level information from all eligible CPS households, as well as person-level information from household members ages 3 and over. Information was collected about each household's computer and internet use and about each household member's use of the Internet from any location in the past year. Additionally, information was gathered regarding a randomly selected household respondent's use of the Internet.

For the July 2015 basic CPS, the household-level nonresponse rate was 13.0 percent. The person-level nonresponse rate for the computer and internet use supplement was an additional 23.0 percent. Since one rate is a person-level rate and the other a household-level rate, the rates cannot be combined to derive an overall rate.

Further information on the CPS Computer and Internet Use Supplement may be obtained from:

Education and Social Stratification Branch
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
http://census.gov/topics/population/computer-internet.html

For the 2015 CPS Computer and Internet Use Supplement questionnaire, please see: https://www2.census.gov/programs-surveys/cps/techdocs/cpsjul15.pdf.


International Association for the Evaluation of Educational Achievement

The International Association for the Evaluation of Educational Achievement (IEA) is composed of governmental research centers and national research institutions around the world whose aim is to investigate education problems common among countries. Since its inception in 1958, the IEA has conducted more than 30 research studies of cross-national achievement. The regular cycle of studies encompasses learning in basic school subjects. Examples are the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS). IEA projects also include studies of particular interest to IEA members, such as the TIMSS 1999 Video Study of Mathematics and Science Teaching, the Civic Education Study, and studies on information technology in education.

The international bodies that coordinate international assessments vary in the labels they apply to participating education systems, most of which are countries. IEA differentiates between IEA members, which IEA refers to as "countries" in all cases, and "benchmarking participants." IEA members include countries such as the United States and Ireland, as well as subnational entities such as England and Scotland (which are both part of the United Kingdom), the Flemish community of Belgium, and Hong Kong (a Special Administrative Region of China). IEA benchmarking participants are all subnational entities and include Canadian provinces, U.S. states, and Dubai in the United Arab Emirates (among others). Benchmarking participants, like the participating countries, are given the opportunity to assess the comparative international standing of their students' achievement and to view their curriculum and instruction in an international context.

Some IEA studies, such as TIMSS and PIRLS, include an assessment portion, as well as contextual questionnaires for collecting information about students' home and school experiences. The TIMSS and PIRLS scales, including the scale averages and standard deviations, are designed to remain constant from assessment to assessment so that education systems (including countries and subnational education systems) can compare their scores over time as well as compare their scores directly with the scores of other education systems. Although each scale was created to have a mean of 500 and a standard deviation of 100, the subject matter and the level of difficulty of items necessarily differ by grade, subject, and domain/dimension. Therefore, direct comparisons between scores across grades, subjects, and different domain/dimension types should not be made.
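
The fixed reporting metric amounts to a linear transformation: when each scale was first set, proficiency estimates were mapped so that the distribution had a mean of 500 and a standard deviation of 100. The sketch below illustrates that transformation with made-up calibration values; actual TIMSS and PIRLS scaling uses item response theory and is considerably more involved.

```python
def to_reporting_scale(thetas, mu, sigma):
    """Map latent proficiency estimates to a mean-500, SD-100 scale."""
    return [500 + 100 * (t - mu) / sigma for t in thetas]

thetas = [-1.2, -0.3, 0.0, 0.4, 1.1]           # illustrative latent estimates
scaled = to_reporting_scale(thetas, mu=0.0, sigma=1.0)
print([round(s) for s in scaled])              # [380, 470, 500, 540, 610]
```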

Further information on the International Association for the Evaluation of Educational Achievement may be obtained from http://www.iea.nl.

Trends in International Mathematics and Science Study

The Trends in International Mathematics and Science Study (TIMSS, formerly known as the Third International Mathematics and Science Study) provides data on the mathematics and science achievement of U.S. 4th- and 8th-graders compared with that of their peers in other countries. TIMSS collects information through mathematics and science assessments and questionnaires. The questionnaires request information to help provide a context for student performance. They focus on such topics as students' attitudes and beliefs about learning mathematics and science, what students do as part of their mathematics and science lessons, students' completion of homework, and their lives both in and outside of school; teachers' perceptions of their preparedness for teaching mathematics and science, teaching assignments, class size and organization, instructional content and practices, collaboration with other teachers, and participation in professional development activities; and principals' viewpoints on policy and budget responsibilities, curriculum and instruction issues, and student behavior. The questionnaires also elicit information on the organization of schools and courses. The assessments and questionnaires are designed to specifications in a guiding framework. The TIMSS framework describes the mathematics and science content to be assessed and provides grade-specific objectives, an overview of the assessment design, and guidelines for item development.

TIMSS is on a 4-year cycle. Data collections occurred in 1995, 1999 (8th grade only), 2003, 2007, 2011, and 2015. TIMSS 2015 consisted of five assessments: 4th-grade mathematics; numeracy (a less difficult version of 4th-grade mathematics, newly developed for 2015); 8th-grade mathematics; 4th-grade science; and 8th-grade science. In addition to the 4th- and 8th-grade assessments, the third administration of TIMSS Advanced since 1995 was conducted. TIMSS Advanced assessed final-year (12th-grade) secondary students' achievement in advanced mathematics and physics. The study also collected policy-relevant information about students, curriculum emphasis, technology use, and teacher preparation and training.

TIMSS Sampling and Response Rates

TIMSS 2015 was administered between March and May of 2015 in the United States. The U.S. sample was randomly selected and weighted to be representative of the nation. In order to reliably and accurately represent the performance of each country, international guidelines required that countries sample at least 150 schools and at least 4,000 students per grade (countries with small class sizes of fewer than 30 students per school were directed to consider sampling more schools, more classrooms per school, or both, to meet the minimum target of 4,000 tested students). In the United States, a total of 250 schools and 10,029 students participated in the grade 4 TIMSS survey, and 246 schools and 10,221 students participated in the grade 8 TIMSS (these figures do not include the participation of the state of Florida as a subnational education system, which was separate from and additional to its participation in the U.S. national sample).

TIMSS Advanced, also administered between March and May of 2015 in the United States, required participating countries and other education systems to draw probability samples of students in their final year of secondary school—ISCED Level 3—who were taking or had taken courses in advanced mathematics or who were taking or had taken courses in physics. International guidelines for TIMSS Advanced called for a minimum of 120 schools to be sampled, with a minimum of 3,600 students assessed per subject. In the United States, a total of 241 schools and 2,954 students participated in advanced mathematics, and 165 schools and 2,932 students participated in physics.

In TIMSS 2015, the weighted school response rate for the United States was 77 percent for grade 4 before the use of substitute schools (schools substituted for originally sampled schools that refused to participate) and 85 percent with the inclusion of substitute schools. For grade 8, the weighted school response rate before the use of substitute schools was 78 percent, and it was 84 percent with the inclusion of substitute schools. The weighted student response rate was 96 percent for grade 4 and 94 percent for grade 8.

In TIMSS Advanced 2015, the weighted school response rate for the United States for advanced mathematics was 72 percent before the use of substitute schools and 76 percent with the inclusion of substitute schools. The weighted school response rate for the United States for physics was 65 percent before the use of substitute schools and 68 percent with the inclusion of substitute schools. The weighted student response rate was 87 percent for advanced mathematics and 85 percent for physics. Student response rates are based on a combined total of students from both sampled and substitute schools.
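
The "before and after substitution" distinction in the response rates above can be made concrete with a sketch: the denominator is the weighted count of originally sampled schools, and substitutes count toward the numerator only in the with-substitutes rate. The weights and outcomes below are hypothetical.

```python
def weighted_response_rate(schools, include_substitutes):
    """schools: originally sampled schools as (weight, outcome) pairs, where
    outcome is 'participated', 'substitute' (a replacement school
    participated in its place), or 'refused'."""
    eligible = sum(weight for weight, _ in schools)
    counted = {"participated"} | ({"substitute"} if include_substitutes else set())
    responded = sum(w for w, outcome in schools if outcome in counted)
    return 100 * responded / eligible

sample = [(10, "participated"), (10, "participated"),
          (10, "substitute"), (10, "refused")]
print(weighted_response_rate(sample, include_substitutes=False))  # 50.0
print(weighted_response_rate(sample, include_substitutes=True))   # 75.0
```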

Further information on the TIMSS study may be obtained from:

Stephen Provasnik
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
(202) 245-6442
stephen.provasnik@ed.gov
http://nces.ed.gov/timss
http://www.iea.nl/timss

For the 2015 TIMSS questionnaire, please see: https://nces.ed.gov/timss/questionnaire.asp.


Organization for Economic Cooperation and Development

The Organization for Economic Cooperation and Development (OECD) publishes analyses of national policies and survey data in education, training, and economics in OECD and partner countries. Newer studies include student survey data on financial literacy and on digital literacy.

Program for International Student Assessment

The Program for International Student Assessment (PISA) is a system of international assessments organized by the Organization for Economic Cooperation and Development (OECD), an intergovernmental organization of industrialized countries, that focuses on 15-year-olds' capabilities in reading literacy, mathematics literacy, and science literacy. PISA also includes measures of general, or cross-curricular, competencies such as learning strategies. PISA emphasizes functional skills that students have acquired as they near the end of compulsory schooling.

PISA is a 2-hour assessment. Assessment items include a combination of multiple-choice questions and open-ended questions that require students to develop their own responses. PISA scores are reported on a scale that ranges from 0 to 1,000, with the OECD mean set at 500 and the standard deviation at 100. In 2015, science, reading, and mathematics literacy were assessed through a computer-based assessment in the majority of countries, including the United States. Education systems could also participate in optional pencil-and-paper financial literacy assessments and computer-based mathematics and reading assessments. In each education system, the assessment is translated into the primary language of instruction; in the United States, all materials are written in English.

Forty-three education systems participated in the 2000 PISA; 41 education systems participated in 2003; 57 (30 OECD member countries and 27 nonmember countries or education systems) participated in 2006; and 65 (34 OECD member countries and 31 nonmember countries or education systems) participated in 2009. (An additional nine education systems administered the 2009 PISA in 2010.) In PISA 2012, 65 education systems (34 OECD member countries and 31 nonmember countries or education systems), as well as the U.S. states of Connecticut, Florida, and Massachusetts, participated. In the 2015 PISA, 73 education systems (35 OECD member countries and 31 nonmember countries or education systems), as well as the states of Massachusetts and North Carolina and the territory of Puerto Rico, participated.

To implement PISA, each of the participating education systems scientifically draws a nationally representative sample of 15-year-olds, regardless of grade level. In the PISA 2015 national sample for the United States, about 5,700 students from 177 public and private schools participated. Massachusetts, North Carolina, and Puerto Rico also participated in PISA 2015 as separate education systems. In Massachusetts, about 1,400 students from 48 public schools participated; in North Carolina, about 1,900 students from 54 public schools participated; and in Puerto Rico, about 1,400 students from 47 public and private schools participated.

The intent of PISA reporting is to provide an overall description of performance in reading literacy, mathematics literacy, and science literacy every 3 years, and to provide a more detailed look at each domain in the years when it is the major focus. These cycles will allow education systems to compare changes in trends for each of the three subject areas over time. In the first cycle, PISA 2000, reading literacy was the major focus, occupying roughly two-thirds of assessment time. For 2003, PISA focused on mathematics literacy as well as the ability of students to solve problems in real-life settings. In 2006, PISA focused on science literacy; in 2009, it focused on reading literacy again; and in 2012, it focused on mathematics literacy. PISA 2015 focused on science, as it did in 2006.

Further information on PISA may be obtained from:

Patrick Gonzales
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
patrick.gonzales@ed.gov
http://nces.ed.gov/surveys/pisa

For the 2015 PISA questionnaire, please see: https://nces.ed.gov/surveys/pisa/questionnaire.asp.

Program for the International Assessment of Adult Competencies

The Program for the International Assessment of Adult Competencies (PIAAC) is a cyclical, large-scale study that aims to assess and compare the broad range of basic skills and competencies of adults around the world. Developed under the auspices of the Organization for Economic Cooperation and Development (OECD), it is the most comprehensive international survey of adult skills ever undertaken. Adults were surveyed in 24 participating countries in 2012 and in an additional 9 countries in 2014.

PIAAC focuses on what are deemed basic cognitive and workplace skills necessary to adults' successful participation in 21st-century society and in the global economy. Skills assessed include literacy, numeracy, problem solving in technology-rich environments, and basic reading skills. PIAAC measures the relationships between these skills and other characteristics such as individuals' educational background, workplace experiences, and occupational attainment. PIAAC was administered on laptop computers or in paper-and-pencil mode. In the United States, the background questionnaire was administered in both English and Spanish, and the cognitive assessment was administered only in English.

The 2012 PIAAC assessment for the United States included a nationally representative probability sample of households. This household sample was selected on the basis of a four-stage, stratified area sample: (1) primary sampling units (PSUs) consisting of counties or groups of contiguous counties; (2) secondary sampling units (referred to as segments) consisting of area blocks; (3) housing units containing households; and (4) eligible persons within households. Person-level data were collected through a screener, a background questionnaire, and the assessment.

Based on the screener data, 6,100 U.S. respondents ages 16 to 65 were selected to complete the 2012 background questionnaire and the assessment; 4,898 actually completed the background questionnaire. Of the 1,202 respondents who did not complete the background questionnaire, 112 were unable to do so because of a literacy-related barrier: either the inability to communicate in English or Spanish or a mental disability. Twenty others were unable to complete the questionnaire due to technical problems. The final response rate for the background questionnaire—which included respondents who completed it and respondents who were unable to complete it because of a language problem or mental disability—was 82.2 percent weighted. The overall weighted response rate for the household sample—the product of the component response rates—was 70.3 percent.
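
Since the overall rate is the product of the component response rates, the arithmetic can be checked directly: with a background questionnaire response rate of 82.2 percent and an overall household-sample rate of 70.3 percent, the implied screener response rate is roughly 0.703 / 0.822, or about 85.5 percent (an inference for illustration, not a published figure).

```python
def overall_response_rate(*component_rates):
    """Overall rate as the product of component response rates."""
    product = 1.0
    for rate in component_rates:
        product *= rate
    return product

# Inferred screener rate (~0.855) times the background questionnaire
# rate (0.822) reproduces the reported overall rate of about 0.703.
print(round(overall_response_rate(0.855, 0.822), 3))  # 0.703
```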

The 2014 PIAAC supplement repeated the 2012 administration of PIAAC with an additional sample of U.S. adults in order to enhance the 2012 sample. Participants were drawn from households not included in the 2012 sample, within the same PSUs.

Key to PIAAC's value is its collaborative and international nature. In the United States, NCES has consulted extensively with the Department of Labor in the development of the survey, and staff from both agencies are co-representatives of the United States in PIAAC's international governing body. Internationally, PIAAC has been developed through the collaboration of OECD staff and participating countries' representatives from their ministries or departments of education and labor. Through this cooperative effort, all participating countries follow the quality assurance guidelines set by the OECD consortium and closely follow all agreed-upon standards set for survey design, assessment implementation, and reporting of results.

Further information on PIAAC may be obtained from:

Holly Xie
International Assessment Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
holly.xie@ed.gov
https://nces.ed.gov/surveys/piaac/
http://www.oecd.org/skills/piaac/

For the PIAAC background questionnaire, please see: https://nces.ed.gov/surveys/piaac/bgquestionnaire.asp. For PIAAC sample items, please see: https://nces.ed.gov/surveys/piaac/sample_lit.asp.
