
Frequently Asked Questions

In addition to the PIAAC-specific questions below, more general FAQs about large-scale international assessments are also available.


General information about the PIAAC study

PIAAC assesses three key skill areas needed for successful participation in modern society and the global economy—literacy, numeracy, and digital problem solving—and collects information on adults’ education, work experience, and other factors.

Literacy refers to the ability to understand, use, and respond appropriately to written texts. Numeracy refers to the ability to use basic mathematical and computational skills. Digital Problem Solving (also referred to as “problem solving in technology-rich environments”) means the ability to access and interpret information in digital environments to perform practical tasks.

All of the countries that participated in PIAAC in 2012 assessed the domains of literacy and numeracy in both a paper-and-pencil mode and a computer-administered mode. In addition, some countries assessed problem solving (administered on a computer) as well as the components of reading (administered only in a paper-and-pencil mode). The United States assessed all four domains.

For more information on what PIAAC is designed to measure, please refer to What PIAAC Measures.


The assessment was designed to be valid cross-culturally and cross-nationally.

PIAAC assessment questions are developed in a collaborative, international process and are based on frameworks developed by internationally known experts in each subject or domain. Assessment experts and developers from ministries/departments of education and labor and OECD staff participated in the conceptualization, creation, and extensive year-long reviews of assessment questions. In addition, the PIAAC Consortium's support staff, assisted by expert panels, researchers, and working groups, developed PIAAC's Background Questionnaire. The PIAAC Consortium also guided the development of common standards and procedures for collecting and reporting data, as well as the international “virtual machine” software that administers the assessment uniformly across countries. All PIAAC countries follow the common standards and procedures and use the virtual machine software when conducting the survey and assessment. As a result, PIAAC can provide a reliable and comparable measure of literacy skills in the adult population of participating countries.

Before the administration of the assessment, a field test was conducted in the participating countries. The PIAAC Consortium analyzed the field-test data and implemented changes to eliminate problematic test items or revise procedures prior to the administration of the assessment.


The problem solving in technology-rich environments (PS-TRE) domain, also referred to as "digital problem solving," assesses the cognitive processes of problem solving: goal setting, planning, selecting, evaluating, organizing, and communicating results. In a digital environment, these skills involve understanding electronic texts, images, graphics, and numerical data, as well as locating, evaluating, and critically judging the validity, accuracy, and appropriateness of the accessed information.

The environment in which PIAAC problem solving is assessed is meant to reflect the fact that digital technology has changed the ways in which individuals live their day-to-day lives, communicate with others, work, conduct their affairs, and access information. Information and communication technology tools such as computer applications, the Internet, and mobile technologies are all part of the environments in which individuals operate. In PIAAC, items for problem solving in technology-rich environments are presented on laptop computers in simulated software applications using commands and functions commonly found in e-mail, web browsers, and spreadsheets. Definitions and example items for the PS-TRE domain can be found on the NCES What PIAAC Measures page.


PIAAC assesses adults in the official language or languages of each participating country. Based on a 1988 congressional mandate and the 1991 National Literacy Act, the U.S. Department of Education is required to evaluate the status and progress of adults' literacy in English. However, in order to obtain background information from a wide range of respondents in the United States, the PIAAC Background Questionnaire was administered in both English and Spanish.


Over the past decade, various estimates of the number of U.S. adults with low skills (i.e., literacy at or below PIAAC proficiency level 1 or numeracy at or below PIAAC proficiency level 1) have been reported. These estimates reflect different populations, as explained below.

Literacy

When looking at the population of all working-age adults (ages 16–65) and older adults (ages 66–74) combined, PIAAC data show that there are 52 million U.S. adults with low literacy skills. This estimate can be calculated using the combined data from all three rounds of the U.S. PIAAC data collection (2012, 2014, and 2017) by including (a) the internationally comparative adult population (ages 16–65) who are at PIAAC’s proficiency level 1 or below, (b) older adults (ages 66–74) who are at PIAAC’s proficiency level 1 or below, and (c) those adults who could not participate in PIAAC’s background survey due to a language barrier or a cognitive or physical inability to be interviewed; they are officially called “literacy-related nonrespondents.”

When looking at the population of just working-age adults (ages 16–65), PIAAC data show that there are 43 million U.S. adults with low literacy skills. This estimate can be calculated using the combined data from the first two rounds of the U.S. PIAAC data collection (2012 and 2014) by including (a) the internationally comparative adult population (ages 16–65) who are at PIAAC’s proficiency level 1 or below and (b) literacy-related nonrespondents who could not participate in PIAAC’s background survey due to a language barrier or a cognitive or physical inability to be interviewed. This estimate was reported in Adult Literacy in the United States.

When looking at the population of just working-age adults (ages 16–65), excluding adults who could not participate in PIAAC’s background survey, PIAAC data show that there are 36 million U.S. adults with low literacy skills. This estimate was originally calculated using data from the first round of PIAAC (PIAAC 2012) and published in the OECD’s 2013 report Time for the U.S. to Reskill? This estimate is no longer recommended for use as it does not include literacy-related nonrespondents who could not participate in PIAAC’s background survey due to a language barrier or a cognitive or physical inability to be interviewed.

Table 1. Estimated number of U.S. adults with low literacy levels

Estimate | Age range | Literacy-related nonrespondents1 | Literacy proficiency levels included2 | Source | Reference year(s)
52.0 million | 16–65 and 66–74 | Included | Level 1, Below Level 1 | This estimate is calculated using the latest combined 2012/2014/2017 file. | 2012/2014/2017
43.0 million | 16–65 | Included | Level 1, Below Level 1 | This estimate was reported in the Data Point on Adult Literacy in the United States. | 2012/2014
36.0 million3 | 16–65 | Excluded | Level 1, Below Level 1 | This estimate was reported in Time for the U.S. to Reskill? and has been cited in other reports, such as Making Skills Everyone's Business. | 2012

1 Literacy-related nonrespondents are those adults who could not participate in PIAAC due to a language barrier or a cognitive or physical inability to be interviewed.
2 For definitions of literacy proficiency levels, see the PIAAC Proficiency Levels for Literacy.
3 This estimate is no longer recommended for use.


Numeracy

When looking at the population of all working-age adults (ages 16–65) and older adults (ages 66–74) combined, PIAAC data show that there are 74 million U.S. adults with low numeracy skills. This estimate can be calculated using the combined data from all three rounds of the U.S. PIAAC data collection (2012, 2014, and 2017) by including (a) the internationally comparative adult population (ages 16–65) who are at PIAAC’s proficiency level 1 or below, (b) older adults (ages 66–74) who are at PIAAC’s proficiency level 1 or below, and (c) those adults who could not participate in PIAAC’s background survey due to a language barrier or a cognitive or physical inability to be interviewed; they are officially called “literacy-related nonrespondents.”

When looking at the population of just working-age adults (ages 16–65), PIAAC data show that there are 63 million U.S. adults with low numeracy skills. This estimate can be calculated using the combined data from the first two rounds of the U.S. PIAAC data collection (2012 and 2014) by including (a) the internationally comparative adult population (ages 16–65) who are at PIAAC’s proficiency level 1 or below and (b) literacy-related nonrespondents who could not participate in PIAAC’s background survey due to a language barrier or a cognitive or physical inability to be interviewed. This estimate was reported in Adult Numeracy in the United States.

Table 2. Estimated number of U.S. adults with low numeracy levels

Estimate | Age range | Literacy-related nonrespondents1 | Numeracy proficiency levels included2 | Source | Reference year(s)
73.9 million | 16–74 | Included | Level 1, Below Level 1 | This estimate is calculated using the latest combined 2012/2014/2017 file. | 2012/2014/2017
62.7 million | 16–65 | Included | Level 1, Below Level 1 | This estimate was reported in the Data Point on Adult Numeracy in the United States. | 2012/2014

1 Literacy-related nonrespondents are those adults who could not participate in PIAAC due to a language barrier or a cognitive or physical inability to be interviewed.
2 For definitions of numeracy proficiency levels, see PIAAC Proficiency Levels for Numeracy.
Note: No estimate for the number of adults with low numeracy skills was published by the OECD using just the PIAAC 2012 data.


Countries that participate in PIAAC must draw a sample of individuals ages 16–65 that represents the entire population of adults living in households in the country. Some countries draw their samples from national registries of all persons in the country; others draw their samples from census data. In the United States, a nationally representative household sample was drawn from the most current Census Bureau population estimates.

The sample design employed by PIAAC in the U.S. data collections is generally referred to as a four-stage stratified area probability sample. This method involves the selection of (1) primary sampling units (PSUs) consisting of counties or groups of contiguous counties, (2) secondary sampling units (referred to as segments) consisting of area blocks, (3) dwelling units (DUs) selected from address listings, and (4) eligible persons (the ultimate sampling unit) within DUs. Random selection methods are used at each stage of sampling. This sample design ensured the production of reliable statistics, with a minimum of 5,000 completed cases in the first round of data collection. For more information about the sample design used in each round of U.S. data collection, see question 24. More details on the sample design can also be found in the U.S. PIAAC 2012/14/17 Technical Report.
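As a rough illustration of how such a multi-stage selection proceeds, the sketch below draws a few units at random at each of the four stages. It is only a toy example with a made-up frame; the actual PIAAC design also involves stratification, probability-proportional-to-size selection, and weighting.

```python
# Toy sketch of a four-stage area sample (illustration only; not the actual
# NCES/PIAAC sampling code, which also uses stratification and PPS selection).
import random

random.seed(2012)  # reproducible illustration

# Hypothetical frame: counties -> area segments -> dwelling-unit addresses
frame = {
    "county_A": {"segment_1": ["du_1", "du_2", "du_3"],
                 "segment_2": ["du_4", "du_5"]},
    "county_B": {"segment_3": ["du_6", "du_7", "du_8", "du_9"]},
    "county_C": {"segment_4": ["du_10", "du_11"]},
}

# Stage 1: select primary sampling units (counties or county groups)
psus = random.sample(list(frame), k=2)

sampled_persons = []
for psu in psus:
    # Stage 2: select area segments within each sampled PSU
    for seg in random.sample(list(frame[psu]), k=1):
        # Stage 3: select dwelling units from the segment's address listing
        for du in random.sample(frame[psu][seg], k=2):
            # Stage 4: select one eligible person within the dwelling unit;
            # the household roster here is a hypothetical placeholder.
            roster = [f"{du}_person_{i}" for i in range(1, 3)]
            sampled_persons.append(random.choice(roster))

print(sampled_persons)
```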


No, PIAAC is a voluntary assessment.


In the second round of the U.S. PIAAC data collection in 2014, a monetary incentive of $5 was paid to household representatives who completed the screener, which contained questions that would determine the eligibility of household members to be included in the sample. In the first round in 2012 and the third round in 2017, no monetary incentive was paid to the household representative for completing the screener. The screener incentive used in the second round of data collection was intended to help reduce nonresponse to a screener that was slightly longer than the one used in the first round. Specifically, the second-round screener included various questions about unemployment status that were not included in the first-round screener. After reviewing the 2014 screening results and the logistics required to track incentives, and given the expectation that most PIAAC 2017 households would have at least one participant selected, NCES eliminated the $5 incentive for the third round.

In all three rounds of data collection, following the completion of the assessment, an additional monetary incentive of $50 was paid to each respondent. The incentive was also paid to those adults who attempted to complete the assessment but were legitimately not able to complete it because of language barriers or physical or mental disabilities. Respondents who refused to continue with the assessment were not compensated. No incentives were offered to the 2014 prison sample respondents.


On their own, neither the U.S. PIAAC 2012/2014 sample nor the U.S. PIAAC 2017 sample has enough respondents to produce accurate estimates of adults’ skills at the state or county level. However, using a technique called small area estimation (SAE), NCES was able to develop a model that combines the 2012/2014 and 2017 samples to produce estimates of adults’ skills for all U.S. states and counties.
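As a rough illustration of the general idea behind area-level small area estimation, the sketch below combines a noisy direct county estimate with a model-based (synthetic) prediction, in the spirit of a Fay-Herriot-type composite estimator. The numbers, variable names, and simple shrinkage formula are illustrative assumptions only, not the NCES Skills Map model.

```python
# Illustrative area-level composite (Fay-Herriot-style) estimator.
# NOT the NCES Skills Map model; inputs below are made-up placeholders.

def composite_estimate(direct_est, sampling_var, synthetic_est, model_var):
    """Shrink a noisy direct county estimate toward a model-based prediction;
    counties with larger sampling variance borrow more strength from the model."""
    gamma = model_var / (model_var + sampling_var)
    return gamma * direct_est + (1.0 - gamma) * synthetic_est

# Hypothetical county: a direct estimate from few respondents (large sampling
# variance) and a regression prediction built from county covariates.
direct_est, sampling_var = 264.0, 90.0   # noisy survey estimate
synthetic_est, model_var = 271.0, 25.0   # regression prediction, model variance

print(round(composite_estimate(direct_est, sampling_var, synthetic_est, model_var), 1))
# The result sits closer to the synthetic value because the direct estimate is noisy.
```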

In April 2020, NCES released an interactive, user-friendly mapping tool called the U.S. PIAAC Skills Map: State and County Indicators of Adult Literacy and Numeracy. The Skills Map includes estimates of average literacy and numeracy scores for every U.S. state (including the District of Columbia) and county, as well as the proportions of adults at different PIAAC proficiency levels.


All adults, regardless of immigration status, were part of the PIAAC target population. In order to get a representative sample of the adult population currently residing in the United States, respondents were not asked about citizenship status before taking the assessment and were guaranteed anonymity for all their answers to the Background Questionnaire. Although the assessment was administered only in English, the Background Questionnaire was offered in both Spanish and English. These procedures allowed the estimates to be applicable to all adults in the United States, regardless of citizenship or legal status, and they mitigated the effects of low English language proficiency. The Highlights of the 2017 U.S. Results Web Report provides results specifically for both native-born and non-native-born adults.

As in most participating countries, non-native-born adults in the United States had, on average, lower scores than native-born adults. The percentage of non-native-born adults in the United States in 2012/2014 was 15 percent. The average percentage of non-native-born adults across the 23 participating countries for which data are available was 12 percent, ranging from less than 1 percent in Japan to 29 percent in New Zealand.


PIAAC has improved and expanded on the cognitive frameworks of previous large-scale adult literacy assessments (including the National Assessment of Adult Literacy, National Adult Literacy Survey, International Adult Literacy Survey, and Adult Literacy and Lifeskills Survey) and added an assessment of problem solving via computer, which was not a component of these earlier surveys. In addition, PIAAC capitalizes on prior experiences with large-scale assessments in its approach to survey design and sampling, measurement, data collection procedures, data processing, and weighting and estimation.

The most significant difference between PIAAC and previous large-scale assessments is that PIAAC is administered on laptop computers and is designed to be a computer-adaptive assessment, so respondents receive groups of items targeted to their performance levels (respondents not able to or not wishing to take the assessment on computer are provided with an equivalent paper-and-pencil version of the literacy and numeracy items). Because of these differences, PIAAC introduced a new set of scales to measure adult literacy, numeracy, and problem solving. Some scales from these previous adult assessments have been mapped to the PIAAC scales so that performance can be measured over time.
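As a minimal sketch of what adaptive routing means in practice, the snippet below assigns a respondent to an easier or harder block of items based on performance so far. The block names and thresholds are invented for illustration; PIAAC's actual routing is more elaborate and also draws on background-questionnaire information.

```python
# Minimal sketch of multistage adaptive routing: respondents are directed to an
# easier or harder block of items based on earlier performance.
# Block names and thresholds are hypothetical, not PIAAC's real routing rules.

ITEM_BLOCKS = {
    "easy":   ["L1_item_1", "L1_item_2"],
    "medium": ["L2_item_1", "L2_item_2"],
    "hard":   ["L3_item_1", "L3_item_2"],
}

def next_block(num_correct_so_far: int, num_attempted: int) -> str:
    """Choose the next item block from the share of earlier items answered correctly."""
    share = num_correct_so_far / num_attempted if num_attempted else 0.5
    if share < 0.4:
        return "easy"
    if share < 0.75:
        return "medium"
    return "hard"

# A respondent who answered 5 of 6 routing items correctly receives harder items.
print(ITEM_BLOCKS[next_block(5, 6)])
```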


The results from the National Adult Literacy Survey (NALS), International Adult Literacy Survey (IALS) and the Adult Literacy and Lifeskills Survey (ALL) have been rescaled to match the statistical models used in creating PIAAC scores for literacy and, in the case of ALL, numeracy. Rescaling was possible because PIAAC repeated a sufficient number of the same test questions used in NALS, IALS, and ALL. Rescaled plausible values for literacy for IALS and ALL and numeracy for ALL, along with some trend background questionnaire items, allow for trend analysis with the U.S. PIAAC results along an international trend line. Separately, the rescaled plausible values for literacy for NALS, along with some trend background questionnaire items, allow for trend analysis with the U.S. PIAAC results along a national trend line. For more information on use of the rescaled IALS, ALL, and NALS files, view the blog Rescaled Data Files for Analyses of Trends in Adult Skills.
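Analyses of PIAAC data, including the rescaled NALS, IALS, and ALL files, are based on plausible values rather than single scores: a statistic is computed once with each plausible value and the results are combined, with the variation across plausible values contributing to the standard error. The sketch below illustrates that combination with made-up numbers and ignores the replicate-weight portion of the sampling variance that a real analysis would include.

```python
# Sketch of combining results across plausible values (Rubin's combination rules).
# The per-PV estimates and sampling variances below are made-up numbers; a real
# PIAAC analysis would compute the sampling variance with the survey's replicate weights.
from statistics import mean, variance

pv_estimates = [270.1, 271.4, 269.8, 270.9, 270.5]   # statistic computed once per plausible value
pv_sampling_vars = [1.10, 1.15, 1.08, 1.12, 1.11]    # sampling variance per plausible value

m = len(pv_estimates)
point_estimate = mean(pv_estimates)        # average over plausible values
within_var = mean(pv_sampling_vars)        # average sampling variance
between_var = variance(pv_estimates)       # imputation variance across plausible values
total_var = within_var + (1 + 1 / m) * between_var
standard_error = total_var ** 0.5

print(round(point_estimate, 2), round(standard_error, 2))
```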

The results from the National Assessment of Adult Literacy (NAAL) were not rescaled to match the model used in creating PIAAC scores. For various reasons, including that no literacy items overlap between NAAL and PIAAC, the two are considered the least comparable of the adult literacy assessments and, for practical purposes, not comparable in terms of literacy items.


PIAAC and the Program for International Student Assessment (PISA) both emphasize knowledge and skills in the context of everyday situations, asking students and adults to perform tasks that involve real-world materials as much as possible. PISA is designed to show the knowledge and skills that 15-year-old students have accumulated within and outside of school. It is intended to provide insight into what students who are about to complete compulsory education know and are able to do.

PIAAC focuses on adults who are already eligible to be in the workforce and aims to measure the set of literacy, numeracy, and technology-based problem-solving skills an individual needs in order to function successfully in society. Therefore, PIAAC does not directly measure the academic skills or knowledge that adults may have learned in school. Instead, the PIAAC assessment focuses on tasks that adults may encounter in their lives at home, at work, or in their community.


The PIAAC skills results (i.e., proficiency levels) do not specifically correspond to measures such as grade levels at school. The PIAAC proficiency levels reflect a use-oriented conception of competency and focus on describing what types of tasks adults at each level can typically do and their ability to apply information from the task to accomplish goals they may encounter in everyday life; for example, identifying a job search result that meets certain criteria. PIAAC is not designed to measure specific outcomes of schooling, including what students would be expected to learn in a particular grade or skills they would be expected to have mastered before progressing to a higher grade level, such as the ability to read or comprehend a particular text or use certain subskills like alphabetics and vocabulary. Additionally, grade-level equivalents may be unsuitable for characterizing the skills of adults, who often have uneven skill development across different areas.

While NCES, as well as the Organization for Economic Cooperation and Development (OECD), which coordinates the PIAAC study internationally, does not have an official mapping between proficiency levels and grade levels at school, the PIAAC proficiency levels are defined clearly and there is information to help one better understand them. A general overview of the literacy and numeracy proficiency levels and what adults at different levels can do is available here (in the second FAQ). A more detailed description of what adults at each of these proficiency levels can do—and the types of tasks that adults at these proficiency levels can successfully complete—is available in the NCES PIAAC Literacy section here. Example items for various proficiency levels, which could help one to understand these levels, are available in the Sample Items section on the NCES What PIAAC Measures page or in the Sample PIAAC Tasks presentation from the Outreach Toolkits on the PIAAC Gateway website.

Adaptive problem solving (APS) replacing problem solving in technology-rich environments (PS-TRE)

In addition to the literacy and numeracy domains, PIAAC Cycle II includes a new domain called adaptive problem solving (APS), replacing Cycle I’s problem solving in technology-rich environments domain. APS will measure the ability to achieve one’s own goals in a dynamic situation in which a method for reaching a solution is not directly available (for more information, see FAQ 16, Why is adaptive problem solving (APS) replacing problem solving in technology-rich environments (PS-TRE) in PIAAC Cycle II? How do APS and PS-TRE compare?).

Numeracy components

Furthermore, Cycle II includes a new measure of numeracy component skills, as well as the measure of reading components that was included in Cycle I. These numeracy component tasks focus on number sense and provide more detailed information about adults with low numeracy skills.

New sections of background questionnaire including financial literacy

The Cycle II background questionnaire includes some trend questions related to demographic characteristics, education, and employment. In addition, two new components, Social Emotional Skills and Quality of Work Environment, have been added to the Cycle II international questionnaire; note, however, that the U.S. questionnaire does not include the section on Social Emotional Skills. The U.S. background questionnaire does include a new section on financial literacy, collecting information on the financial knowledge and behavior of U.S. adults.

Administration on tablets

Additionally, all respondents in PIAAC Cycle II take the assessment on a digital tablet, whereas Cycle I was administered either on a laptop computer or in a paper-and-pencil version.


One of the main justifications for introducing APS is that “the rapid changes in the social, physical, and technological world require individuals to be more vigilant to changes, more adaptive, and more willing to modify their plans in pursuit of their goals” and therefore “competence to solve problems and to adapt to changing conditions is of crucial importance in the 21st century.” (OECD)

The PS-TRE assessment in PIAAC Cycle I (also referred to as digital problem solving or DPS) focused mainly on measuring proficiency in the use of specific digital applications to access, search, manage, interpret, and evaluate information. APS focuses on rapidly changing work and everyday life, underlining that problem solving is a process that takes place in complex environments, with solutions requiring constant adaptation to new situations rather than a static sequence of pre-set steps. Additionally, whereas all PS-TRE tasks require the use of computer applications, APS tasks do not necessarily do so.

The following three core aspects distinguish APS from PS-TRE:

  1. APS focuses on the need for skills that enable adults to adjust their thinking and reasoning to novel and changing information, such as a change in the nature of the problem or unexpected difficulties.
  2. Radical changes in digital technologies and communication media have changed characteristics of the types of problems that individuals encounter in their work and daily life, both online and offline. The amount of information, and the shift in the information environment that people are confronted with, is reflected in APS.
  3. In a highly adaptive complex environment, APS focuses more on metacognitive processes where problem solvers need to have an “ability to calibrate one’s comprehension of the problem, evaluate potential solutions, and monitor progress towards the goals.”

Further details on the Adaptive Problem Solving (APS) domain of PIAAC Cycle II, including example items and a comparison with the Digital Problem Solving domain of PIAAC Cycle I, can be found in the OECD PIAAC Cycle II Assessment Frameworks. Definitions and example items for the PS-TRE domain can be found on the NCES What PIAAC Measures page.


PIAAC in other countries

The design and implementation of PIAAC was guided by technical standards and guidelines developed by literacy experts to ensure that the survey yielded high-quality and internationally comparable data. For example, for their survey operations, participating countries were required to develop a quality assurance and quality control program that included information about the design and implementation of the PIAAC data collection. In addition, all countries were required to adhere to recognized standards of ethical research practices with regard to respect for respondent privacy and confidentiality, the importance of ethics and scientific rigor in research involving human subjects, and the avoidance of practices or methods that might harm or seriously mislead survey participants. Compliance with the technical standards was mandatory and monitored throughout the development and implementation phases of the data collection through direct contact, submission of evidence that required activities had been completed, and ongoing collection of data from countries concerning key aspects of implementation.

In addition, participating countries provided standardized training to the interviewers who administered the assessment in order to familiarize them with survey procedures that would allow them to administer the assessment consistently across respondents and reduce the potential for erroneous data. After the data collection process, the quality of each participating country's data was reviewed prior to publication. The review was based on the analysis of the psychometric characteristics of the data and evidence of compliance with the technical standards.


Sampling is carefully planned and monitored. The rules of participation require that countries design a sampling plan that meets the standards in the PIAAC Technical Standards and Guidelines and submit it to the PIAAC Consortium for approval. In addition, countries are required to complete quality control forms to verify that their sample was selected in an unbiased and randomized way. Quality checks are performed by the PIAAC Consortium to ensure that the submitted sampling plans have been followed accurately.


The PIAAC results are nationally representative and therefore reflect countries as they are: highly diverse or not. PIAAC collects extensive information about respondents' backgrounds and therefore supports analyses that take into account differences in the level of diversity across countries. International PIAAC reports produced by the OECD present some analyses that examine issues of diversity.


As an international assessment of adult competencies, PIAAC differs from student assessments in several ways. PIAAC assesses a wide range of ages (16–65), whereas student assessments target a specific age (e.g., 15-year-olds in the case of the Program for International Student Assessment (PISA)) or grade (e.g., grade 4 in the Progress in International Reading Literacy Study (PIRLS)). PIAAC is a household assessment (i.e., an assessment administered in individuals' homes), whereas the international student assessments (PIRLS, PISA, and Trends in International Mathematics and Science Study (TIMSS)) are conducted in schools. The skills that are measured in each assessment also differ based on the goals of the assessment. Both TIMSS and PIRLS are curriculum based and are designed to assess what students have been taught in school in specific subjects (such as science, mathematics, or reading) using multiple-choice and open-ended test questions. In contrast, PIAAC and PISA are “literacy” assessments, designed to measure performance in certain skill areas at a broader level than school curricula. So while TIMSS and PIRLS aim to assess the particular academic knowledge that students are expected to be taught at particular grades, PISA and PIAAC encompass a broader set of skills that students and adults have acquired throughout life.


Each country can collect data for subgroups of the population that have national importance. In some countries, these subgroups are identified by language usage; in other countries, they are distinguished by tribal affiliation. In the United States, different racial and ethnic subgroups are of national importance. However, categories of race and ethnicity are social and cultural categories that differ greatly across countries. As a result, they cannot be compared accurately across countries.


PIAAC collects extensive information on educational attainment and years of schooling. For the purpose of cross-country comparisons of educational attainment, the education level classifications of each country are standardized using the International Standard Classification of Education (ISCED). For example, the ISCED level for short-cycle tertiary education (ISCED level 5) is equivalent to an associate degree in the United States; therefore, comparisons of adults with an associate degree or its equivalent can be made across countries using this classification. Please note that the education variables in PIAAC 2012 were classified using the ISCED97. Additional education variables that were classified using ISCED11 are available in the U.S. PIAAC 2012/2014 and 2017 datasets.
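In practice, a cross-country comparison of attainment amounts to recoding each country's national categories into the common ISCED levels before tabulating. The toy sketch below illustrates the idea; apart from the associate-degree example mentioned above, the specific category-to-level mappings shown are simplified assumptions.

```python
# Toy illustration of harmonizing national attainment categories to ISCED levels
# before comparing countries. Only the associate-degree mapping is taken from the
# text above; the other entries are simplified examples.
US_TO_ISCED11 = {
    "high school diploma": 3,   # upper secondary (illustrative)
    "associate degree": 5,      # short-cycle tertiary (ISCED level 5)
    "bachelor's degree": 6,     # illustrative
}

def isced_level(national_category: str) -> int:
    """Map a national attainment category to its ISCED level."""
    return US_TO_ISCED11[national_category]

print(isced_level("associate degree"))  # 5, comparable across countries
```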


PIAAC rounds of data collection in the United States

First round: the 2012 PIAAC Main Study

The first round of data collection in the United States, which was conducted in 2011–12 and surveyed adults ages 16–65, provided initial results on the skills of U.S. adults. The sample from this round of data collection is now combined with the sample from the second round of the U.S. data collection in 2014 to produce more in-depth analyses.

Second round: the 2014 PIAAC National Supplement

The National Supplement, conducted in 2013–14, was the second round of data collection for PIAAC in the United States; it followed the Main Study, the first round of data collection, which was conducted in 2011–12 and surveyed adults ages 16–65. The National Supplement increased the number of unemployed adults (ages 16–65) and young adults (ages 16–34) in the sample and added older adults (ages 66–74) as well as incarcerated adults (ages 16–74).


The second round of data collection for PIAAC in the United States was conducted to augment the first round of PIAAC data by increasing the sample size. When the data from both rounds are combined, more in-depth analyses of the cognitive and workplace skills of the U.S. population can be produced (in particular, of unemployed and young adults, who were oversampled in the second round).

Third round: the 2017 National Data Collection

The PIAAC 2017 national data collection was conducted for two main purposes. First, the 2017 data collection was designed to provide a second point in time for comparisons with the 2012/2014 data. Second, by combining all three rounds of U.S. PIAAC data (2012, 2014, and 2017), NCES was able to develop estimates of adults' skills at the state and county levels. The 2012/2014 sample alone was not large enough to produce direct estimates of skills for smaller geographic areas.

All three rounds

In all three rounds of PIAAC in the United States, the same instruments and procedures, including the Background Questionnaire and Direct Assessment, were used for the household survey, with the exception of several new questions added to the Background Questionnaire in 2017. These new questions in 2017 included items on GED completion, certifications and licenses, whether a full-time job or permanent contract was preferred, military service, and household income. For the prison study, the Background Questionnaire was modified to collect information related to the needs and experiences of incarcerated adults.

However, the data collections did sample different populations. The first round of data collection surveyed a nationally representative sample of adults ages 16–65, while the second round did not survey a nationally representative sample of adults, but rather only the key subgroups of interest. The second round of PIAAC also surveyed two subgroups of the population that were not part of the first round of data collection: older adults (ages 66–74) and incarcerated adults (ages 16–74). Note that the two household samples from the first and second rounds of data collection were combined to provide a nationally representative sample of 16- to 74-year-old adults across the period of data collection (2011–2014). The third round of data collection surveyed a nationally representative sample of adults ages 16–74.


The first round of data collection for PIAAC (in 2012) had a sample of 5,010 U.S. adults (ages 16–65) and did not oversample any specific subgroups of interest. The second round of data collection for PIAAC (in 2014) sampled 3,660 U.S. adults who were unemployed (ages 16–65), young (ages 16–34), or older (ages 66–74). The household sample selection in the second round differed from the first and third rounds in that only persons in the target groups were selected. The sampling approach in the second round consisted of an area sample that used the same primary sampling units (PSUs) as in the first round; in addition, it included a list sample of dwelling units from high-unemployment Census tracts in order to obtain the oversample of unemployed adults. When the data from the first and second rounds are combined, they produce a nationally representative sample with larger subgroup sample sizes that can produce estimates of higher precision for the subgroups of interest.

The third round of data collection for PIAAC (in 2017) had a sample of 3,660 U.S. adults (ages 16–74) and, similar to the first round, did not oversample any specific subgroups of interest. The sample design for the third round minimized overlap with the PIAAC 2012/2014 PSUs in order to add sample cases from counties with different demographic characteristics. This was done in order to optimize the combined 2012/14/17 sample for county-level estimation.


U.S. State and County Estimates

See State and County Estimates FAQs at the U.S. State and County Estimates Resources page.


Additional U.S. PIAAC data

U.S. PIAAC Combined 2012/2014 Data

The United States conducted two rounds of data collection for PIAAC, but not two independent studies. The first and second rounds of data collection are meant to be combined and analyzed together; they cannot be compared with each other.

Because of the timing of the first and second rounds of the PIAAC data collection in the United States, the information available for each round’s sampling frame differed. Specifically, the 2012 data were based on the 2000 U.S. Census, while the 2014 data were based on the 2010 U.S. Census. Therefore, in addition to the larger combined sample (8,670 household cases), the improved accuracy of estimates is due in part to the revised population estimates based on the 2010 Census data, which were unavailable when PIAAC 2012 went into the field.

For the 2012 data collection, weights for all respondents were calibrated to the U.S. Census Bureau's 2010 American Community Survey population totals for those ages 16–65. (The 2010 American Community Survey population totals were derived from 2000 U.S. Census projections because the full 2010 U.S. Census population results were not yet available.) Once the 2010 U.S. Census population results were finalized, the U.S. Census Bureau refreshed its entire time series of estimates going back to the previous census, as it does each year, using the most current data and methodology. One result of this refresh is a shift in the proportion of the population with more education.

A comparison of the population totals used to calibrate the 2012 Main Study data with those used to calibrate the composite 2012/2014 dataset reveals that the percentage of the U.S. population ages 16–65 with college experience (some college or a college degree) increased by 3 to 4 percent and the percentage of the population ages 16–65 with less than a high school diploma decreased by 4 percent. This change has no effect on PIAAC's measurement of skills in the United States, but it does mean that the proportion of the population with higher skills has been found to be larger than previously estimated for the 2012 Main Study. Therefore, adults' skills did not change in this time period, but due to the larger sample and the updated Census data, the estimates of skills reported with the combined 2012/2014 sample are more accurate.
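The recalibration described above is, in essence, a poststratification step: respondents' weights are scaled so that they sum to the updated population totals within education (and other) groups. The sketch below shows a one-dimensional version of that idea with invented groups and totals; the actual PIAAC calibration uses many more dimensions and official ACS/Census control totals.

```python
# Toy sketch of calibrating survey weights to external population totals
# (one-dimensional poststratification). Groups, weights, and totals are invented.

respondents = [
    {"educ": "less than HS", "weight": 1000.0},
    {"educ": "less than HS", "weight": 1200.0},
    {"educ": "college",      "weight":  900.0},
    {"educ": "college",      "weight": 1100.0},
]

# Updated (hypothetical) population totals after a census-based revision:
population_totals = {"less than HS": 1_800_000, "college": 2_600_000}

# Sum the current weights within each education group.
group_sums = {}
for r in respondents:
    group_sums[r["educ"]] = group_sums.get(r["educ"], 0.0) + r["weight"]

# Scale weights within each group so they add up to the new control totals.
for r in respondents:
    r["weight"] *= population_totals[r["educ"]] / group_sums[r["educ"]]

print({g: round(sum(r["weight"] for r in respondents if r["educ"] == g))
       for g in population_totals})
```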


The combined 2012/2014 U.S. household sample of all adults ages 16–65 can be compared to the samples from other countries that participated in PIAAC. Two of the additional subsamples that were a focus of the National Supplement can also be compared to international samples: the sample of younger adults ages 16–34 and unemployed adults ages 16–65.

Two of the other household samples are unique to the U.S. supplemental study and cannot be compared to samples from other countries: the sample of older adults ages 66–74 and the total sample of adults ages 16–74.

U.S. PIAAC 2017 Data

In national U.S. reporting, the 2012/2014 sample will be used as the point of comparison to the samples of other countries that participated in Cycle I (Rounds 1, 2, and 3). Although the 2017 sample is also nationally representative, NCES has decided to use the 2012/2014 data in order to keep a single point of comparison for the United States and to keep the reporting of where the United States stands internationally and the international average more consistent with previous NCES reporting. Additionally, using 2012/2014 data rather than 2017 data provides a larger sample size and data collected at a time point closer to when the majority of the Cycle I participating countries collected their data.

In international reporting by the OECD, both U.S. 2012/2014 and 2017 data are included in international comparisons, with the two U.S. time points included as two separate data points.


The Highlights of 2017 U.S. Results Web Report includes 2017 U.S. PIAAC data. The NCES International Data Explorer (IDE) has been updated to allow users to conduct analyses with both the U.S. PIAAC 2017 and 2012/2014 data. Additionally, the U.S. PIAAC 2017 public-use data files and restricted-use data files are now available. The international report published by the OECD, Skills Matter: Additional Results from the Survey of Adult Skills, includes U.S. 2017 and 2012/2014 data.


The PIAAC international averages in the U.S. 2012 PIAAC First Look report were calculated by the OECD using restricted data from all countries participating in Cycle I, Round 1. However, restricted data from Australia and Canada are not available to the United States because of national restrictions on the use of their data. Thus, with the exception of figures 1 and 2, the PIAAC international averages in the U.S. 2012/2014 PIAAC First Look report were calculated (a) with available data from Round 1 countries, (b) without Australia's data, (c) with Canada's publicly available data, and (d) with the combined 2012 and 2014 U.S. data. Differences in the international averages calculated for the U.S. 2012 PIAAC First Look report and the U.S. 2012/2014 PIAAC First Look report are very small, but some estimates round differently.

With the release of data from countries participating in Cycle I, Round 2, the PIAAC international averages in the U.S. PIAAC International Highlights Web Report were calculated (a) with available data from Round 1 and Round 2 countries, (b) without Australia's data, (c) with Canada's publicly available data, and (d) with the 2012/2014 U.S. data. As there are several new countries included in the international average, some estimates of this average and how the U.S. results compare to it may be different from estimates included in the U.S. 2012 and U.S. 2012/2014 First Look reports.


U.S. PIAAC Study of Incarcerated Adults

The PIAAC Prison Study is an assessment of the literacy, numeracy, and digital problem-solving skills of incarcerated adults between the ages of 18 and 74 in U.S. prisons. The study was administered to approximately 1,300 incarcerated adults. The prison sample is nationally representative of the approximately 1.5 million adults in state and federal prisons and in private prisons housing state and federal inmates. The results provide comprehensive information on the skills and background of the U.S. adult prison population. Where applicable, the results compare the skills of U.S. incarcerated and household populations across different characteristics, including age, race, gender, educational attainment, language spoken at home before starting school, and parents' educational attainment.


The goal of the PIAAC Prison Study was to provide detailed nationally representative data on the skills of incarcerated adults for researchers, correctional administrators, and policymakers to

  • allow them to craft sound policies that may result in the development of education and training policies and programs that could improve opportunities for incarcerated adults; and
  • enable further research and analyses that support improvements in incarcerated adults' ability to function in society upon their release from prison and, consequently, to reduce recidivism.

Currently, no other countries have used PIAAC to assess the skills of their incarcerated adults.


No, NCES has conducted two previous studies of incarcerated adults. The first was conducted in the early 1990s as part of the National Adult Literacy Survey (NALS) and the second in the early 2000s as part of the National Assessment of Adult Literacy (NAAL). Thus, the PIAAC Prison Study is the third such study of the U.S. incarcerated population, and it assesses a broader range of skills than the previous studies. Results from the previous studies can be found in Literacy Behind Prison Walls and Literacy Behind Bars: Results From the 2003 National Assessment of Adult Literacy Prison Survey.


Yes, the PIAAC Prison Study is based on a nationally representative sample of incarcerated adults in state and federal prisons and in private prisons housing state and federal inmates. A two-stage sample design with random sampling methods at each stage was used to select the inmates. In the first stage of sampling, 100 prisons were selected, of which 98 participated (80 were male-only or co-ed and 18 were female-only). A prison's probability of selection took into account whether or not it housed only female inmates. In the second stage of selection, inmates were randomly selected from a listing of inmates occupying a bed the previous night or, for prisons operated by the Bureau of Prisons, from a roster of inmates provided a week before the visit. Approximately 15 inmates, on average, were selected from each sampled facility.


Facilities were included in the sample if they

Based on the recommendation of adult corrections experts, the following types of facilities and institutions were excluded:

  • private facilities not primarily for state or federal inmates;
  • military facilities;
  • Immigration and Customs Enforcement (ICE) facilities;
  • Bureau of Indian Affairs facilities;
  • facilities operated by or for local government, including those housing state prisoners;
  • facilities operated by the United States Marshals Service;
  • hospital wings and wards reserved for state prisoners;
  • facilities that hold only juveniles; and
  • community corrections facilities (such as halfway-houses, boot camps, weekend programs, and other entities in which individuals are locked up overnight).

Even though juvenile facilities contain inmates up to age 21, they were excluded from the PIAAC prison sample for two reasons: (1) to be consistent with the facilities listed in the 2005 Prison Census (Bureau of Justice Statistics Census of State and Federal Adult Correctional Facilities) and (2) to be cost effective. It would not be cost effective to visit these facilities to sample the small number of inmates 16 years of age and older (approximately 24,000) when there are 1.5 million adult inmates in state or federal correctional facilities.


Female-only prisons were oversampled in order to ensure an adequate sample size of female inmates, to provide estimates of the skills of this group, and to permit comparisons with male inmates.


The Prison Study sampling frame was created from two data sources:

  1. the most recent (2005) Bureau of Justice Statistics Census of State and Federal Adult Correctional Facilities (referred to as the Prison Census) and
  2. the most recent (2012) Directory of Adult and Juvenile Correctional Departments, Institutions, Agencies, and Probation and Parole Authorities available from the American Correctional Association (ACA) (referred to as the ACA Directory).

The same direct assessment of literacy, numeracy, and problem solving in technology-rich environments used with the U.S. household sample was used with the prison sample.

The 2014 Prison Background Questionnaire (also administered in Spanish) was designed to collect information related to the needs and experiences of incarcerated adults based on recommendations of a prison expert panel. Adaptations to the questionnaire for the prison sample included (a) deleting questions that would be irrelevant to respondents in prison; (b) editing question wording or response options to make them relevant to respondents' experience in prison or prior to their incarceration; and (c) adding questions that addressed respondents' specific activities in prison (e.g., participation in academic programs and English as a Second Language (ESL) classes; experiences with prison work assignments; involvement in nonacademic programs, such as life skills). Several of the prison-specific questions were adopted from the National Assessment of Adult Literacy (NAAL) 2003 Prison Background Questionnaire.


Prisons were sampled approximately 9 months prior to the start of data collection, which took place from February through June of 2014. The permission and cooperation of federal, state, and correctional facility officials were required before data collection could begin. State- and prison-level contacts were approached approximately 6 months prior to the start of data collection. By the time an interviewer entered any correctional institution, the project negotiator had already obtained that facility's approval for participation, established a contact within the facility, and finalized interviewing arrangements.

Similar to the household study, once the sample was selected, the Background Questionnaire interview was conducted by the interviewer in English or Spanish in a private setting (provided by the prison authorities in the case of the Prison Study). Upon completion of the Background Questionnaire, the respondent was given either the paper-and-pencil or the computer-based assessment, based on their computer experience, willingness to take the assessment on computer, and performance on a simple computer familiarity test. The majority of inmates (61 percent) took the direct assessment on laptop computers; 37 percent took the paper-based assessment.


The overall weighted response rate for the prison sample was 82 percent. The prison-level response rate was 98 percent without substitute prisons and 100 percent with substitute prisons. The final response rate for the Background Questionnaire—which included respondents who completed it and respondents who were unable to complete it because of a literacy-related barrier—was 86 percent (weighted). The final response rate for the overall assessment was 98 percent (weighted).
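A weighted response rate differs from a simple count-based rate in that each sampled case contributes its base (selection) weight rather than counting equally. The sketch below shows the basic calculation with invented weights; the published PIAAC rates are computed from the actual survey weights and eligibility statuses.

```python
# Minimal sketch of a weighted response rate: the sum of base weights of
# respondents divided by the sum of base weights of all eligible sampled cases.
# Weights and statuses below are invented for illustration.

eligible_cases = [
    {"base_weight": 520.0, "responded": True},
    {"base_weight": 480.0, "responded": True},
    {"base_weight": 610.0, "responded": False},
    {"base_weight": 390.0, "responded": True},
]

responding_weight = sum(c["base_weight"] for c in eligible_cases if c["responded"])
eligible_weight = sum(c["base_weight"] for c in eligible_cases)

print(f"Weighted response rate: {100 * responding_weight / eligible_weight:.1f}%")
```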
