Frequently Asked Questions

In addition to the following questions about PIAAC, more FAQs about international assessments are available at: http://nces.ed.gov/surveys/international/faqs.asp.

General information on PIAAC

  1. What is assessed in PIAAC?
  2. How valid is PIAAC? Are assessment questions that are appropriate for the population in one country necessarily appropriate for the population in another country?
  3. How can you be sure that countries administer the test in the same way?
  4. What does problem solving test or measure?
  5. What are "technology-rich environments"?
  6. Why does PIAAC assess literacy only in English?
  7. How does PIAAC select a representative sample of adults?
  8. Were immigrants, illegal immigrants, or non-English speakers assessed in PIAAC? Did they bring down our scores?
  9. What if some countries select only their highest performing adults to participate in PIAAC? Won't they look better than the other participating countries?
  10. Are adults required to participate in PIAAC?
  11. How do international assessments deal with the fact that adult populations in participating countries are so different? For example, the United States has higher percentages of immigrants than some other countries.
  12. How does PIAAC differ from international student assessments?
  13. How does PIAAC differ from earlier adult literacy assessments, such as NALS, IALS, NAAL, and ALL?
  14. How do PIAAC and PISA compare?
  15. Why doesn't PIAAC report differences between minorities in the United States and minorities in other countries?
  16. Can the PIAAC data be used to report scores for states?
  17. How do international assessments deal with the fact that education systems are so different across countries?

About the second round of data collection in the United States (the PIAAC National Supplement)

  18. What is the PIAAC National Supplement?
  19. Why was a second round of PIAAC data collected in the United States?
  20. What are the differences between the first round of data collection for PIAAC in 2012 (the Main Study) and the second round in 2014 (the National Supplement)?
  21. What is the scope of the household sample in the 2014 National Supplement? Is it different from the scope of the household sample in the 2012 Main Study?
  22. What is the scope of the prison sample in the second round of data collection?
  23. Were the same instruments and procedures used in the first and second rounds of data collection? Were the same instruments and procedures used for the household samples and the prison sample?
  24. What incentives were given to participants?

About the updated U.S. PIAAC data (the combined 2012/2014 dataset)

  25. How and why are the current U.S. results (from the combined 2012/2014 dataset) different from the results from the PIAAC Main Study in 2012? Why did the U.S. household scores and ranking change? Did it change because the skills of U.S. adults improved or declined between 2012 and 2014?
  26. How and why are the international averages reported in the 2014 First Look report different from the averages reported in the 2012 PIAAC First Look report?
  27. Can the combined 2012/2014 U.S. household sample be compared to the samples in other countries? Which subsamples can be compared? Which cannot?
  28. Do all of the currently available estimates include data from the 2014 National Supplement?
  29. When will the results from the U.S. supplemental study of prisons become available?

General information on PIAAC

1. What is assessed in PIAAC?

PIAAC is designed to assess adults over a broad range of abilities: from simple reading to complex computer-based problem-solving skills. All countries that participated in PIAAC in 2012 assessed the domains of literacy and numeracy in both a paper-and-pencil mode and a computer-administered mode. In addition, some countries assessed problem solving (administered on a computer) as well as components of reading (administered only in a paper-and-pencil mode). The United States assessed all four domains.

2. How valid is PIAAC? Are assessment questions that are appropriate for the population in one country necessarily appropriate for the population in another country?

The assessment was designed to be valid cross-culturally and cross-nationally.

PIAAC assessment questions are developed in a collaborative, international process. PIAAC assessment questions were based on frameworks developed by internationally known experts in each subject or domain. Assessment experts and developers from ministries/departments of education and labor and OECD staff participated in the conceptualization, creation, and extensive year-long reviews of assessment questions. In addition, the PIAAC Consortium's support staff, assisted by expert panels, researchers, and working groups, developed PIAAC's Background Questionnaire. The PIAAC Consortium also guided the development of common standards and procedures for collecting and reporting data, as well as the international "virtual machine" software that administers the assessment uniformly across countries. All PIAAC countries follow the common standards and procedures and use the virtual machine software when conducting the survey and assessment. As a result, PIAAC can provide a reliable and comparable measure of literacy skills in the adult population of participating countries.

Before the administration of the assessment, a field test was conducted in the participating countries. The PIAAC Consortium analyzed the field-test data and implemented changes to eliminate problematic test items or revise procedures prior to the administration of the assessment.

3. How can you be sure that countries administer the test in the same way?

The design and implementation of PIAAC was guided by technical standards and guidelines developed by literacy experts to ensure that the survey yielded high-quality and internationally comparable data. For example, for their survey operations, participating countries were required to develop a quality assurance and quality control program that included information about the design and implementation of the PIAAC data collection. In addition, all countries were required to adhere to recognized standards of ethical research practices with regard to respect for respondent privacy and confidentiality, the importance of ethics and scientific rigor in research involving human subjects, and the avoidance of practices or methods that might harm or seriously mislead survey participants. Compliance with the technical standards was mandatory and monitored throughout the development and implementation phases of the data collection through direct contact, submission of evidence that required activities had been completed, and ongoing collection of data from countries concerning key aspects of implementation.

In addition, participating countries provided standardized training to the interviewers who administered the assessment in order to familiarize them with survey procedures that would allow them to administer the assessment consistently across respondents and reduce the potential for erroneous data. After the data collection process, the quality of each participating country's data was reviewed prior to publication. The review was based on the analysis of the psychometric characteristics of the data and evidence of compliance with the technical standards.

4. What does problem solving test or measure?

The "problem solving in technology-rich environments" domain assesses the cognitive processes of problem solving: goal setting, planning, selecting, evaluating, organizing, and communicating results. In a digital environment, these skills involve understanding electronic texts, images, graphics, and numerical data, as well as locating, evaluating, and critically judging the validity, accuracy, and appropriateness of the accessed information.

5. What are "technology-rich environments"?

The environment in which PIAAC problem solving is assessed is meant to reflect the fact that digital technology has changed the ways in which individuals live their day-to-day lives, communicate with others, work, conduct their affairs, and access information. Information and communication technology tools such as computer applications, the Internet, and mobile technologies are all part of the environments in which individuals operate. In PIAAC, items for problem solving in technology-rich environments are presented on laptop computers in simulated software applications using commands and functions commonly found in e-mail, web browsers, and spreadsheets.

6. Why does PIAAC assess literacy only in English?

PIAAC assesses adults in the official language or languages of each participating country. Based on a 1988 congressional mandate and the 1991 National Literacy Act, the U.S. Department of Education is required to evaluate the status and progress of adults' literacy in English. However, in order to obtain background information from a wide range of respondents in the United States, the PIAAC Background Questionnaire was administered in both English and Spanish.

7. How does PIAAC select a representative sample of adults?

Countries that participate in PIAAC must draw a sample of individuals ages 16-65 that represents the entire population of adults living in households in the country. Some countries draw their samples from national registries of all persons in the country; others draw their samples from census data. In the United States, a nationally representative household sample was drawn from the most current Census Bureau population estimates.

The U.S. sample design employed by PIAAC in the first round of U.S. data collection is generally referred to as a four-stage stratified area probability sample. This method involves the selection of (1) primary sampling units (PSUs) consisting of counties or groups of contiguous counties, (2) secondary sampling units (referred to as segments) consisting of area blocks, (3) dwelling units (DUs), and (4) eligible persons (the ultimate sampling unit) within DUs. Random selection methods are used, with calculable probabilities of selection at each stage of sampling. This sample design ensured the production of reliable statistics for a minimum of 5,000 completed cases for the first round of data collection. For more information about the sample design used in the second round of U.S. data collection, see question 21.
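
To make the four-stage selection concrete, below is a minimal, purely illustrative Python sketch of drawing persons through the four stages described above. All names and counts are invented, and every stage uses simple random selection; an actual area sample would typically select PSUs and segments with probability proportional to size and use stratification, which this sketch omits. The base weight (the inverse of the inclusion probability) is only the starting point for survey weights, which are later adjusted for nonresponse and calibrated to population totals.

    import random

    random.seed(0)  # reproducibility for this illustration only

    # Invented sampling frame: counties (PSUs) -> area blocks (segments)
    # -> dwelling units (DUs) -> eligible persons.
    frame = {
        f"county_{c}": {
            f"block_{c}_{b}": [
                {"du": f"du_{c}_{b}_{d}",
                 "persons": [f"person_{c}_{b}_{d}_{p}"
                             for p in range(random.randint(1, 4))]}
                for d in range(20)
            ]
            for b in range(10)
        }
        for c in range(50)
    }

    def four_stage_sample(frame, n_psu=5, n_seg=3, n_du=4):
        """Select one eligible person per sampled DU, tracking selection probabilities."""
        sample = []
        psus = random.sample(sorted(frame), n_psu)                    # stage 1: counties
        for psu in psus:
            p_psu = n_psu / len(frame)
            segments = random.sample(sorted(frame[psu]), n_seg)       # stage 2: area blocks
            for seg in segments:
                p_seg = n_seg / len(frame[psu])
                dus = random.sample(frame[psu][seg], n_du)            # stage 3: dwelling units
                for du in dus:
                    p_du = n_du / len(frame[psu][seg])
                    person = random.choice(du["persons"])             # stage 4: eligible person
                    p_person = 1 / len(du["persons"])
                    prob = p_psu * p_seg * p_du * p_person            # overall inclusion probability
                    sample.append({"person": person, "base_weight": 1 / prob})
        return sample

    selected = four_stage_sample(frame)
    print(len(selected), "sampled persons; first:", selected[0])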

8. Were immigrants, illegal immigrants, or non-English speakers assessed in PIAAC? Did they bring down our scores?

All adults, regardless of immigration status, were part of the PIAAC Main Study's target population for the assessment. In order to get a representative sample of the adult population currently residing in the United States, respondents were not asked about citizenship status before taking the assessment and were guaranteed anonymity for all their answers to the Background Questionnaire. Although the assessment was administered only in English, the Background Questionnaire was offered in both Spanish and English. These procedures allowed the estimates to be applicable to all adults in the United States, regardless of citizenship or legal status, and they mitigated the effects of low-English language proficiency.

As in most participating countries, non-native-born adults in the United States had, on average, lower scores than native-born adults. The percentage of non-native-born adults in the United States was 15 percent. The average percentage of non-native-born adults across all participating countries was 12 percent, ranging from less than 1 percent in Japan to 28 percent in Australia.

9. What if some countries select only their highest performing adults to participate in PIAAC? Won't they look better than the other participating countries?

Sampling is carefully planned and monitored. The rules of participation require that countries design a sampling plan that meets the standards in the PIAAC Technical Standards and Guidelines and submit it to the PIAAC Consortium for approval. In addition, countries were required to complete quality control forms to verify that their sample was selected in an unbiased and randomized way. Quality checks were performed by the PIAAC Consortium to ensure that the submitted sampling plans were followed accurately.

10. Are adults required to participate in PIAAC?

No, PIAAC is a voluntary assessment.

11. How do international assessments deal with the fact that adult populations in participating countries are so different? For example, the United States has higher percentages of immigrants than some other countries.

The PIAAC results are nationally representative and therefore reflect countries as they are: highly diverse or not. PIAAC collects extensive information about respondents' background and therefore supports analyses that take into account differences in the level of diversity across countries. The international PIAAC report produced by the OECD presents some analyses that examine issues of diversity.

12. How does PIAAC differ from international student assessments?

As an international assessment of adult competencies, PIAAC differs from student assessments in several ways. PIAAC assesses a wide range of ages (16-65), whereas student assessments target a specific age (e.g., 15-year-olds in the case of PISA) or grade (e.g., grade 4 in PIRLS). PIAAC is a household assessment (i.e., an assessment administered in individuals' homes), whereas the international student assessments (PIRLS, PISA, and TIMSS) are conducted in schools. The skills that are measured in each assessment also differ based on the goals of the assessment. Both TIMSS and PIRLS are curriculum based and are designed to assess what students have been taught in school in specific subjects (such as science, mathematics, or reading) using multiple-choice and open-ended test questions. In contrast, PIAAC and PISA are "literacy" assessments, designed to measure performance in certain skill areas at a broader level than school curricula. So while TIMSS and PIRLS aim to assess the particular academic knowledge that students are expected to be taught at particular grades, PISA and PIAAC encompass a broader set of skills that students and adults have acquired throughout life.

13. How does PIAAC differ from earlier adult literacy assessments, such as NALS, IALS, NAAL, and ALL?

PIAAC has improved and expanded on the cognitive frameworks of previous large-scale adult literacy assessments (including NALS, NAAL, IALS, and ALL) and has added an assessment of problem solving via computer, which was not a component of these earlier surveys. In addition, PIAAC is capitalizing on prior experiences with large-scale assessments in its approach to survey design and sampling, measurement, data collection procedures, data processing, and weighting and estimation. The most significant difference between PIAAC and previous large-scale assessments is that PIAAC is administered on laptop computers and is designed to be a computer-adaptive assessment, so respondents receive groups of items targeted to their performance levels (respondents not able to or not wishing to take the assessment on computer are provided with an equivalent paper-and-pencil version of the literacy and numeracy items). Because of these differences, PIAAC introduced a new set of scales to measure adult literacy, numeracy, and problem solving. Some scales from these previous adult assessments have been mapped to the PIAAC scales so that performance can be measured over time.
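
As a rough illustration of the adaptive idea mentioned above (respondents receive groups of items targeted to their performance levels), the sketch below routes a hypothetical respondent to an easier or harder block depending on performance on the previous block. The item pools, thresholds, and routing rules are invented for illustration and are not PIAAC's actual routing algorithm.

    import random

    # Invented item pools at three difficulty levels (not actual PIAAC items).
    ITEM_POOLS = {
        "easy":   [f"easy_item_{i}" for i in range(1, 6)],
        "medium": [f"medium_item_{i}" for i in range(1, 6)],
        "hard":   [f"hard_item_{i}" for i in range(1, 6)],
    }

    def next_difficulty(current: str, proportion_correct: float) -> str:
        """Move up after strong performance, down after weak performance, else stay."""
        order = ["easy", "medium", "hard"]
        idx = order.index(current)
        if proportion_correct >= 0.7:
            idx = min(idx + 1, len(order) - 1)
        elif proportion_correct <= 0.3:
            idx = max(idx - 1, 0)
        return order[idx]

    difficulty = "medium"                          # start in the middle in this sketch
    for stage in range(3):
        block = random.sample(ITEM_POOLS[difficulty], 3)
        score = random.random()                    # stand-in for scored performance on the block
        print(f"stage {stage}: {difficulty} block {block}, score {score:.2f}")
        difficulty = next_difficulty(difficulty, score)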

14. How do PIAAC and PISA compare?

PIAAC and PISA both emphasize knowledge and skills in the context of everyday situations, asking students and adults to perform tasks that involve real-world materials as much as possible. PISA is designed to show the knowledge and skills that 15-year-old students have accumulated within and outside of school. It is intended to provide insight into what students who are about to complete compulsory education know and are able to do.

PIAAC focuses on adults who are already eligible to be in the workforce and aims to measure the set of literacy, numeracy, and technology-based problem-solving skills an individual needs in order to function successfully in society. Therefore, PIAAC does not directly measure the academic skills or knowledge that adults may have learned in school. Instead, the PIAAC assessment focuses on tasks that adults may encounter in their lives at home, at work, or in their community.

15. Why doesn't PIAAC report differences between minorities in the United States and minorities in other countries?

Each country can collect data for subgroups of the population that have national importance. In some countries, these subgroups are identified by language usage; in other countries, they are distinguished by tribal affiliation. In the United States, different racial and ethnic subgroups are of national importance. However, categories of race and ethnicity are social and cultural categories that differ greatly across countries. As a result, they cannot be compared accurately across countries.

16. Can the PIAAC data be used to report scores for states?

In total, 8,670 adults in the United States participated in PIAAC in 2012 and 2014, which is not a large enough sample to produce accurate estimates at the state or county level. Thus, in the United States, PIAAC results can only be reported at the national level. NCES is in the process of reviewing plans for producing state-level (synthetic) estimates.

17. How do international assessments deal with the fact that education systems are so different across countries?

PIAAC collects extensive information on educational attainment and years of schooling. For the purpose of cross-country comparisons of educational attainment, the education level classifications of each country are standardized using the International Standard Classification of Education (ISCED). For example, the ISCED level for short-cycle tertiary education (ISCED level 5) is equivalent to an associate's degree in the United States; therefore, comparisons of adults with an associate's degree or its equivalent can be made across countries using this classification. Please note that the education variables in PIAAC 2012 were classified using the ISCED97. Additional education variables that were classified using the ISCED11 are available in the PIAAC 2012/2014 dataset.
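
For analysts who need to line up PIAAC education variables with U.S. categories, an explicit crosswalk is often convenient. The mapping below is a simplified, illustrative sketch of a few ISCED 2011 levels and rough U.S. equivalents, in the spirit of the associate's-degree example above; it is not an official or complete crosswalk.

    # Simplified, illustrative crosswalk from a few ISCED 2011 levels to rough U.S.
    # equivalents (not an official or complete mapping).
    ISCED11_TO_US = {
        3: "High school diploma or equivalent (upper secondary)",
        5: "Associate's degree (short-cycle tertiary)",
        6: "Bachelor's degree or equivalent",
        7: "Master's degree or equivalent",
    }

    def us_equivalent(isced_level: int) -> str:
        """Return the simplified U.S.-equivalent label used in this sketch."""
        return ISCED11_TO_US.get(isced_level, "Not mapped in this sketch")

    print(us_equivalent(5))  # Associate's degree (short-cycle tertiary)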

About the second round of data collection in the United States (the PIAAC National Supplement)

18. What is the PIAAC National Supplement?

The National Supplement, conducted in 2013–14, was the second round of data collection for PIAAC in the United States; it followed the Main Study, the first round of data collection, which was conducted in 2011–12 and surveyed adults ages 16-65. The National Supplement increased the number of unemployed adults (ages 16-65) and young adults (ages 16-34) in the sample and added older adults (ages 66-74) as well as incarcerated adults (ages 16-74).

19. Why was a second round of PIAAC data collected in the United States?

The second round of data collection for PIAAC in the United States was conducted for two reasons. First, augmenting the first round of PIAAC data by increasing the sample size permits more in-depth analyses of the cognitive and workplace skills of the U.S. population (in particular, of unemployed and young adults). Second, the additional information on older adults (ages 66-74) and incarcerated adults makes it possible to compare PIAAC data with rescaled proficiency data from the 2003 National Assessment of Adult Literacy (NAAL). This, in turn, makes it possible to analyze change in adult skills over the decade between the two studies.

20. What are the differences between the first round of data collection for PIAAC in 2012 (the Main Study) and the second round in 2014 (the National Supplement)?

In both rounds of PIAAC in the United States, the same instruments and procedures, including the Background Questionnaire and Direct Assessment, were used for the household survey. For the prison study, the Background Questionnaire was modified to collect information related to the needs and experiences of incarcerated adults.

The two data collections also sampled different populations. The first round of data collection surveyed a nationally representative sample of adults ages 16-65, while the second round did not survey a nationally representative sample of adults, but rather only the key subgroups of interest. The second round of PIAAC also surveyed two subgroups of the population that were not part of the first round of data collection: older adults (ages 66-74) and incarcerated adults (ages 16-74). Note that in the new data release, the two household samples were combined to provide a nationally representative sample of 16-74-year-old adults across the period of data collection (2011–2014).

21. What is the scope of the household sample in the 2014 National Supplement? Is it different from the scope of the household sample in the 2012 Main Study?

The second round of data collection for PIAAC (in 2014) sampled 3,660 U.S. adults who were unemployed (ages 16-65), young (ages 16-34), or older (ages 66-74). The household sample selection in the second round differed from the first round (in 2012) in that only persons in the target groups were selected. The sampling approach in the second round consisted of an area sample that used the same primary sampling units (PSUs) as in the first round; in addition, it included a list sample of dwelling units from high-unemployment Census tracts in order to obtain the oversample of unemployed adults. When the data from both rounds are combined, they produce a nationally representative sample with larger subgroup sample sizes that can produce estimates of higher precision for the subgroups of interest.

22. What is the scope of the prison sample in the second round of data collection?

The Prison Study sample consists of 1,300 adults, ages 16-74, incarcerated in federal and state prisons in the United States. Data collection began in February 2014 and was completed in June 2014. A two-stage sample design was used to select the inmates. In the first stage, 100 prisons were selected (of which 98 participated), and in the second stage, approximately 15 inmates, on average, were selected from the sampled facilities. An oversample of female prisons was selected to ensure an adequate sample of female inmates. The Prison Study sample was selected independently of the PIAAC household sample and is weighted separately from the household sample. Prison weights are calibrated to national prison population totals (for inmates ages 16-74) provided by the Bureau of Justice Statistics.

23. Were the same instruments and procedures used in the first and second rounds of data collection? Were the same instruments and procedures used for the household samples and the prison sample?

The same procedures and instruments, including the Background Questionnaire and Direct Assessment, used in the first round of data collection were employed in the second-round household and prison data collections.

However, the Background Questionnaire for the prison sample was tailored to collect information related to the needs and experiences of incarcerated adults. Adaptations to the questionnaire for the prison population included (a) deleting questions that would be irrelevant to respondents in prison; and (b) adding questions that addressed respondents' specific activities in prison (e.g., participation in academic programs and English as a Second Language (ESL) classes; experiences with prison work assignments; involvement in nonacademic programs, such as life skills and employment readiness classes; and educational attainment and employment prior to incarceration).

The same Direct Assessment used in the household sample was used in the prison sample.

24. What incentives were given to participants?

A monetary incentive of $5 was paid to household representatives who completed the screener (which contained questions that would determine the eligibility of household members to be included in the sample) in the second round of the PIAAC data collection. In the first round, no monetary incentive was paid to household representatives for completing the screener.

The screener incentive used in the second round of data collection was intended to help reduce nonresponse to a screener that was slightly longer than that used in the first round. Specifically, the second-round screener included various questions about unemployment status that were not included in the first-round screener. As in the first round of data collection, following the completion of the assessment, an additional monetary incentive of $50 was paid to each respondent. The incentive was also paid to those adults who attempted to complete the assessment, but were legitimately not able to complete it because of language barriers or physical or mental disabilities. Respondents who refused to continue with the assessment were not compensated.

About the updated U.S. PIAAC data (the combined 2012/2014 dataset)

25. How and why are the current U.S. results (from the combined 2012/2014 dataset) different from the results from the PIAAC Main Study in 2012? Why did the U.S. household scores and ranking change? Did it change because the skills of U.S. adults improved or declined between 2012 and 2014?

The United States conducted two rounds of data collection for PIAAC, but not two independent studies. The data collected in the first and second rounds are meant to be combined and analyzed together, not compared with each other.

Because of the timing of the first and second rounds of the PIAAC data collection in the United States, the information available for the study's sampling frames differed between 2012 and 2014. Specifically, the 2012 data were based on the 2000 U.S. Census, while the 2014 data were based on the 2010 U.S. Census. Therefore, in addition to reflecting the larger combined sample (8,670 household respondents), the improved accuracy of the estimates is due in part to the revised population estimates based on the 2010 Census data, which were unavailable when PIAAC 2012 went into the field.

For the 2012 data collection, weights for all respondents were calibrated to the U.S. Census Bureau's 2010 American Community Survey population totals for those ages 16-65. (The 2010 American Community Survey population totals were derived from 2000 U.S. Census projections because the full 2010 U.S. Census population results were not yet available.) Once the 2010 U.S. Census population results were finalized, the Census Bureau refreshed its entire time series of annual estimates going back to the previous census, using the most current data and methodology. One result of this refresh is a shift in the proportion of the population with more education.

A comparison of the population totals used to calibrate the 2012 Main Study data with those used to calibrate the composite 2012/2014 dataset reveals that the percentage of the U.S. population ages 16-65 with college experience (some college or a college degree) increased by 3 to 4 percent and the percentage of the population ages 16-65 with less than a high school diploma decreased by 4 percent. This change has no effect on PIAAC's measurement of skills in the United States, but it does mean that the proportion of the population with higher skills has been found to be larger than previously estimated for the 2012 Main Study. Therefore, adults' skills did not change in this time period, but due to the larger sample and the updated Census data, the estimates of skills reported with the combined 2012/2014 sample are more accurate.
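
The toy sketch below (all numbers invented) illustrates the mechanism: the same set of respondents, post-stratified first to older education control totals and then to refreshed totals with a larger college-educated share, yields a higher weighted estimate of the proportion at a given proficiency level even though no individual response changed. PIAAC's actual weighting and proficiency estimation are far more elaborate; this shows only the direction of the effect.

    # Invented data: (education group, scored at/above a given proficiency level?)
    respondents = [
        ("less_than_hs", False), ("less_than_hs", False), ("less_than_hs", True),
        ("hs",           False), ("hs",           True),  ("hs",           True),
        ("college",      True),  ("college",      True),  ("college",      False),
    ]

    def poststratified_share(sample, control_totals):
        """Weight each education group to its control total, then estimate the share."""
        counts = {g: sum(1 for grp, _ in sample if grp == g) for g in control_totals}
        weights = {g: control_totals[g] / counts[g] for g in control_totals}
        weighted_hits = sum(weights[grp] for grp, hit in sample if hit)
        return weighted_hits / sum(control_totals.values())

    old_totals = {"less_than_hs": 200, "hs": 500, "college": 300}  # older population controls
    new_totals = {"less_than_hs": 160, "hs": 500, "college": 340}  # refreshed: more college

    print(round(poststratified_share(respondents, old_totals), 3))  # 0.6
    print(round(poststratified_share(respondents, new_totals), 3))  # 0.613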

26. How and why are the international averages reported in the 2014 First Look report different from the averages reported in the 2012 PIAAC First Look report?

The PIAAC international averages in the 2012 PIAAC First Look report were calculated by the OECD using restricted data from all participating countries. However, restricted data from Australia and Canada are not available to the United States because of national restrictions on the use of their data. Thus, with the exception of figures 1 and 2, the PIAAC international averages in the 2014 PIAAC First Look report were calculated (a) without Australia's data, (b) with Canada's publicly available data, and (c) with the 2012/2014 U.S. data. The differences between the international averages calculated for the two reports are very small, but they cause some estimates to round differently.
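
As a toy illustration (invented numbers) of how these recalculations move the average only at the margins, the sketch below assumes the international average is an unweighted mean of country averages, drops one country, and slightly revises another; the resulting shift is small but can change how an estimate rounds.

    # Invented country means; the "2014 report" recalculation drops Country D and
    # uses a slightly revised mean for Country C.
    means_2012_report = {"Country A": 270.2, "Country B": 276.4,
                         "Country C": 262.8, "Country D": 273.0}
    means_2014_report = {"Country A": 270.2, "Country B": 276.4, "Country C": 263.3}

    def international_average(country_means):
        """Unweighted mean of the participating countries' average scores."""
        return sum(country_means.values()) / len(country_means)

    print(round(international_average(means_2012_report), 1))  # 270.6
    print(round(international_average(means_2014_report), 1))  # 270.0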

27. Can the combined 2012/2014 U.S. household sample be compared to the samples in other countries? Which subsamples can be compared? Which cannot?

The combined 2012/2014 U.S. household sample of all adults ages 16-65 can be compared to samples from the other countries that participated in PIAAC. Two of the additional subsamples that were a focus of the National Supplement can also be compared to international samples: the sample of younger adults ages 16-34 and unemployed adults ages 16-65.

Two of the other household samples are unique to the U.S. supplemental study and cannot be compared to samples from other countries: the sample of older adults ages 66-74 and the total sample of adults ages 16-74.

28. Do all of the currently available estimates include data from the 2014 National Supplement?

The estimates in the 2014 PIAAC First Look report include data from the National Supplement. The NCES PIAAC website has also been updated with results based on the 2012/2014 data, where possible. In addition, the NCES PIAAC Results Portal has been updated to show results that include the 2012/2014 data. The NCES International Data Explorer (IDE) has also been updated to allow users to conduct analyses on the U.S. PIAAC 2012/2014 data. Additionally, the U.S. PIAAC 2012/2014 public- and restricted-use data files will soon be available.

The international U.S. public-use file available on the OECD website and the OECD IDE will be updated to include the U.S. PIAAC 2012/2014 data later in 2016.

29. When will the results from the U.S. supplemental study of prisons become available?

Results from the U.S. supplemental study of prisons will be available later in 2016.
