In addition to the following questions about PIAAC, more FAQs about international assessments are available at: http://nces.ed.gov/surveys/international/faqs.asp.
PIAAC is designed to assess adults over a broad range of abilities: from simple reading to complex computer-based problem-solving skills. All countries that participated in PIAAC in 2012 assessed the domains of literacy and numeracy in both a paper-and-pencil mode and a computer-administered mode. In addition, some countries assessed problem solving (administered on a computer) as well as components of reading (administered only in a paper-and-pencil mode). The United States assessed all four domains.
The assessment was designed to be valid cross-culturally and cross-nationally.
PIAAC assessment questions were developed in a collaborative, international process and were based on frameworks created by internationally known experts in each subject or domain. Assessment experts and developers from ministries/departments of education and labor and OECD staff participated in the conceptualization, creation, and extensive year-long reviews of assessment questions. In addition, the PIAAC Consortium's support staff, assisted by expert panels, researchers, and working groups, developed PIAAC's Background Questionnaire. The PIAAC Consortium also guided the development of common standards and procedures for collecting and reporting data, as well as the international "virtual machine" software that administers the assessment uniformly across countries. All PIAAC countries follow the common standards and procedures and use the virtual machine software when conducting the survey and assessment. As a result, PIAAC can provide a reliable and comparable measure of literacy skills in the adult population of participating countries.
Before the administration of the assessment, a field test was conducted in the participating countries. The PIAAC Consortium analyzed the field-test data and implemented changes to eliminate problematic test items or revise procedures prior to the administration of the assessment.
The design and implementation of PIAAC was guided by technical standards and guidelines developed by literacy experts to ensure that the survey yielded high-quality and internationally comparable data. For example, for their survey operations, participating countries were required to develop a quality assurance and quality control program that included information about the design and implementation of the PIAAC data collection. In addition, all countries were required to adhere to recognized standards of ethical research practices with regard to respect for respondent privacy and confidentiality, the importance of ethics and scientific rigor in research involving human subjects, and the avoidance of practices or methods that might harm or seriously mislead survey participants. Compliance with the technical standards was mandatory and monitored throughout the development and implementation phases of the data collection through direct contact, submission of evidence that required activities had been completed, and ongoing collection of data from countries concerning key aspects of implementation.
In addition, participating countries provided standardized training to the interviewers who administered the assessment in order to familiarize them with survey procedures that would allow them to administer the assessment consistently across respondents and reduce the potential for erroneous data. After the data collection process, the quality of each participating country's data was reviewed prior to publication. The review was based on the analysis of the psychometric characteristics of the data and evidence of compliance with the technical standards.
The "problem solving in technology-rich environments" domain assesses the cognitive processes of problem solving: goal setting, planning, selecting, evaluating, organizing, and communicating results. In a digital environment, these skills involve understanding electronic texts, images, graphics, and numerical data, as well as locating, evaluating, and critically judging the validity, accuracy, and appropriateness of the accessed information.
The environment in which PIAAC problem solving is assessed is meant to reflect the fact that digital technology has changed the ways in which individuals live their day-to-day lives, communicate with others, work, conduct their affairs, and access information. Information and communication technology tools such as computer applications, the Internet, and mobile technologies are all part of the environments in which individuals operate. In PIAAC, items for problem solving in technology-rich environments are presented on laptop computers in simulated software applications using commands and functions commonly found in e-mail, web browsers, and spreadsheets.
PIAAC assesses adults in the official language or languages of each participating country. Based on a 1988 congressional mandate and the 1991 National Literacy Act, the U.S. Department of Education is required to evaluate the status and progress of adults' literacy in English. However, in order to obtain background information from a wide range of respondents in the United States, the PIAAC Background Questionnaire was administered in both English and Spanish.
Countries that participate in PIAAC must draw a sample of individuals ages 16-65 that represents the entire population of adults living in households in the country. Some countries draw their samples from national registries of all persons in the country; others draw their samples from census data. In the United States, a nationally representative household sample was drawn from the most current Census Bureau population estimates.
The U.S. sample design employed by PIAAC in the first round of U.S. data collection is generally referred to as a four-stage stratified area probability sample. This method involves the selection of (1) primary sampling units (PSUs) consisting of counties or groups of contiguous counties, (2) secondary sampling units (referred to as segments) consisting of area blocks, (3) dwelling units (DUs), and (4) eligible persons (the ultimate sampling unit) within DUs. Random selection methods are used, with calculable probabilities of selection at each stage of sampling. This sample design ensured the production of reliable statistics for a minimum of 5,000 completed cases for the first round of data collection. For more information about the sample design used in the second round of U.S. data collection, see question 21.
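The staged selection logic described above can be illustrated with a simplified sketch. Everything in the following Python example is invented for illustration: the frame sizes, the equal-probability selection at each stage, and the sample sizes are assumptions, not the actual U.S. design (which uses stratification and selection rules not shown here). The sketch only demonstrates the general idea that selection probabilities multiply across stages, so that each sampled person has a calculable overall probability and a corresponding base weight.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical frame: PSUs (county groups) -> segments (area blocks)
# -> dwelling units (DUs) -> eligible persons. All sizes are invented.
frame = {
    f"psu_{p}": {
        f"seg_{p}_{s}": {
            f"du_{p}_{s}_{d}": [
                f"person_{p}_{s}_{d}_{i}" for i in range(random.randint(1, 4))
            ]
            for d in range(50)
        }
        for s in range(10)
    }
    for p in range(20)
}

def four_stage_sample(frame, n_psu=4, n_seg=3, n_du=5):
    """Select PSUs, then segments, then DUs, then one eligible person
    per DU, multiplying the selection probability at each stage so a
    base weight (1 / overall probability) can be computed."""
    sample = []
    for psu in random.sample(sorted(frame), n_psu):
        p_psu = n_psu / len(frame)
        segments = frame[psu]
        for seg in random.sample(sorted(segments), n_seg):
            p_seg = n_seg / len(segments)
            dus = segments[seg]
            for du in random.sample(sorted(dus), n_du):
                p_du = n_du / len(dus)
                persons = dus[du]
                person = random.choice(persons)
                p_person = 1 / len(persons)
                prob = p_psu * p_seg * p_du * p_person
                sample.append((person, prob, 1 / prob))  # (id, prob, base weight)
    return sample

sample = four_stage_sample(frame)
print(len(sample))  # 4 PSUs x 3 segments x 5 DUs = 60 sampled persons
```

The base weight (the inverse of the overall selection probability) is what allows estimates from such a sample to represent the full population.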
All adults, regardless of immigration status, were part of the PIAAC Main Study's target population for the assessment. In order to get a representative sample of the adult population currently residing in the United States, respondents were not asked about citizenship status before taking the assessment and were guaranteed anonymity for all their answers to the Background Questionnaire. Although the assessment was administered only in English, the Background Questionnaire was offered in both Spanish and English. These procedures allowed the estimates to be applicable to all adults in the United States, regardless of citizenship or legal status, and they mitigated the effects of low English language proficiency.
As in most participating countries, non-native-born adults in the United States had, on average, lower scores than native-born adults. The percentage of non-native-born adults in the United States was 15 percent. The average percentage of non-native-born adults across all participating countries was 12 percent, ranging from less than 1 percent in Japan to 28 percent in Australia.
Sampling is carefully planned and monitored. The rules of participation require that countries design a sampling plan that meets the standards in the PIAAC Technical Standards and Guidelines and submit it to the PIAAC Consortium for approval. In addition, countries were required to complete quality control forms to verify that their sample was selected in an unbiased and randomized way. Quality checks were performed by the PIAAC Consortium to ensure that the submitted sampling plans were followed accurately.
No, PIAAC is a voluntary assessment.
11. How do international assessments deal with the fact that adult populations in participating countries are so different? For example, the United States has higher percentages of immigrants than some other countries.
The PIAAC results are nationally representative and therefore reflect countries as they are: highly diverse or not. PIAAC collects extensive information about respondents' background and therefore supports analyses that take into account differences in the level of diversity across countries. The international PIAAC report produced by the OECD presents some analyses that examine issues of diversity.
As an international assessment of adult competencies, PIAAC differs from student assessments in several ways. PIAAC assesses a wide range of ages (16-65), whereas student assessments target a specific age (e.g., 15-year-olds in the case of PISA) or grade (e.g., grade 4 in PIRLS). PIAAC is a household assessment (i.e., an assessment administered in individuals' homes), whereas the international student assessments (PIRLS, PISA, and TIMSS) are conducted in schools. The skills that are measured in each assessment also differ based on the goals of the assessment. Both TIMSS and PIRLS are curriculum based and are designed to assess what students have been taught in school in specific subjects (such as science, mathematics, or reading) using multiple-choice and open-ended test questions. In contrast, PIAAC and PISA are "literacy" assessments, designed to measure performance in certain skill areas at a broader level than school curricula. So while TIMSS and PIRLS aim to assess the particular academic knowledge that students are expected to be taught at particular grades, PISA and PIAAC encompass a broader set of skills that students and adults have acquired throughout life.
PIAAC has improved and expanded on the cognitive frameworks of previous large-scale adult literacy assessments (including NALS, NAAL, IALS, and ALL) and has added an assessment of problem solving via computer, which was not a component of these earlier surveys. In addition, PIAAC is capitalizing on prior experiences with large-scale assessments in its approach to survey design and sampling, measurement, data collection procedures, data processing, and weighting and estimation. The most significant difference between PIAAC and previous large-scale assessments is that PIAAC is administered on laptop computers and is designed to be a computer-adaptive assessment, so respondents receive groups of items targeted to their performance levels (respondents not able to or not wishing to take the assessment on computer are provided with an equivalent paper-and-pencil version of the literacy and numeracy items). Because of these differences, PIAAC introduced a new set of scales to measure adult literacy, numeracy, and problem solving. Some scales from these previous adult assessments have been mapped to the PIAAC scales so that performance can be measured over time.
PIAAC and PISA both emphasize knowledge and skills in the context of everyday situations, asking students and adults to perform tasks that involve real-world materials as much as possible. PISA is designed to show the knowledge and skills that 15-year-old students have accumulated within and outside of school. It is intended to provide insight into what students who are about to complete compulsory education know and are able to do.
PIAAC focuses on adults who are already eligible to be in the workforce and aims to measure the set of literacy, numeracy, and technology-based problem-solving skills an individual needs in order to function successfully in society. Therefore, PIAAC does not directly measure the academic skills or knowledge that adults may have learned in school. Instead, the PIAAC assessment focuses on tasks that adults may encounter in their lives at home, at work, or in their community.
Each country can collect data for subgroups of the population that have national importance. In some countries, these subgroups are identified by language usage; in other countries, they are distinguished by tribal affiliation. In the United States, different racial and ethnic subgroups are of national importance. However, categories of race and ethnicity are social and cultural categories that differ greatly across countries. As a result, they cannot be compared accurately across countries.
In total, in the United States, 8,670 adults participated in PIAAC in 2012 and 2014, which is not enough respondents to produce accurate estimates at the state or county level. Thus, in the United States, PIAAC results can only be reported at the national level. NCES is in the process of reviewing plans for producing state-level (synthetic) estimates.
PIAAC collects extensive information on educational attainment and years of schooling. For the purpose of cross-country comparisons of educational attainment, the education level classifications of each country are standardized using the International Standard Classification of Education (ISCED). For example, the ISCED level for short-cycle tertiary education (ISCED level 5) is equivalent to an associate's degree in the United States; therefore, comparisons of adults with an associate's degree or its equivalent can be made across countries using this classification. Please note that the education variables in PIAAC 2012 were classified using the ISCED97. Additional education variables that were classified using the ISCED11 are available in the PIAAC 2012/2014 dataset.
The National Supplement, conducted in 2013–14, was the second round of data collection for PIAAC in the United States; it followed the Main Study, the first round of data collection, which was conducted in 2011–12 and surveyed adults ages 16-65. The National Supplement increased the number of unemployed adults (ages 16-65) and young adults (ages 16-34) in the sample and added older adults (ages 66-74) as well as incarcerated adults (ages 16-74).
The second round of data collection for PIAAC in the United States was conducted for two reasons. First, augmenting the first round of PIAAC data by increasing the sample size permits more in-depth analyses of the cognitive and workplace skills of the U.S. population (in particular, of unemployed and young adults). Second, the additional information on older adults (ages 66-74) and incarcerated adults makes it possible to compare PIAAC data with rescaled proficiency data from the 2003 National Assessment of Adult Literacy (NAAL). This, in turn, makes it possible to analyze change in adult skills over the decade between the two studies.
In both rounds of PIAAC in the United States, the same instruments and procedures, including the Background Questionnaire and Direct Assessment, were used for the household survey. For the prison study, the Background Questionnaire was modified to collect information related to the needs and experiences of incarcerated adults.
The two data collections also sampled different populations. The first round of data collection surveyed a nationally representative sample of adults ages 16-65, while the second round did not survey a nationally representative sample of adults, but rather only the key subgroups of interest. The second round of PIAAC also surveyed two subgroups of the population that were not part of the first round of data collection: older adults (ages 66-74) and incarcerated adults (ages 16-74). Note that in the new data release, the two household samples were combined to provide a nationally representative sample of 16-74-year-old adults across the period of data collection (2011–2014).
The second round of data collection for PIAAC (in 2014) sampled 3,660 U.S. adults who were unemployed (ages 16-65), young (ages 16-34), or older (ages 66-74). The household sample selection in the second round differed from the first round (in 2012) in that only persons in the target groups were selected. The sampling approach in the second round consisted of an area sample that used the same primary sampling units (PSUs) as in the first round; in addition, it included a list sample of dwelling units from high-unemployment Census tracts in order to obtain the oversample of unemployed adults. When the data from both rounds are combined, they produce a nationally representative sample with larger subgroup sample sizes that can produce estimates of higher precision for the subgroups of interest.
A monetary incentive of $5 was paid to household representatives who completed the screener—which contained questions that would determine the eligibility of household members to be included in the sample—in the second round of the PIAAC data collection. In the first round, no monetary incentive was paid to household representatives for completing the screener.
The screener incentive used in the second round of data collection was intended to help reduce nonresponse to a screener that was slightly longer than that used in the first round. Specifically, the second-round screener included various questions about unemployment status that were not included in the first-round screener. As in the first round of data collection, following the completion of the assessment, an additional monetary incentive of $50 was paid to each respondent. The incentive was also paid to those adults who attempted to complete the assessment, but were legitimately not able to complete it because of language barriers or physical or mental disabilities. Respondents who refused to continue with the assessment were not compensated.
23. How and why are the current U.S. results (from the combined 2012/2014 dataset) different from the results from the PIAAC Main Study in 2012? Why did the U.S. household scores and ranking change? Did it change because the skills of U.S. adults improved or declined between 2012 and 2014?
The United States conducted two rounds of data collection for PIAAC, but not two independent studies. The first and second rounds of data collection are meant to be combined and analyzed together, not compared with each other.
Because of the timing of the first and second rounds of the PIAAC data collection in the United States, the information available for the study's sampling frames differed between 2012 and 2014. Specifically, the 2012 data were based on the 2000 U.S. Census, while the 2014 data were based on the 2010 U.S. Census. Therefore, in addition to the larger combined sample (8,670 for the household), the improved accuracy of the estimates is due in part to the revised population estimates based on the 2010 Census data, which were unavailable when PIAAC 2012 went into the field.
For the 2012 data collection, weights for all respondents were calibrated to the U.S. Census Bureau's 2010 American Community Survey population totals for those ages 16-65. (The 2010 American Community Survey population totals were derived from 2000 U.S. Census projections because the full 2010 U.S. Census population results were not yet available.) Once the 2010 U.S. Census population results were finalized, the U.S. Census Bureau refreshed its entire time series of estimates going back to the previous census, using the most current data and methodology each year. One result of this refresh is a shift in the proportion of the population with more education.
A comparison of the population totals used to calibrate the 2012 Main Study data with those used to calibrate the composite 2012/2014 dataset reveals that the percentage of the U.S. population ages 16-65 with college experience (some college or a college degree) increased by 3 to 4 percent and the percentage of the population ages 16-65 with less than a high school diploma decreased by 4 percent. This change has no effect on PIAAC's measurement of skills in the United States, but it does mean that the proportion of the population with higher skills has been found to be larger than previously estimated for the 2012 Main Study. Therefore, adults' skills did not change in this time period, but due to the larger sample and the updated Census data, the estimates of skills reported with the combined 2012/2014 sample are more accurate.
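The effect of calibrating weights to revised population totals can be illustrated with a minimal post-stratification sketch. All respondent records, cells, and totals below are invented; actual PIAAC calibration involves many more dimensions and variables than the single education dimension shown here. The point is only that rescaling weights to match external totals changes the estimated population distribution without changing any individual's measured skills.

```python
# Minimal post-stratification sketch: adjust base weights so that the
# weighted education distribution matches external population totals.
# All counts and totals below are invented for illustration.

respondents = [
    {"educ": "less_than_hs", "weight": 100.0},
    {"educ": "hs",           "weight": 120.0},
    {"educ": "some_college", "weight": 90.0},
    {"educ": "college",      "weight": 80.0},
    {"educ": "hs",           "weight": 110.0},
]

# Hypothetical revised population totals (e.g., after a census refresh
# shifted the distribution toward more education).
population_totals = {
    "less_than_hs": 60.0,
    "hs":           200.0,
    "some_college": 120.0,
    "college":      120.0,
}

def post_stratify(respondents, totals):
    """Scale each respondent's weight so the weighted sum within each
    education cell equals the external total for that cell."""
    cell_sums = {}
    for r in respondents:
        cell_sums[r["educ"]] = cell_sums.get(r["educ"], 0.0) + r["weight"]
    return [
        {**r, "weight": r["weight"] * totals[r["educ"]] / cell_sums[r["educ"]]}
        for r in respondents
    ]

calibrated = post_stratify(respondents, population_totals)
total = sum(r["weight"] for r in calibrated)
print(round(total, 1))  # 500.0 -- the sum of the population totals
```

After calibration, the weighted share of each education cell matches the external totals, which is why revised census totals alone can shift reported skill distributions even when no one's skills changed.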
The PIAAC international averages in the 2012 PIAAC First Look report were calculated by the OECD using restricted data from all participating countries. However, restricted data from Australia and Canada are not available to the United States because of national restrictions on the use of their data. Thus, with the exception of figures 1 and 2, the PIAAC international averages in the 2014 PIAAC First Look report were calculated (a) without Australia's data, (b) with Canada's publicly available data, and (c) with the 2012/2014 U.S. data. Differences between the international averages calculated for the 2012 PIAAC First Look report and those calculated for the 2014 PIAAC First Look report are very small, but they cause some estimates to round differently.
The combined 2012/2014 U.S. household sample of all adults ages 16-65 can be compared to samples from the other countries that participated in PIAAC. Two of the additional subsamples that were a focus of the National Supplement can also be compared to international samples: the sample of younger adults ages 16-34 and unemployed adults ages 16-65.
Two of the other household samples are unique to the U.S. supplemental study and cannot be compared to samples from other countries: the sample of older adults ages 66-74 and the total sample of adults ages 16-74.
The estimates included in the 2014 PIAAC First Look report include data from the National Supplement. The NCES PIAAC website has also been updated with results based on the 2012/2014 data, where possible. In addition, the NCES PIAAC Results Portal has been updated to show results that include the 2012/2014 data. The NCES International Data Explorer (IDE) has also been updated to allow users to conduct analyses on the U.S. PIAAC 2012/2014 data. Additionally, the U.S. PIAAC 2012/2014 public- and restricted-use data files will soon be available.
The international U.S. public-use file available on the OECD website and the OECD IDE will be updated to include the U.S. PIAAC 2012/2014 data later in 2016.
The PIAAC Prison Study is an assessment of the literacy, numeracy, and digital problem-solving skills of incarcerated adults between the ages of 18 and 74 in U.S. prisons. The study was administered to approximately 1,300 incarcerated adults. The prison sample is nationally representative of the approximately 1.5 million adults in state and federal prisons, and in private prisons housing state and federal inmates. The results provide comprehensive information on the skills and background of the U.S. adult prison population. Where applicable, the study compares the skills of the U.S. incarcerated and household populations across different characteristics, including age, race, gender, educational attainment, language spoken at home before starting school, and parents' educational attainment.
The goal of the Prison Study was to provide detailed nationally representative data on the skills of incarcerated adults for researchers, correctional administrators, and policymakers to:
Currently, no other countries have used PIAAC to assess the skills of their incarcerated adults.
No, NCES has conducted two previous studies of incarcerated adults. The first was conducted in the early 1990s as part of the National Adult Literacy Survey (NALS) and the second in the early 2000s as part of the National Assessment of Adult Literacy (NAAL). Thus, the PIAAC Prison Study is the third such study of the U.S. incarcerated population, and assesses a broader range of skills than the previous studies. Results from the previous studies can be found in Literacy Behind Prison Walls and Literacy Behind Bars: Results From the 2003 National Assessment of Adult Literacy Prison Survey.
Yes, the PIAAC Prison Study is a nationally representative sample of incarcerated adults in state and federal prisons, and in private prisons housing state and federal inmates. A two-stage sample design with random sampling methods at each stage was used to select the inmates. In the first stage of sampling, 100 prisons were selected (of which 98 participated: 80 were male-only or co-ed prisons and 18 were female-only). Each prison's probability of selection depended on whether or not it housed only female inmates. In the second stage of selection, inmates were randomly selected from a listing of inmates occupying a bed the previous night or, for prisons operated by the Bureau of Prisons, from a roster of inmates provided a week before the visit. Approximately 15 inmates, on average, were selected from the sampled facilities.
Facilities were included in the sample if they:
Based on the recommendations of adult corrections experts, the following types of facilities and institutions were excluded:
Even though juvenile facilities house inmates up to age 21, they were excluded from the PIAAC prison sample for two reasons: (1) to be consistent with the facilities listed in the 2005 Prison Census (Bureau of Justice Statistics Census of State and Federal Adult Correctional Facilities) and (2) to be cost effective. It would not have been cost effective to visit these facilities to sample the small number of inmates 16 years of age and older (approximately 24,000) compared with those in state or federal correctional facilities (1.5 million adult inmates).
Female-only prisons were oversampled in order to ensure an adequate sample size of female inmates to provide estimates of the skills of this group and permit comparisons with male inmates.
The Prison Study sampling frame was created from two data sources:
The same direct assessment of literacy, numeracy, and problem solving in technology-rich environments that was used with the U.S. household sample was also used with the prison sample.
The Background Questionnaire for the prison sample was designed to collect information related to the needs and experiences of incarcerated adults based on recommendations of a prison expert panel. Adaptations to the questionnaire for the prison sample included (a) deleting questions that would be irrelevant to respondents in prison; (b) editing question wording or response options to make them relevant to respondents' experience in prison or prior to their incarceration; and (c) adding questions that addressed respondents' specific activities in prison (e.g., participation in academic programs and English as a Second Language (ESL) classes; experiences with prison work assignments; involvement in nonacademic programs, such as life skills). Several of the prison-specific questions were adopted from the National Assessment of Adult Literacy (NAAL) 2003 Prison Background Questionnaire.
Prisons were sampled approximately 9 months prior to the start of data collection, which took place from February through June 2014. Permission and cooperation of federal, state, and correctional facility officials was required before the data collection could begin. State- and prison-level officials were contacted approximately 6 months prior to the start of data collection. By the time an interviewer entered any correctional institution, the project negotiator had already obtained that facility's approval for participation, established a contact within the facility, and finalized interviewing arrangements.
Similar to the household study, once the sample was selected, the Background Questionnaire interview was conducted by the interviewer in English or Spanish in a private setting (provided by the prison authorities in the case of the Prison Study). Upon completion of the Background Questionnaire, the respondent was given either the paper-and-pencil or the computer-based assessment, depending on the respondent's computer experience, willingness to take the assessment on computer, and performance on a simple computer familiarity test administered after the Background Questionnaire. The majority of inmates (61 percent) took the direct assessment on the laptop computers; 37 percent took the paper-based assessment.
The overall weighted response rate for the prison sample was 82 percent. The prison response rate was 98 percent without substitute prisons and 100 percent with substitute prisons. The final response rate for the Background Questionnaire—which included respondents who completed it and respondents who were unable to complete it because of a literacy-related barrier—was 86 percent (weighted). The final response rate for the overall assessment was 98 percent (weighted).
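Weighted response rates like those quoted above differ from simple case counts: each sampled case is counted by its base weight, so the rate estimates the share of the population represented by respondents rather than the raw share of completed cases. A minimal sketch, using entirely invented weights and dispositions:

```python
# Minimal sketch of a weighted response rate. Each sampled case counts
# by its base weight, so the rate estimates the share of the population
# represented by respondents. All data below are invented.

cases = [
    # (base_weight, responded)
    (150.0, True),
    (200.0, True),
    (120.0, False),
    (180.0, True),
    (50.0,  False),
]

responded_weight = sum(w for w, ok in cases if ok)
eligible_weight = sum(w for w, _ in cases)
weighted_rate = responded_weight / eligible_weight

# For comparison: the unweighted rate is the raw share of cases.
unweighted_rate = sum(ok for _, ok in cases) / len(cases)

print(f"{weighted_rate:.0%}, {unweighted_rate:.0%}")  # 76%, 60%
```

Because respondents with large weights represent more of the population, the weighted and unweighted rates can differ noticeably, which is why published rates such as those above are reported as weighted.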