
How PIAAC Is Administered


Overview

PIAAC was administered in person by trained interviewers using an official study laptop. The interviewer read aloud the study's background questionnaire (BQ), offered in the United States in English or Spanish, and entered the participant's responses into the secure laptop. Once the BQ was completed, the interviewer handed the laptop to the participant to take the study's direct assessments, which measured adults' skills in literacy, numeracy, and/or digital problem solving (also called "problem solving in technology-rich environments," or PS-TRE). If a participant could not use the laptop to take the computer-based assessment (CBA), the interviewer offered a paper-and-pencil assessment (PBA) instead.




Adaptive Design

PIAAC was one of the first computer-based assessments to incorporate an adaptive design. Assessments with an adaptive design can measure participants' abilities more accurately and with fewer test items than a traditional test. In adaptive testing, participants are directed to a set of easier or more difficult test questions based on their answers to the information and communication technology (ICT) core and literacy/numeracy core questions (which were automatically scored as correct or incorrect as the respondent completed them). PIAAC's digital problem-solving domain had no adaptive design.

Each participant’s assignment to easier or more difficult assessment items was based on an algorithm that used a set of variables, including the participant’s (1) level of education; (2) status as a native or non-native language speaker; and (3) performance in the CBA Core (as well as their performance in the CBA module as they advanced through the assessment).
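To make the routing idea concrete, here is a minimal sketch of how such an assignment might work. The variable names, thresholds, and weights below are our own invention for illustration; they are not PIAAC's actual algorithm, which is documented in the technical report.

```python
import random

def assign_testlet(education_level, native_speaker, core_score):
    # Hypothetical routing sketch; thresholds and weights are invented,
    # not PIAAC's actual adaptive algorithm.
    proxy = core_score                          # e.g., number of core items answered correctly
    proxy += 1 if education_level >= 3 else 0   # e.g., 3 = upper secondary or higher
    proxy += 1 if native_speaker else 0
    # Higher proxies are more likely, but not certain, to draw the harder
    # block, echoing the probabilistic flavor of adaptive assignment.
    p_harder = min(0.9, 0.2 + 0.1 * proxy)
    return "harder testlet" if random.random() < p_harder else "easier testlet"

print(assign_testlet(education_level=4, native_speaker=True, core_score=6))
```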

More detail on the assessment design can be found in the U.S. PIAAC 2012/14/17 Technical Report and on the Frequently Asked Questions page.


How PIAAC Cycle I (2012–2017) Was Administered in the United States

Sampling in PIAAC
Countries that participate in PIAAC must draw a sample of individuals ages 16–65 that represents the entire population of adults living in households in the country, regardless of citizenship, nationality, or language. Some countries draw their samples from national registries of all persons in the country; others, including the United States, draw their samples from census data. Sampling is carefully planned and monitored, with quality checks performed by the PIAAC Consortium.

Sampling in U.S. Household Data Collection
In the United States, the sample design used in the household data collections was a four-stage stratified area probability sample. This method involved (1) selecting primary sampling units (PSUs) consisting of counties or groups of contiguous counties; (2) selecting secondary sampling units (referred to as segments) consisting of area blocks; (3) selecting dwelling units (DUs), such as single-family homes or apartments, from address listings; and (4) selecting eligible persons within DUs as respondents. Random selection methods were used at each stage of sampling. During data collection, response rates and sample yields were monitored and calculated by key demographic and subgroup characteristics. These sampling methods and checks ensured that the sample requirements were met and that reliable statistics based on a nationally representative sample could be produced.
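The four stages can be pictured as nested random draws. The toy frame and simple random sampling below are purely illustrative; the actual design stratifies the frame and selects units with probability proportional to size, which this sketch omits.

```python
import random

# Toy frame: PSU -> segment -> list of dwelling units, each a list of persons.
frame = {
    "PSU-A": {"seg-1": [["p1", "p2"], ["p3"]], "seg-2": [["p4"], ["p5", "p6"]]},
    "PSU-B": {"seg-1": [["p7"], ["p8", "p9"]], "seg-2": [["p10", "p11"], ["p12"]]},
}

def four_stage_sample(frame, n_psu=2, n_seg=1, n_du=2):
    # Real PIAAC sampling stratifies and selects with probability
    # proportional to size; this sketch uses simple random draws.
    respondents = []
    for psu in random.sample(sorted(frame), n_psu):            # stage 1: counties/county groups
        for seg in random.sample(sorted(frame[psu]), n_seg):   # stage 2: area blocks (segments)
            dus = frame[psu][seg]
            for du in random.sample(dus, min(n_du, len(dus))): # stage 3: dwelling units
                respondents.append(random.choice(du))          # stage 4: one eligible person per DU
    return respondents

print(four_stage_sample(frame))
```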

Sampling in the Three Rounds of U.S. Household Data Collection
The first round of U.S. data collection (2012) surveyed a nationally representative sample of 5,010 adults ages 16–65. The second round (2014) surveyed 3,660 adults in key subgroups of interest (including young adults, unemployed adults, and older adults ages 66–74), and the third round (2017) surveyed 3,660 adults ages 16–74. The household sample in the second round differed from those in the first and third rounds in that it was not nationally representative and included only adults in the key subgroups of interest. The second-round sampling approach consisted of an area sample that used the same PSUs as the first round, plus a list sample of dwelling units from high-unemployment Census tracts to obtain the oversample of unemployed adults. When the data from the first and second rounds are combined, they produce a nationally representative sample (referred to as the 2012/2014 sample) of 8,670 adults. The sample design for the third round was similar to that of the first round (a nationally representative sample) except that it minimized overlap with the PIAAC 2012/2014 PSUs. This allowed the addition of sample cases from counties with demographic characteristics different from those in PIAAC 2012/2014 and optimized the combined 2012/14/17 sample of 12,330 adults for county-level estimation, ensuring that reliable statistics could be produced.

Sampling in the U.S. PIAAC Prison Study
The U.S. PIAAC Prison Study drew a nationally representative sample of incarcerated adults in state and federal prisons, including private prisons housing state and federal inmates. A two-stage sample design with random sampling methods at each stage was used to select the inmates. In the first stage, 100 prisons were selected, with each prison's probability of selection based on whether it housed only female inmates. In the second stage, inmates were randomly selected from a listing of inmates occupying a bed the previous night or, for prisons operated by the Bureau of Prisons, from a roster of inmates provided a week before the visit.
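A minimal sketch of such a two-stage draw appears below. The stratum allocation rule and data fields are hypothetical; the study's actual selection probabilities are described in the technical report.

```python
import random

def prison_sample(prisons, n_prisons=100, n_inmates=10):
    # Stage 1: stratify by whether the prison houses only female inmates,
    # then select prisons within each stratum (allocation rule invented here).
    female_only = [p for p in prisons if p["female_only"]]
    other = [p for p in prisons if not p["female_only"]]
    n_female = max(1, round(n_prisons * len(female_only) / len(prisons)))
    selected = (random.sample(female_only, min(n_female, len(female_only)))
                + random.sample(other, min(n_prisons - n_female, len(other))))
    # Stage 2: draw inmates from each prison's bed-count listing or
    # (for Bureau of Prisons facilities) its advance roster.
    return {p["name"]: random.sample(p["roster"], min(n_inmates, len(p["roster"])))
            for p in selected}
```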

More details on the sample design can also be found in the U.S. PIAAC 2012/14/17 Technical Report and the Frequently Asked Questions page.
In the United States, the Cycle I background questionnaire took approximately 45 minutes to complete, and the direct assessment took approximately 60 minutes. However, because PIAAC was not a timed assessment, some participants took considerably less time and others considerably more.

The assessment began with the Background Questionnaire (BQ), which was adaptive: the questions the interviewer was given to read to the participant were determined by the answers to previous questions. Because the BQ was automated in this way, participants received only questions relevant to their experience, education, and work history. For example, participants who said they were retired and not working were not asked questions about their “current employment.”
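Skip logic of this kind is straightforward to express in code. The section names and answer codes below are invented for illustration and do not correspond to actual PIAAC BQ items.

```python
def next_bq_sections(answers):
    # Hypothetical skip logic; section names and answer codes are invented,
    # not actual PIAAC BQ items.
    sections = []
    if answers.get("employment_status") in ("retired", "not working"):
        sections.append("past_employment")   # skip the current-employment block
    else:
        sections.append("current_employment")
    if answers.get("highest_education") != "none":
        sections.append("education_detail")
    return sections

print(next_bq_sections({"employment_status": "retired", "highest_education": "secondary"}))
```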

The BQ included questions about the participant’s computer experience. When the BQ interview was completed, the answers to these questions were used to route the participant to either the paper- or computer-based assessment. Participants with no computer experience were routed to the paper-based assessment, as were participants who declined to take the assessment on the laptop. The remainder were routed to the computer-based assessment (see figure A; a simplified routing sketch in code follows the list below).

  • Paper-based assessment (PBA): This assessment began with a core of literacy/numeracy items in paper-and-pencil format that took about 10 minutes to complete. Participants who performed at or above a minimum standard on the core were randomly assigned to either a cluster of literacy items or a cluster of numeracy items that took approximately 30 minutes to complete. After completing those, they received an assessment of reading component skills, which took approximately 20 minutes. Participants who performed poorly on the paper literacy/numeracy core proceeded directly to the reading components booklet (see figure A).

  • Computer-based assessment (CBA): Participants who indicated previous experience with computers in the BQ interview were directed to a core “screener” section composed of two parts: an information and communication technology (ICT) core, which measured basic computer skills such as highlighting text on a screen with the cursor; and a literacy/numeracy core, which measured basic literacy and numeracy skills in an electronic format. Each core section took approximately 5 minutes to complete. Participants who failed the ICT core were routed to the paper-based assessment and took the paper-based literacy/numeracy core items. Participants who passed the ICT core proceeded to the computer-based literacy/numeracy core; those who did not pass it were routed directly to the reading components section of the paper-based assessment.

Participants who performed well on both parts of the computer-based core section were randomly routed to the computer-based literacy, computer-based numeracy, or digital problem-solving domain. The computer-based assessment consisted of Module 1 and Module 2, each a set of literacy, numeracy, or problem-solving units. Respondents who received literacy or numeracy in CBA Module 1 did not repeat the same domain; instead, they received one of the other two domains in CBA Module 2. Respondents who received digital problem solving in CBA Module 1 had a 50 percent chance of receiving a second set of problem-solving items and a 50 percent chance of receiving literacy or numeracy items in CBA Module 2.
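The full routing described above and in the list can be summarized in a short sketch. The function and argument names below are our own shorthand for the branching rules in figure A, not an official specification.

```python
import random

DOMAINS = ["literacy", "numeracy", "problem solving"]

def route_assessment(has_computer_experience, takes_laptop,
                     passes_ict_core, passes_cba_core):
    # No computer experience, or declined the laptop: full paper-based path.
    if not has_computer_experience or not takes_laptop:
        return "PBA: paper core -> literacy or numeracy -> reading components"
    # Failed the ICT screener: fall back to the paper-based core.
    if not passes_ict_core:
        return "PBA: paper core -> literacy or numeracy -> reading components"
    # Passed ICT but failed the electronic literacy/numeracy core.
    if not passes_cba_core:
        return "PBA: reading components only"
    # Passed both cores: random domain for Module 1, then the Module 2 rules.
    module1 = random.choice(DOMAINS)
    if module1 == "problem solving":
        # 50 percent chance of a second problem-solving set, otherwise
        # literacy or numeracy.
        module2 = ("problem solving" if random.random() < 0.5
                   else random.choice(["literacy", "numeracy"]))
    else:
        # Never repeat the Module 1 domain.
        module2 = random.choice([d for d in DOMAINS if d != module1])
    return f"CBA: Module 1 = {module1}; Module 2 = {module2}"

print(route_assessment(True, True, True, True))
```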

The diagram below is a simplified version of the assessment workflow, with the paper-based assessment branching to the right and the computer-based assessment branching to the left. Note that within the computer-based assessment, an adaptive design was used for the literacy and numeracy items in Modules 1 and 2.

Figure A. PIAAC Instruments Simplified Workflow
