
Program for the International Assessment of Adult Competencies (PIAAC)



4. SURVEY DESIGN

The PIAAC Consortium oversaw all international PIAAC activities of cycle 1 on behalf of the OECD and provided technical support to all participating countries regarding all aspects of PIAAC. Each country was responsible for conducting PIAAC in compliance with the Technical Standards and Guidelines (TS&Gs) provided by the Consortium to ensure that the survey design and implementation yielded high-quality and internationally comparable data. The standards were generally based on agreed-upon policies or best practices to be followed by all participating countries when conducting the study.

The PIAAC Consortium specified TS&Gs for all aspects of the sample design, including the identification of the target population, the creation of the sampling frame, sample size requirements, and sample selection methods. All participating countries were required to submit sample design plans detailing these aspects to the Consortium for approval several months before data collection. Also, countries were required to complete quality control sample selection forms, which collected sampling information for each stage of selection. These were designed to capture aggregated information necessary for verifying that the sample is representative of the target population and that sampling was conducted in an unbiased and randomized way.

The Consortium did not conduct quality control monitoring activities for the U.S. national supplement household and prison studies (2014) or for the 2017 household study. However, activities similar to those monitored during the main study were conducted throughout the data collection period and were reported to and approved by NCES.

Target Population in the United States

U.S. Round 1 (2011-12) or Main Study. The PIAAC main study target population consisted of non-institutionalized adults 16 to 65 years of age who resided in the United States at the time of interview, where age was determined during the screener questionnaire. Adults were included regardless of citizenship, nationality or language. The target population included only persons living in households or group quarters; it excluded all other persons (such as those living in shelters, the incarcerated, military personnel who lived in barracks or bases, or persons who lived in institutionalized group quarters, such as hospitals or nursing homes). The target population included full-time and part-time members of the military who did not reside in military barracks or military bases, adults in other non-institutional collective dwelling units, such as workers’ quarters or halfway homes, and adults living at school in student group quarters, such as dormitories, fraternities or sororities. Adults who were unable to complete the assessment because of a hearing impairment, blindness/visual impairment or physical disability were considered to be out of scope since the assessment did not accommodate such situations.

The household respondent was asked in the screener questionnaire how many people lived in the dwelling and had no usual place of residence elsewhere. Those who thought of the household as their primary place of residence, or who spent most of the year in the household even though they may have had another residence, were listed as eligible household members. The list included persons who usually stayed in the household but were temporarily away on business, vacation, in a hospital or living at school.

U.S. Round 2 (2014) or National Supplement to the Main Study. The target population for the national supplement’s household-based sample consisted of noninstitutionalized adults, 16 to 74 years old, who resided in the United States at the time of interview, excluding adults 35 to 65 years of age who were either employed or not in the labor force as determined by the screener interview.

Prison Study (2014). The target population of the PIAAC prison study was inmates age 16 to 74 from federal, state, and private prisons that housed federal or state inmates in the United States. Based on the recommendation of the PIAAC Prison Expert Group, the following types of facilities and institutions were excluded:

  • private facilities not primarily for state or federal inmates;
  • military facilities;
  • Immigration and Customs Enforcement (ICE) facilities;
  • Bureau of Indian Affairs facilities;
  • facilities operated by or for local government, including those housing state prisoners;
  • facilities operated by the United States Marshals Service;
  • hospital wings and wards reserved for state prisoners;
  • facilities that hold only juveniles; and
  • community corrections facilities (such as halfway houses, boot camps, weekend programs, and other entities in which individuals are locked up overnight).

U.S. Round 3 (2017) or U.S. PIAAC Study. The target population of the 2017 study consisted of noninstitutionalized adults 16 to 74 years of age who resided in the United States at the time of interview, where age was determined during the screener questionnaire. The other details of the target population criteria were similar to those of Round 1 of the household data collection. Throughout the sample design process and implementation, where applicable, the OECD Technical Standards and Guidelines (TS&Gs) were followed.

Sample Design in the United States
U.S. Round 1 (2011-12) or Main Study. To arrive at the required minimum of 5,000 completed cases among non-institutionalized persons 16-65 years of age, a four-stage, stratified area probability sample was selected. It involved the selection of:

  • 80 primary sampling units (PSUs) consisting of counties or groups of contiguous counties;
  • 901 secondary sampling units, or segments, consisting of census blocks or block groups;
  • 9,468 dwelling units (DUs); and
  • eligible individuals within DUs, resulting in 5,010 respondents to the survey.

A nationally representative probability sample of 9,468 U.S. households was selected. Of the 9,468 sampled households, 1,285 were either vacant or not a dwelling unit, resulting in a sample of 8,183 households; of these, there were 1,267 households without an adult age 16 to 65. A total of 5,686 of the 6,916 households with eligible adults completed the screener (up to two adults per household could be selected to complete the questionnaire); survey respondents were then selected from eligible adults completing the screener.
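The screening counts above can be reconciled with a quick arithmetic check (the screener completion rate below is derived here, not quoted from the source):

```python
# Reconstructing the main-study screening counts reported above.
sampled_dus = 9468
vacant_or_not_du = 1285
occupied_households = sampled_dus - vacant_or_not_du           # 8,183
no_eligible_adult = 1267
eligible_households = occupied_households - no_eligible_adult  # 6,916
completed_screeners = 5686
screener_rate = completed_screeners / eligible_households
print(occupied_households, eligible_households, round(screener_rate, 3))
```

The arithmetic confirms the figures in the text: 8,183 occupied households, 6,916 with an eligible adult, and a screener completion rate of roughly 82 percent.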

The design was similar to the one implemented for the 2003 ALL survey and ensured the production of reliable statistics of comparable quality to the 2003 ALL. Random sampling methods were used, with calculable probabilities of selection at each stage of selection.

For the selection of individuals within DUs, a screener interview was used to identify the eligible persons within selected dwelling units. A sampling algorithm was implemented within the Computer Assisted Personal Interviewing (CAPI) system to select one or two sample persons among those identified to be eligible. Once a person was selected, the background questionnaire interview was administered.
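The within-household selection step can be sketched as follows. This is an illustrative stand-in, not the actual CAPI algorithm, whose selection rules are not detailed here; the rule of taking two persons whenever more than one adult is eligible is an assumption:

```python
import random

def select_sample_persons(eligible_adults, rng=None):
    """Illustrative within-household selection (not the actual CAPI
    algorithm): pick one eligible adult at random, or two when the
    screener lists more than one."""
    rng = rng or random.Random()
    if not eligible_adults:
        return []
    take = 1 if len(eligible_adults) == 1 else 2  # up to two per household
    return rng.sample(eligible_adults, take)
```

Under this sketch, each adult in a household with n eligible adults has a within-household selection probability of min(2, n)/n, which would feed into the base weights used later in estimation.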

Round 2 (2014) or National Supplement to the Main Study. The national supplement U.S. sample was designed to achieve three core objectives: (a) oversample young adults (age 16-34), (b) oversample unemployed adults (age 16-65), and (c) expand the sample to include older adults (age 66-74). The sample selection method, therefore, differed from the main study sample design. Given the sample size goal for unemployed adults and their low prevalence in the population, a dual-frame approach was implemented, which is a more efficient method of sampling rare populations. The dual-frame approach consisted of an area sample and a list sample.

Under this approach, an area sample of DUs was selected from the same PSUs and segments selected for the main study. The DU frame consisted of the PIAAC main study listings after removing the DUs previously released. One or more persons from the national supplement household sample target population were sampled within a household.

To obtain the oversample of unemployed adults, the frame was supplemented with a list of DUs from high unemployment census tracts. Within each of the PSUs, five high unemployment tracts were identified, and one was randomly selected for the national supplement list sample. The USPS address list was purchased for each of the sampled tracts, and a sample of DUs was taken from these lists. Within the sampled DUs, only those who were unemployed were eligible for selection.

Specifically, to arrive at a minimum of 3,600 completed cases for the national supplement, the four-stage, stratified area frame probability sample involved the following steps:

  • 80 PSUs previously selected for the main study consisting of counties or groups of contiguous counties;
  • 896 secondary sampling units (SSUs or segments) previously selected for the main study consisting of census blocks or block groups;
  • 9,579 DUs; and
  • 3,617 individuals within DUs resulting in 2,790 respondents to the survey.

The list sample involved the following steps:

  • 80 PSUs previously selected for the main study consisting of counties or groups of contiguous counties;
  • 80 SSUs consisting of census tracts;
  • 6,956 DUs; and
  • 951 individuals within DUs resulting in 870 respondents to the survey.

The national supplement household sample design resulted in a sample that is not stand-alone and is only nationally representative when combined with the main study.

Prison Study (2014). The prison study had a target of a minimum of 1,200 completed cases, including at least 240 females and at least 960 males. In order to achieve this goal, a two-stage, stratified sample was selected with 100 sampled prisons selected in the first stage, among which 80 were all-male or coed prisons and 20 were all-female prisons. All-female prisons were oversampled in order to permit analyses with data from incarcerated women. Due to higher than expected eligibility and response rates, 1,546 eligible inmates were selected within participating prisons, resulting in 1,319 respondents to the survey.

Round 3 (2017) or U.S. PIAAC Study. The 2017 U.S. sample was designed to achieve two core objectives. First, to provide a nationally representative sample of the U.S. household adult population 16-74 years old. Second, to arrive at sufficient coverage of different types of counties so that, when combined with previous samples (2012 main study and 2014 national supplement), it could improve the indirect small area county- and state-level estimates. The sample design, a stratified four-stage cluster sample, resulted in 3,660 completed cases. At each stage, all sampling units had a non-zero and calculable probability of selection.

To support the second core objective of producing indirect county-level estimates, the overlap with the PIAAC 2012/2014 PSUs was minimized in order to maximize the coverage of the combined sample across demographic variables related to proficiency, such as educational attainment, poverty level, minority status, and foreign-born status. That is, adding sample cases from counties whose demographic characteristics (related to adult proficiency) differed from those in the PIAAC 2012/2014 sample allowed the combined sample to be optimized for county-level estimation, given the available sample size. In PIAAC 2017, adaptive survey design procedures were implemented with the objectives of increasing sample yield through a refreshment sample (at the same cost), reducing cost (as measured by contact attempts per completion) through case prioritization, and reducing bias due to nonresponse.

Data Collection and Processing

PIAAC is a voluntary literacy assessment of adults age 16 to 65 internationally and 16 to 74 in the United States.

Reference dates. The PIAAC main study was a new data collection effort and was conducted from August 2011 through April 2012. The national supplement household data collection, round 2 (2014), began in August 2013 and finished in April 2014. The 2017 data collection, round 3, began in March 2017 and finished in November 2017. The prison study (2014) data collection began in February 2014 and finished in June 2014.

Data collection. PIAAC required in-person interviews to complete the background questionnaire, before self-administration of the direct assessments (i.e., literacy, numeracy, reading components and/or problem solving in technology-rich environments). The direct assessments were available in two modes: paper-and-pencil and computer-administered. For the 2012 data collection, approximately 16 percent of the household respondents in the U.S. sample were directed to the paper-and-pencil path. In the 2014 data collection, about 22 percent of the household respondents were directed to the paper-and-pencil path. In the 2017 data collection, about 16 percent of the respondents were directed to the paper-and-pencil path.

The same procedures and instruments used during the main study in 2012 were employed during the subsequent two household data collections in 2014 and 2017. In 2014, the background questionnaire instrument was practically identical, with changes only to the time periods and years referenced in the questions. The 2017 instrument included similar updates, as well as the addition of several new questions, including items on non-degree credentials and their role in influencing skills and labor market outcomes, military service, and total household income.

The background questionnaire for the prison study of 2014 was specifically tailored to collect information related to the needs and experiences of incarcerated adults.

The same direct assessment was used throughout cycle 1 for both household and prison populations.

Incentives. There were no screener incentives provided in the main study (2011-12). However, a $5 incentive was offered to each responding household in the national supplement household sample in order to screen for the subgroups of interest in 2014. Upon review of the 2014 national supplement screening results and the logistics required to track the $5 incentive given to thousands of 2014 national supplement households, combined with the expectation that most PIAAC 2017 households would have at least one selected participant, the $5 incentive was eliminated for the 2017 study. No screener incentives were offered to the prison sample (2014).

In the main study, national supplement, and 2017 study household samples, following the completion of the assessment, a monetary incentive of $50 was paid to each respondent. The incentive was also paid to those adults who attempted to complete the assessment but were not able to complete it for reasons of language barriers or physical or mental disabilities. Respondents who refused to continue with the assessment were not compensated.

Data entry and verification. The Consortium required that data preparation and processing be performed in a uniform way within and across countries and with an acceptable quality level. Key data preparation tasks ensured this uniformity and included manual data entry of scoring sheets, generation and review of edits on computer-generated data files, management of coding, scoring of related files, validation of the structural consistency of the database, and delivery of the national database to the Consortium. Consortium-provided Data Management Expert (DME) software was used to perform many of these data preparation and processing activities. The Consortium provided each country with the DME software, which was used to assemble, manage, verify, and edit each country’s national database. The national DME database consisted of two parts: (1) data collected by the virtual machine’s processing of the background questionnaire and the computer-based assessment items or tests administered on the interviewer laptops, and (2) scoring data entered manually and generated as the result of scoring the paper-based assessment booklets.

Estimation Methods

This section provides information for rounds 1 (main study), 2 (national supplement), and 3 (2017 study sample).

Simple formulas that assume simple random sampling are not appropriate for variance estimation with PIAAC data because of the complex sample design. The properties of a sample selected through a complex design can be very different from those of a simple random sample, in which every individual in the target population has an equal chance of selection and observations from different sampled individuals can be considered statistically independent of one another. One way of addressing these departures from standard statistical properties (e.g., dependent observations, unequal probabilities of selection) is by using sampling weights.

Weighting. All population and subpopulation characteristics based on the PIAAC data used sampling weights in their estimation. The purpose of calculating sample weights for PIAAC was to permit inferences from sampled persons to the populations from which they were drawn and to allow tabulations to reflect estimates of the population parameters. Sample weights were produced to accomplish the following five objectives: (1) to permit unbiased estimates, taking account of the fact that not all persons in the population have the same probability of selection; (2) to minimize biases arising from differences between cooperating and noncooperating sampled persons; (3) to utilize auxiliary data on known population characteristics in such a way as to reduce sampling errors and bring estimates up to the population totals; (4) to reduce the variation of the weights and prevent a small number of observations from dominating domain estimates; and (5) to facilitate sampling error estimation under complex sample designs.

Objective 1 was accomplished by computing base weights for the households selected for screening and, subsequently, for persons selected for the background questionnaire and assessment from the eligible participating households in the household sample. For the prison study, it was accomplished by computing base weights for the sampled prisons and then inmates sampled in the participating prisons for the background questionnaire and assessment.
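As a sketch of Objective 1, a base weight is the inverse of a unit's overall selection probability, which is the product of its per-stage probabilities. The stage probabilities below are made up for illustration:

```python
def base_weight(stage_probs):
    """Base weight = 1 / (product of per-stage selection probabilities)."""
    p = 1.0
    for prob in stage_probs:
        p *= prob
    return 1.0 / p

# Hypothetical PSU, segment, DU, and within-household probabilities:
w = base_weight([0.05, 0.1, 0.02, 0.5])
print(w)  # 20000.0
```

Intuitively, a respondent selected with overall probability 1/20,000 "represents" 20,000 people in the population.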

Objective 2 was accomplished through nonresponse weighting adjustments that accounted for screener nonresponse and background questionnaire nonresponse.

For Objective 3, the weights for the household sample were calibrated to known totals from the 2012 American Community Survey (ACS). For the prison study, the weights were calibrated to known totals provided by the Bureau of Justice Statistics. The weights were calibrated using a raking procedure (i.e., iterative poststratification) so that numerous totals calculated with the resulting full-sample weights would agree with the ACS totals.
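A minimal sketch of raking (iterative poststratification), assuming simple categorical margins; the actual PIAAC calibration used many ACS dimensions and convergence rules not described here:

```python
def rake(weights, categories, targets, iters=100, tol=1e-10):
    """Adjust weights so weighted totals match known margins on each
    dimension in turn, repeating until the adjustments stabilize.
    categories[d][i]: unit i's category on dimension d.
    targets[d][c]: known population total for category c on dimension d."""
    w = list(weights)
    for _ in range(iters):
        max_change = 0.0
        for d, cats in enumerate(categories):
            # Current weighted total per category on this dimension.
            totals = {}
            for wi, c in zip(w, cats):
                totals[c] = totals.get(c, 0.0) + wi
            # Scale each unit's weight so totals hit the known margins.
            for i, c in enumerate(cats):
                factor = targets[d][c] / totals[c]
                w[i] *= factor
                max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:
            break
    return w
```

For example, raking four unit weights to margins {a: 10, b: 10} and {x: 12, y: 8} leaves every margin matched simultaneously once the factors converge to 1.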

Objective 4 was addressed by trimming the weights. A small number of weights were reduced using an inspection approach (referred to as the k x median rule) as required by PIAAC weighting guidelines. After the trimming procedure, the weights were again calibrated to ACS totals. No trimming was conducted for the prison sample.
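An illustrative version of median-based trimming follows. The PIAAC guidelines' exact k and inspection rules are not specified here, so the default k = 3.5 is an assumption:

```python
def trim_weights(weights, k=3.5):
    """Cap any weight exceeding k times the median weight (sketch of a
    'k x median' rule; PIAAC's actual procedure also involved inspection)."""
    s = sorted(weights)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    cap = k * median
    return [min(w, cap) for w in weights]
```

After a step like this, the trimmed weights would be recalibrated to the ACS totals, as described above.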

Finally, Objective 5 was accomplished by creating 45 replicate weights using the stratified jackknife method. Full-sample and replicate weights were calculated for each record to facilitate the computation of unbiased estimates and their standard errors. The weighting procedures were repeated for 45 strategically constructed subsets of the sample to create a set of replicate weights for variance estimation using the jackknife method. The replication scheme was designed to produce stable estimates of standard errors.
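A sketch of how replicate weights yield a standard error for a weighted mean; the unit multiplier on the squared deviations is an assumption, since the actual factor depends on the replication scheme:

```python
def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def jackknife_se(values, full_weights, replicate_weights):
    """Re-estimate the statistic under each set of replicate weights and
    sum the squared deviations from the full-sample estimate."""
    theta = weighted_mean(values, full_weights)
    var = sum((weighted_mean(values, rw) - theta) ** 2
              for rw in replicate_weights)
    return var ** 0.5
```

In PIAAC's case, the same weighting adjustments are applied within each of the 45 replicate subsets, so the replicate estimates also reflect the variability introduced by nonresponse adjustment and calibration.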

Weighting was performed separately for the household and prison samples. For the household sample, an additional goal of the weighting process was to improve the precision of estimates for unemployed persons and two groups of young adults (ages 16-24 and 25-34) by combining the main study and national supplement. Composite weights were produced so that national estimates could be generated for the combined sample. The main study sample, national supplement area sample, and national supplement list sample were weighted separately to account for nonresponse, calibrated, composited, and then recalibrated.

In addition, a set of weights for the combined PIAAC 2012/2014/2017 sample was created to allow for the creation of indirect small area county-level estimates and state-level estimates when the PIAAC 2017 sample is combined with the PIAAC 2012/2014 sample. The weights for the combined sample also allow for the production of national estimates for more detailed subgroups of the population than is possible with the separate samples.

Imputation. For the combined household sample, missing values of age category (10 cases) were imputed using the broad age range collected in the screener. Race/ethnicity for cases missing this item (175 cases) was created by imputing ethnicity (Hispanic/not Hispanic) first, and then race. To obtain values for ethnicity, cells were formed by PSU, segment, and language spoken at the screener. Then a hotdeck procedure was used to assign the value from a random donor within the cell to the missing case. To obtain values for race, cells were formed by PSU and segment, and values were imputed using the hotdeck procedure. For level of education and country of birth (information that was not collected through the screener), a limited amount of imputation was performed to fill in the data for respondents.

Imputation was performed separately for the main study and national supplement but followed the same general procedure. No employment status information was collected in the screener in the main study, so a different imputation approach was needed. For the two respondents with missing values, cells were formed by PSU and segment, and values were imputed using the hotdeck procedure. Imputation for the literacy-related nonrespondents was done by taking a random draw from the employment distributions in the 2012 ACS. For language problems, this was based on the distribution of employment for those who spoke English not well or not at all. For learning/mental disabilities, imputation used the distribution of employment for persons with cognitive difficulty.
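The hotdeck step described above can be sketched as follows, assuming records stored as simple dicts; the field and cell-variable names are illustrative:

```python
import random

def hotdeck_impute(records, field, cell_vars, rng=None):
    """Within cells defined by cell_vars (e.g., PSU and segment), replace
    a missing `field` with the value from a random donor in the same cell."""
    rng = rng or random.Random(0)
    donors = {}
    # Collect observed values per cell.
    for r in records:
        if r.get(field) is not None:
            key = tuple(r[v] for v in cell_vars)
            donors.setdefault(key, []).append(r[field])
    # Fill each missing value from a random donor in the same cell.
    for r in records:
        if r.get(field) is None:
            key = tuple(r[v] for v in cell_vars)
            if donors.get(key):
                r[field] = rng.choice(donors[key])
    return records
```

Because the donor is drawn from the same PSU-by-segment cell, the imputed value preserves local distributional structure rather than pulling from the national distribution.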

Small area estimation (SAE). Since 2013, PIAAC has published a large volume of official statistics about the proficiency of adults in the United States. The published statistics are mainly for the nation and for major subgroups. However, policymakers, business leaders, and educators/researchers often need information about smaller geographic areas. To address this need, PIAAC has used advanced statistical modeling approaches to produce literacy and numeracy estimates for all states and counties.

The objectives of the small area estimation process are to (1) reduce the mean square error associated with the state and county estimates, and (2) provide accurate estimates of the mean square error to allow users to understand the level of uncertainty associated with the small area estimates. The mean square error is a measure of the uncertainty surrounding the small area estimates and has two components: bias and variance. To achieve the first objective, a large number of covariates were gathered from various sources, including the American Community Survey. A small number of covariates are selected for the model that provide the most predictive power. In addition, three levels of random effects (county, state, census division) are used to account for variation between areas. For the second objective, various sources of error (e.g., sampling error, measurement error) are incorporated in a Hierarchical Bayes model to generate thousands of realizations of the model outcomes. The variation across the realizations provides the resulting mean square error estimate. A thorough set of diagnostics is done on the model results prior to making predictions for areas that do not have PIAAC sample.
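The bias-variance decomposition of mean square error mentioned above can be computed directly over repeated realizations of an estimator (the numbers below are made up):

```python
def mse_decomposition(estimates, truth):
    """MSE = bias^2 + variance, computed over repeated realizations of an
    estimator (here, simulated estimates of a known truth)."""
    n = len(estimates)
    mean = sum(estimates) / n
    bias = mean - truth
    variance = sum((e - mean) ** 2 for e in estimates) / n
    return bias ** 2, variance, bias ** 2 + variance
```

This is the quantity the Hierarchical Bayes realizations approximate: the spread across thousands of simulated outcomes stands in for the variance term, while model diagnostics guard against the bias term.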

A visualization-based website will launch in 2020 that will allow users access to the small area estimates through heat maps and summary card displays. This user-friendly website will provide precision estimates and facilitate statistical comparisons among counties and states.
