
Frequently Asked Questions

In addition to the following questions about PIRLS, more FAQs about international assessments are available from NCES.

The Progress in International Reading Literacy Study (PIRLS) is an international assessment designed to measure trends in reading achievement at the fourth grade.

In PIRLS, fourth-grade students complete a reading assessment and a questionnaire that addresses their attitudes toward reading and their reading habits. In addition, questionnaires are given to students’ teachers and school principals to gather information about students’ school experiences in developing reading literacy. Copies of the questionnaires administered in the United States are available on the National Center for Education Statistics (NCES) PIRLS Questionnaires web page.

For more details on PIRLS, including assessment frameworks, technical documentation, results, and international versions of questionnaires, please visit the International PIRLS website.


PIRLS assesses two aspects of students’ reading literacy: (1) purposes for reading; and (2) processes of comprehension. Purposes for reading refers to the two types of reading that account for most of the reading done by young students, both in and out of school. Processes of comprehension refers to the ways in which readers construct meaning from the text.

Purposes for reading. Passages included in the PIRLS assessment evaluate students' abilities to:

  • read for literary experience
  • read to acquire and use information

Processes of comprehension. Items included in the PIRLS assessment evaluate students' abilities to:

  • focus on and retrieve explicitly stated information
  • make straightforward inferences
  • interpret and integrate ideas and information
  • evaluate and critique content and textual elements

For more information on the purposes for reading and processes of comprehension, see the PIRLS 2021 Assessment Framework.

PIRLS is a carefully constructed reading assessment that consists of a test of reading literacy and questionnaires to collect information about fourth-grade students' literacy performance.
PIRLS can help educators and policymakers by answering questions such as:

  • How well do fourth-grade students read?
  • How do students in one country compare with students in another country in reading literacy?
  • Do fourth-grade students value and enjoy reading?
  • Internationally, how do the reading habits and attitudes of students vary?

Since 2001, PIRLS has been administered every 5 years, with the United States participating in all past assessments.

To better measure the knowledge and skills required for success in the 21st century, PIRLS began the transition to a digitally based assessment in 2016 and has continued this transition in 2021.

In 2016, for the first time, education systems participating in PIRLS could choose to administer an optional assessment: ePIRLS. In addition to completing the paper-and-pencil PIRLS items, students in these systems were asked to complete online informational reading tasks. Each task involved navigating to and obtaining information from two to three different simulated websites, totaling 5 to 10 web pages. Students were then asked to complete a series of comprehension questions based on these tasks.

In 2021, education systems were given the option of participating in an entirely digital assessment incorporating both PIRLS passages and ePIRLS tasks. Thirty-three education systems opted to administer the digital assessment, while 32 education systems administered the assessment on paper. Education systems administering the digital assessment also administered a paper-based bridge assessment to a smaller sample of students to examine if there was a mode effect—that is, whether the shift from a paper format to a digital format affected student performance. For more information on the bridge study, refer to the PIRLS 2021 International Methods and Procedures.


The table below lists the total number of education systems that have participated in each of the five previous administrations of PIRLS. The term “education system” refers to International Association for the Evaluation of Educational Achievement (IEA) member countries and benchmarking participants. IEA member “countries” may be complete, independent political entities or nonnational entities that represent a portion of a country (e.g., England, Hong Kong, the Flemish community of Belgium). Nonnational entities that are represented by their larger country in the main results (e.g., Abu Dhabi in the United Arab Emirates, Ontario in Canada) or whose countries are not IEA members (e.g., Buenos Aires) are designated as “benchmarking participants.” For a complete list of education systems participating in PIRLS, visit the PIRLS Participating Countries page.

Year   Number of education systems participating in PIRLS
2001   36
2006   45
2011   57
2016   PIRLS: 58; ePIRLS: 16
2021   Digital: 33; Paper: 32
Assessment year   Number of participating schools   Number of participating students   Overall weighted participation rate (percent)
2001              174                               3,763                              83
2006              183                               5,190                              82
2011¹             370                               12,726                             81
1 The sample size for PIRLS 2011 was larger than in previous administrations because both TIMSS and PIRLS were conducted in that year. This led to the decision to draw a larger sample of schools and, where feasible, administer both studies in the same schools, albeit to separate classrooms of students. Ultimately, TIMSS (grade 4) and PIRLS in the United States were administered in the same schools but to separately sampled classrooms of students.

2 In 2016, all students in the selected classrooms were invited to participate in the PIRLS main study. The same students also participated in ePIRLS, typically on the day following the main study. Five schools that participated in PIRLS chose not to participate in ePIRLS.

3 In 2021, the United States administered the PIRLS digital assessment and a paper-bridge assessment to a smaller sample of students to examine if there was a mode effect—that is, whether the shift from a paper format to a digital format affected student performance.


Each participating country agrees to select a sample that is representative of the target population as a whole. In 2001, the target population was the higher of the two adjacent grades with the most 9-year-olds. Beginning in 2006, the definition of the target population was refined to represent students in the grade that corresponds to the fourth year of schooling, counting from the first year of International Standard Classification of Education (ISCED) Level 1—fourth grade in most countries, including the United States. This population represents an important stage in the development of reading, as at this point most children have learned to read and are using reading to learn. IEA's Trends in International Mathematics and Science Study (TIMSS) has also chosen to assess this target population of students.


In each administration of PIRLS, schools are randomly selected first (with a probability proportional to the estimated number of students enrolled in the target grade), and then one or two classrooms are randomly selected within each school. In the United States, schools of varying demographics and locations are randomly selected so that the overall U.S. sample is representative of the U.S. school population. The random selection process is important for ensuring that a country's sample accurately reflects its schools and, therefore, can be compared fairly with samples of schools from other countries.


In schools with only one or two fourth-grade classrooms, all students are asked to participate. In schools with more than two fourth-grade classrooms, only students in two randomly selected classrooms are asked to participate. Some students with special needs or limited English proficiency may be excused from the assessment.
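The two-stage selection described above can be sketched in code. The following is a minimal illustration only, not the operational PIRLS sampling program: the sampling frame, enrollment counts, and sample sizes are made-up assumptions, and schools are drawn with replacement for simplicity, whereas the operational design samples without replacement.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical sampling frame: (school_id, estimated grade-4 enrollment).
schools = [(f"school_{i}", random.randint(20, 200)) for i in range(500)]

def sample_schools_pps(frame, n):
    """Stage 1: draw n schools with probability proportional to
    estimated target-grade enrollment (with replacement, for simplicity)."""
    ids, sizes = zip(*frame)
    return random.choices(ids, weights=sizes, k=n)

def sample_classrooms(classrooms, k=2):
    """Stage 2: within a selected school, randomly pick up to k classrooms;
    schools with k or fewer classrooms contribute all of them."""
    if len(classrooms) <= k:
        return list(classrooms)
    return random.sample(classrooms, k)

selected = sample_schools_pps(schools, n=60)
print(len(selected))                          # 60 schools drawn at stage 1
print(sample_classrooms(["4A", "4B", "4C"]))  # 2 of 3 classrooms at stage 2
```

Weighting stage-1 draws by enrollment means every student has a roughly equal chance of selection overall, which is what makes the resulting sample representative of the student population rather than of schools.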


PIRLS is a cooperative effort involving the participation of representatives from every education system in the study. Prior to each administration of PIRLS, the assessment framework is reviewed and updated to reflect changes in the curriculum and instruction of the participating education systems, while maintaining the ability to measure change over time. Extensive input is received from experts in reading education, assessment, and curriculum, as well as representatives from national education centers around the world.

To enable educators, policymakers, and other stakeholders to better understand the results from PIRLS, many assessment items are released for public use after each administration. To replace these items, countries submit items for review by subject-matter specialists; additional items are written by a committee, in consultation with item-writing specialists in various countries, to ensure that the content, as explicated in the frameworks, is covered adequately. Candidate items are first reviewed by a committee and then field-tested in most of the participating education systems; the field-test results are used to evaluate item difficulty, how well items discriminate between high- and low-performing students, and whether items show bias toward or against individual countries or in favor of boys or girls.

PIRLS items include both multiple-choice and constructed-response items; the items are based on a selection of literary passages drawn from children's storybooks and informational texts. Literary passages include realistic stories and traditional tales, while informational texts include chronological and nonchronological articles, biographical articles, informational leaflets, and, beginning in 2016, online informational texts as part of ePIRLS. For more information on ePIRLS online informational texts, see FAQ #5.


Three studies have compared PIRLS and National Assessment of Educational Progress (NAEP) in terms of their measurement frameworks and the reading passages and questions included in the assessments.

The studies found the following similarities and differences:

Similarities:

  • PIRLS and NAEP call for students to develop interpretations, make connections across texts, and evaluate aspects of what they have read.
  • PIRLS and NAEP use literary passages drawn from children's storybooks and informational texts as the basis for the reading assessment.
  • PIRLS and NAEP use multiple-choice and constructed-response questions, with similar distributions of these question types.

Differences:

  • Results of readability analyses suggest that the PIRLS reading passages are easier than the NAEP passages (by about one grade level, on average).
  • PIRLS calls for more text-based interpretation than NAEP. NAEP places more emphasis on having students connect what they have read to other readings or knowledge and critically evaluate what they have read.
The most recent PIRLS administration was in 2021, with results released in May 2023. The next administration of PIRLS is scheduled for 2026.