In addition to the following questions about PISA, more FAQs about international assessments are available at: http://nces.ed.gov/surveys/international/faqs.asp
PISA measures student performance in reading, mathematics, and science literacy. Conducted every 3 years, each PISA data cycle assesses one of the three core subject areas in depth (considered the major or focal subject), although all three core subjects are assessed in each cycle (the other two subjects are considered minor domains for that assessment year). Assessing all three subjects every 3 years allows countries to have a consistent source of achievement data in each of the three subjects while rotating one area as the primary focus over the years. In addition to the core assessments, education systems may participate in optional assessments such as financial literacy and problem solving. More information on the PISA assessment frameworks can be found at: www.oecd.org/pisa/pisaproducts.
In 2018, reading literacy was the major subject area, as it was in 2009 and 2000. In addition to the core assessments in science, reading, and mathematics literacy, the 2018 cycle included an optional financial literacy assessment. The United States participated in this optional assessment.
PISA administration cycle
NOTE: Reading, mathematics, and science literacy are all assessed in each assessment cycle of the Program for International Student Assessment (PISA). A separate problem-solving assessment was administered in 2003 and 2012, and financial literacy in 2012, 2015, and 2018. The subject in bolded all capital letters is the major or focal subject area for that cycle. As of the 2015 cycle, PISA is administered entirely on computer.
PISA 2018 consisted of a computer-based assessment of students' science, reading and mathematics literacy. Countries could also opt to participate in an assessment of financial literacy. In each participating school, sampled students sat for a two-hour computer-based assessment that included a combination of science, reading, and mathematics items. A subsample of students who sat for the main assessment was asked to return for a second session in which they completed a computer-based assessment of financial literacy.
In 2018 PISA offered the following questionnaires:
The PISA questionnaires used in the United States are available at: http://nces.ed.gov/surveys/pisa/questionnaire.asp.
|Assessment year|Number of participating students|Number of participating schools|School response rate, original schools (percent)|School response rate, with substitute schools (percent)|Overall student response rate (percent)|
To provide valid estimates of student achievement and characteristics, PISA selects a sample of students that represents the full population of 15-year-old students in each participating country or education system. This population is defined internationally as 15-year-olds (15 years and 3 months to 16 years and 2 months at the beginning of the testing period) attending both public and private schools in grades 7-12. Each country or education system submits a sampling frame to the international consortium of organizations responsible for the implementation of PISA. The OECD's international sampling contractor then validates each country or education system's sampling frame.
Once a sampling frame is validated, the international contractor draws a scientific random sample of a minimum of 150 schools from each frame, with two replacement schools for each original school, unless there are fewer than 150 schools, in which case all schools are sampled. A minimum of 50 schools is sampled for subnational participants. The list of selected schools, both original and replacement, is delivered to each education system's PISA national center. Countries and education systems do not draw their own samples.
Each country/education system is responsible for recruiting sampled schools. They begin with the original sample and only use the replacement schools if an original school refuses to participate. In accordance with PISA guidelines, replacement schools are identified by assigning the two schools neighboring the sampled school in the sampling frame as substitutes to be used in instances where an original sampled school refuses to participate. Replacement schools are required to be in the same implicit stratum (i.e., have similar demographic characteristics) as the sampled school. The international school response-rate target was 85 percent for all education systems. A minimum of 65 percent of schools from the original sample of schools was required to participate for an education system's data to be included in the international database. Education systems were allowed to use replacement schools (selected during the sampling process) to increase the response rate once the 65 percent benchmark had been reached.
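The replacement rule described above (the two schools neighboring a sampled school in the sorted frame serve as its substitutes) can be sketched in Python. This is an illustrative sketch, not the official sampling software; the function name and frame representation are assumptions.

```python
# Hypothetical sketch of PISA-style replacement-school assignment:
# in the sorted sampling frame, the schools immediately after and
# before each sampled school become its first and second substitutes.
def assign_replacements(frame, sampled_indices):
    """frame: list of school IDs sorted so that neighbors share an
    implicit stratum (i.e., have similar demographic characteristics);
    sampled_indices: positions of the originally sampled schools."""
    sampled = set(sampled_indices)
    replacements = {}
    for i in sampled_indices:
        candidates = []
        # Skip neighbors that are themselves in the original sample.
        for j in (i + 1, i - 1):
            if 0 <= j < len(frame) and j not in sampled:
                candidates.append(frame[j])
        replacements[frame[i]] = candidates
    return replacements

frame = ["A", "B", "C", "D", "E", "F", "G"]
print(assign_replacements(frame, [1, 4]))
# {'B': ['C', 'A'], 'E': ['F', 'D']}
```

Because the frame is sorted by stratum before sampling, the neighboring schools are the closest available matches on the stratification variables, which is what makes them acceptable substitutes.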
After schools are sampled and agree to participate, students are sampled. A minimum of 6,300 students was required in each country or education system that planned to administer the core PISA assessment and the optional financial literacy test. (The minimum student sample size for subnational education systems, such as U.S. states, was 1,500 students.) Each country/education system submits student listing forms containing all age-eligible students for each of their schools to the OECD's international contractor for student-level sampling.
The OECD's international contractor carefully reviews the student lists and uses sophisticated software to perform data validity checks to compare each list against what is known of the schools (e.g., expected enrollment, gender distribution) and PISA eligibility requirements (e.g., grade and birthday ranges). The selected student samples are then sent back to each national center. Unlike school sampling, there is no substitution of sampled students allowed.
Schools inform students of their selection to participate. Student participation must be at least 80 percent for a country's/education system's data to be reported by the OECD.
In order to keep PISA as inclusive as possible and to keep the student exclusion rate down, the United States used the UH ('Une Heure' or 'one hour' in French) instrument designed for students with special education needs. The UH instrument was made available to special education needs students within mainstream schools and contained about half as many items as the regular test instrument. These testing items were deemed more suitable for students with special education needs. A UH student questionnaire was also administered, which only contained trend items from the regular student questionnaire. The structure of both the UH test instrument and UH student questionnaire allowed more time per question than the regular instruments and UH sessions were generally held in small groups.
In order to increase the accuracy of the measurement of student ability, PISA 2018 introduced adaptive testing in its reading assessment. Instead of using fixed, predetermined test booklets as was done through PISA 2015, the reading assessment given to each student was dynamically determined, based on how the student performed in prior stages. Multistage adaptive testing in reading is made easier because PISA has moved to a computer-based assessment platform, which provides more flexibility in the routing of items and blocks or units of items.
There were three stages to the PISA 2018 reading assessment: Core, Stage 1, and Stage 2. Students first saw a short Core stage, which consisted of between 7 and 10 items. The vast majority of these items (at least 80 percent and always at least 7 items) were automatically scored. Students' performance in this stage was provisionally classified as low, medium, or high, depending on the number of correct answers to these automatically scored items.
The various Core Blocks of material delivered to students did not differ in any meaningful way in their difficulty. Stage 1 and 2, however, both existed in two different forms: comparatively easy and comparatively difficult. Students who displayed medium performance in the Core stage were equally likely to be assigned an easy or a difficult Stage 1. Students who displayed low performance in the Core stage had a 90 percent chance of being assigned to an easy Stage 1 and a 10 percent chance of being assigned to a difficult Stage 1. Students who displayed high performance in the Core stage had a 90 percent chance of being assigned to a difficult Stage 1 and a 10 percent chance of being assigned to an easy Stage 1.
Students were assigned to easy and difficult Stage 2 blocks of material in much the same way. In order to classify student performance as precisely as possible, however, responses to automatically scored items from both the Core stage and Stage 1 were used.
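The routing probabilities above can be sketched as a short Python routine. The 50/50 and 90/10 assignment probabilities come from the description above; the cut points used to classify Core performance as low, medium, or high are hypothetical placeholders (the actual thresholds are documented in the PISA 2018 Technical Report).

```python
import random

def classify_core(num_correct, num_scored):
    """Provisionally classify Core-stage performance.
    The cut points below are assumed for illustration only."""
    share = num_correct / num_scored
    if share < 0.4:
        return "low"
    if share < 0.7:
        return "medium"
    return "high"

def assign_stage1(performance, rng=random):
    """Route a student to an 'easy' or 'difficult' Stage 1 form,
    using the probabilities described in the text."""
    if performance == "medium":
        return rng.choice(["easy", "difficult"])          # 50/50
    favored = "easy" if performance == "low" else "difficult"
    other = "difficult" if favored == "easy" else "easy"
    return favored if rng.random() < 0.9 else other       # 90/10

student_form = assign_stage1(classify_core(3, 7))
print(student_form)  # 'easy' or 'difficult'
```

Stage 2 assignment works the same way, except that the classification step pools the automatically scored items from both the Core stage and Stage 1.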
For more information about the adaptive PISA test, see The PISA 2018 Technical Report (OECD forthcoming).
Countries participate in PISA, and education systems within countries also participate in the study. For instance, international cities and regions participated in PISA in 2018, and U.S. states have participated in past PISA cycles.
The list of countries and education systems that participated in each PISA cycle is available at: http://nces.ed.gov/surveys/pisa/countries.asp.
The goal of PISA is to represent outcomes of learning rather than outcomes of schooling. By placing the emphasis on age, PISA intends to show what 15-year-olds have learned inside and outside of the classroom throughout their lives, not just in a particular grade. Focusing on age 15 provides an opportunity to measure broad learning outcomes while students across the many participating nations are still required to be in school. Finally, because years of education vary among countries and education systems, choosing an age-based sample makes comparisons across countries and education systems somewhat easier.
PISA is designed to measure "literacy" broadly, while the Trends in International Mathematics and Science Study (TIMSS) and the National Assessment of Educational Progress (NAEP) have stronger links to curriculum frameworks and seek to measure students' mastery of specific knowledge, skills, and concepts. The content of PISA is drawn from broad content areas, such as living systems and physical systems for science, in contrast to more specific curriculum-based content such as biology or physics.
To learn more about the differences in the respective approaches to the assessment of mathematics, science and reading among PISA, TIMSS, and NAEP, see the following papers (a paper comparing NAEP and PISA 2015 is forthcoming):
Except for the very first data collection cycle in 2000, the United States collects PISA data in the fall of the designated data collection year. The PISA 2018 data collection was administered between October and November 2018 in the United States.
The next administration of PISA is in fall of 2021. Results will be reported at the end of 2022.
Student- and school-level data are available for download and analysis. However, the assessment methods used in international assessments like PISA only produce valid scores for groups, not individuals. Data from PISA 2018 for all countries, including the United States, can be obtained from the OECD website at www.oecd.org/pisa. Data collected in the United States for PISA can be downloaded from: http://nces.ed.gov/pubsearch/getpubcats.asp?sid=098 (2018 data forthcoming).
Yes and no. The U.S. national PISA results are representative of the nation as a whole but not of individual states. Drawing a sample that is representative of all 50 individual states would require a much larger sample than the United States currently draws for international assessments, requiring considerable amounts of additional time and money. A state or territory may elect to participate in PISA as an individual education system—as Massachusetts, North Carolina, and Puerto Rico did in 2015—and in that case a sample is drawn that is representative of that state. In the case of Massachusetts and North Carolina, the samples drawn in 2015 represent public school students only. The Puerto Rico sample in 2015 included both public and private school students. No states elected to participate in PISA 2018 separately from the nation.