In addition to the following questions about PISA, more FAQs about international assessments are available at: http://nces.ed.gov/surveys/international/faqs.asp
PISA measures student performance in mathematics, reading, and science literacy. PISA is conducted every 3 years, and each cycle assesses one of the three core subject areas in depth (considered the major domain); the other two subjects are still assessed in each cycle but are considered minor domains for that assessment year. Assessing all three subjects every 3 years gives countries a consistent source of achievement data in each subject while rotating the primary focus across cycles. More information on the PISA assessment frameworks can be found at: www.oecd.org/pisa/pisaproducts.
Science was the major subject area in 2015, as it was in 2006, since each subject is the major domain once every three cycles. In 2015, all subjects were assessed primarily through a computer-based assessment. In addition to the assessments of science, reading, mathematics, and collaborative problem solving, the United States participated in the optional financial literacy assessment in 2015.
PISA administration cycle
NOTE: Reading, mathematics, and science literacy are all assessed in each assessment cycle of the Program for International Student Assessment (PISA). The subject in all capital letters is the major subject area for that cycle. A collaborative problem solving (CPS) assessment was administered in 2015. Financial literacy is an optional assessment for countries. Beginning in 2015, PISA is administered entirely on computer.
PISA 2015 consists of computer-based assessments of students' mathematics, science, and reading literacy, and collaborative problem solving skills. In each participating school, sampled students sit for a two-hour computer-based assessment. Countries can also opt to participate in an assessment of financial literacy.
In 2015, students completed a student questionnaire providing information about their background, attitudes toward science, and learning strategies, and the principal of each participating school completed a school questionnaire providing information on the school's demographics and learning environment. New to 2015, PISA included teacher questionnaires, with separate versions administered to science and non-science teachers; up to 10 science teachers and 15 non-science teachers per school were asked to complete them. The PISA questionnaires used in the United States in prior cycles are available at: http://nces.ed.gov/surveys/pisa/questionnaire.asp.
|Assessment year||Number of participating students||Number of participating schools||School response rate, original schools (percent)||School response rate, with substitute schools (percent)||Overall student response rate (percent)|
To provide valid estimates of student achievement and characteristics, PISA selects a sample of students that represents the full population of 15-year-old students in each participating country or education system. This population is defined internationally as 15-year-olds (15 years and 3 months to 16 years and 2 months at the beginning of the testing period) attending public and private schools in grades 7-12. Each country or education system submits a sampling frame to the consortium of organizations responsible for the international implementation of PISA 2015. Westat, a survey research firm in Rockville, Maryland, contracted by the OECD, then validates each country's or education system's frame.
Once a sampling frame is validated, Westat draws a scientific random sample of a minimum of 150 schools from each frame, with two replacement schools designated for each original school; if a frame contains fewer than 150 schools, all schools are sampled. A minimum of 50 schools is sampled for adjudicated entities (e.g., U.S. states that opted to participate separately in 2015). The list of selected schools, both original and replacement, is delivered to each education system's PISA national center. Countries and education systems do not draw their own samples.
Each country or education system is responsible for recruiting the sampled schools, beginning with the original sample and turning to replacement schools only if an original school refuses to participate. In accordance with PISA guidelines, the two schools neighboring each sampled school in the frame are designated as its replacements. Replacement schools are required to be in the same implicit stratum (i.e., have similar demographic characteristics) as the sampled school. A minimum participation rate of 65 percent of the original sample of schools is required for a country's or education system's data to be included in the international database.
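The school-sampling step described above can be sketched in a few lines of code. This is a hypothetical simplification: the 150-school minimum and the two-neighbor replacement rule come from this FAQ, but the sketch uses simple random sampling, whereas the operational PISA design involves additional features (such as stratified systematic selection) that are omitted here.

```python
import random

def sample_schools_with_replacements(frame, n_schools=150, seed=0):
    """Illustrative sketch of PISA-style school sampling.

    `frame` is a list of school IDs sorted by implicit stratum, so a
    school's neighbors on the list share its demographic stratum. For
    each sampled school, the neighboring non-sampled schools are
    designated as replacements, to be contacted only if the original
    school refuses to participate. Not the operational PISA design.
    """
    rng = random.Random(seed)
    if len(frame) <= n_schools:
        # Small frame: every school is sampled, so no replacements exist.
        sampled = set(range(len(frame)))
    else:
        sampled = set(rng.sample(range(len(frame)), n_schools))
    plan = {}
    for i in sorted(sampled):
        replacements = [frame[j] for j in (i - 1, i + 1)
                        if 0 <= j < len(frame) and j not in sampled]
        plan[frame[i]] = replacements
    return plan
```

For a frame of, say, 300 schools, this yields 150 original schools, each mapped to up to two same-stratum replacements.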
After schools are sampled and agree to participate, students are sampled. Each country or education system submits student listing forms containing all age-eligible students in each participating school using KeyQuest, the internationally provided sampling software.
Westat reviews the student lists, performing data validity checks that compare each list against what is known about the school (e.g., expected enrollment, gender distribution) and against PISA eligibility requirements (e.g., grade and birth-date ranges). The selected student samples are then sent back to each national center. Unlike schools, students are not sampled with replacement.
Schools inform sampled students of their selection to participate on the assessment day. Student participation must be at least 80 percent for a country's or education system's data to be reported by the OECD.
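Taken together, the two participation thresholds described above (at least 65 percent of originally sampled schools and at least 80 percent of sampled students) can be expressed as a simple check. The function name and rate inputs below are illustrative; the OECD's actual adjudication also weighs participation after replacement schools are used.

```python
def meets_pisa_reporting_thresholds(original_school_rate, student_rate):
    """Return True if both minimum participation rates are met:
    at least 65% of originally sampled schools and at least 80% of
    sampled students. Rates are proportions in [0, 1]. Illustrative
    only; real adjudication involves additional criteria."""
    return original_school_rate >= 0.65 and student_rate >= 0.80
```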
Countries and education systems within countries participate in PISA.
The list of countries and education systems that participated in each PISA cycle is available at: http://nces.ed.gov/surveys/pisa/countries.asp.
PISA is administered in the fall in the United States, typically in October and November of the assessment year. The 2015 data were collected in October and November 2015.
These and other data products are available at http://nces.ed.gov/pubsearch/getpubcats.asp?sid=098. The PISA International Data Explorer (IDE) (http://nces.ed.gov/surveys/international/ide/) includes data for the United States and other education systems. Users can create their own tables and figures with the IDE.
The most recent administration of PISA was in 2015. In the United States, data collection occurred in October-November 2015. Results will be reported in December 2016. The next round of PISA will be administered in 2018.
In 2012, the OECD piloted a new test, based on the PISA assessment frameworks and statistically linked to the PISA scales, for individual schools. The purpose of this test, called the OECD Test for Schools in the United States, is for individual schools to benchmark their performance internationally. While based on PISA, the OECD Test for Schools is a different assessment and has a different purpose than PISA. More information about this is available from the OECD at: http://www.oecd.org/pisa/aboutpisa/pisa-basedtestforschools.htm.
PISA differs from these studies in several ways:
PISA is designed to measure "literacy" broadly, while other studies, such as TIMSS and NAEP, have a stronger link to curriculum frameworks and seek to measure students' mastery of specific knowledge, skills, and concepts. The content of PISA is drawn from broad content areas, such as space and shape for mathematics, in contrast to more specific curriculum-based content such as geometry or algebra.
In addition to the differences in purpose and age coverage between PISA and other international comparative studies, PISA differs from other assessments in what students are asked to do. PISA focuses on assessing students' knowledge and skills in reading, mathematics, and science literacy in the context of everyday situations. That is, PISA emphasizes the application of knowledge to everyday situations by asking students to perform tasks that involve interpretation of real-world materials as much as possible. Analyses based on expert panels' reviews of mathematics and science items from PISA, TIMSS, and NAEP indicate that PISA items require multi-step reasoning more often than either TIMSS or NAEP items. These analyses also show that PISA mathematics and science literacy items often involve the interpretation of charts and graphs or other "real world" material. These tasks reflect the underlying assumption of PISA: as 15-year-olds begin to make the transition to adult life, they need not only to comprehend what they read or to retain particular mathematical formulas or scientific concepts; they also need to know how to apply their knowledge and skills in the many different situations they will encounter in their lives.
Moreover, NAEP and PISA have different underlying approaches to mathematics that play out in the operationalization of items. NAEP focuses more closely on school-based curricular attainment, whereas PISA focuses on literacy, or the use of mathematics in real-world situations. The implication of this difference is that while the NAEP assessment is not devoid of real-world contexts, it does not specifically require them; thus it includes computation items as well as problem-solving items U.S. students are likely to encounter in school. PISA does not include any items, computation or otherwise, that are not placed within a real-world context, and in that way it may be more unconventional to some students. PISA items also may have a heavier reading load, use a greater diversity of visual representations, and require students to make assumptions or sift through information that is irrelevant to the problem (i.e., to "mathematize"), whereas NAEP items typically do not. These differences between the assessments help explain divergent trend results.
A study comparing the PISA and NAEP (grades 8 and 12) reading assessments found that both view reading as a constructive process and both measure similar cognitive skills. There are differences between them, though, reflecting in part the assessments' different purposes. First, NAEP has longer reading passages than PISA and, because of that length, asks more questions about each passage. With regard to cognitive skills, NAEP places more emphasis on critiquing and evaluating text, while PISA places more emphasis on locating information. NAEP also measures students' understanding of vocabulary in context, while PISA includes no questions of this nature. Finally, NAEP relies more heavily on multiple-choice items than PISA, and the nature of the open-ended items differs: PISA's open-ended items call for less elaboration and support from the text than NAEP's do.
To learn more about the differences in the respective approaches to the assessment of mathematics, science and reading among PISA, TIMSS, and NAEP, see the following papers:
The goal of PISA is to represent outcomes of learning rather than outcomes of schooling. By placing the emphasis on age, PISA intends to measure what 15-year-olds have learned inside and outside of school throughout their lives, not just in a particular grade. Focusing on age 15 provides an opportunity to measure broad learning outcomes while all students across the many participating nations are still required to be in school. Finally, because years of education vary among countries and education systems, choosing an age-based sample makes comparisons across countries and education systems somewhat easier.
Before talking about how the TIMSS results compare with the PISA results, it is important to recognize the ways in which TIMSS and PISA differ.
While TIMSS and PISA both assess mathematics and science, they differ with respect to which students are assessed, what is measured, and the participating countries and educational jurisdictions.
On TIMSS, U.S. students at grades 4 and 8 performed above the TIMSS scale average in both mathematics and science, unlike on PISA, where in 2012 U.S. 15-year-olds performed below the OECD average in mathematics and not measurably different from it in science. Five East Asian countries and education systems (Singapore, Korea, Hong Kong-China, Chinese Taipei, and Japan) outperformed the United States in mathematics and science on both TIMSS and PISA.
Student and school-level data are available for download and analysis. However, the assessment methods used in international assessments only produce valid scores for groups, not individuals. To protect respondent privacy, individual students, principals, and teachers cannot be identified from the data. Data from PISA 2012 for all countries, including the United States, can be obtained from the OECD website at www.pisa.oecd.org. Data collected in the United States for PISA can be downloaded from: http://nces.ed.gov/pubsearch/getpubcats.asp?sid=098. Those interested in exploring the PISA data can use the PISA International Data Explorer (IDE) (http://nces.ed.gov/surveys/international/ide/), an online data tool that helps users create their own tables and figures.
Yes and no. The U.S. national PISA results are representative of the nation as a whole but not of individual states. Drawing a sample that is representative of all 50 individual states and the District of Columbia would require a much larger sample than the United States currently draws for international assessments, at considerable additional time and cost. A state may elect to participate in PISA as an individual education system—as Connecticut, Florida, and Massachusetts did in 2012, and as Massachusetts, North Carolina, and Puerto Rico did in 2015—and in that case a sample is drawn that is representative of that state.