Authors: Randy Elliot Bennett, Hilary Persky, Andrew R. Weiss, and Frank Jenkins
The Problem Solving in Technology-Rich Environments (TRE) study is the last of three field investigations in the National Assessment of Educational Progress (NAEP) Technology-Based Assessment Project, which explores the use of new technology in administering NAEP. The TRE study was designed to demonstrate and explore an innovative use of computers for developing, administering, scoring, and analyzing the results of NAEP assessments. The prior two studies, Mathematics Online (MOL) and Writing Online (WOL), compared online and paper testing in terms of issues related to measurement, equity, efficiency, and operations.
In the TRE study, two extended scenarios were created for measuring problem solving with technology. These scenarios were then administered to nationally representative samples of students. The resulting data were used to describe the measurement characteristics of the scenarios and the performance of students.
The context for the problem-solving scenarios was the domain of physical science. The TRE Search scenario required students to locate and synthesize information about scientific helium balloons from a simulated World Wide Web environment. The TRE Simulation scenario required students to experiment to solve problems of increasing complexity about relationships among buoyancy, mass, and volume; students viewed animated displays after manipulating the mass carried by a scientific helium balloon and the amount of helium contained in the balloon. Both scenarios targeted grade 8 students who were assumed to have basic computer skills; basic exposure to scientific inquiry and to concepts of buoyancy, mass, and volume; and the ability to read scientifically oriented material at a sixth-grade level or higher.
In the TRE study, data were collected from a nationally representative sample of grade 8 students in the spring of 2003. Over 2,000 public school students participated, with approximately 1,000 students taking each assessment scenario. (See appendix B for detailed information about the TRE sample selection.) Students were assigned randomly within each school to one of the scenarios—Search or Simulation. Students took the scenarios on school computers via the World Wide Web or on laptop computers brought into the schools. For both scenarios, data were collected about student demographics; students’ access to computers, use of computers, and attitudes toward them; and students’ science coursetaking and activities in school.
The TRE study used Evidence-Centered Design (ECD) (Mislevy, Almond, and Lukas 2003) to develop the interpretive framework for translating the multiplicity of actions captured from each student into inferences about what populations of students know and can do. In ECD, the key components of the interpretive framework are student and evidence models. The student model represents a set of hypotheses about the components of proficiency in a domain and their organization. The evidence model shows how relevant student actions are connected to those components of proficiency, including how each relevant action affects belief in student standing on each proficiency component. The structure provided by ECD is particularly important for complex assessments like TRE, for which meaningful inferences must be drawn based on hundreds of actions captured for each student.
For the purposes of TRE, the student model represented the components of student proficiency in the domain of problem solving in technology-rich environments. Two primary components were postulated: scientific inquiry and computer skills. Scientific inquiry was defined as the ability to find information about a given topic, judge what information is relevant, plan and conduct experiments, monitor efforts, organize and interpret results, and communicate a coherent interpretation. Computer skills were defined as the ability to carry out the largely mechanical operations of using a computer to find information, run simulated experiments, get information from dynamic visual displays, construct a table or graph, sort data, and enter text.
Evidence of these skills consisted of student actions called “observables.” Observables were captured by computer and judged for their correctness using scoring criteria called “evaluation rules,” and summary scores were created using a modeling procedure that incorporated Bayesian networks (Mislevy et al. 2000). Bayesian models belong to a class of methods particularly suited to the TRE scenarios because these methods account for multidimensionality and local dependency, neither of which is explicitly handled by the measurement models typically used in NAEP assessments.
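The Bayesian updating at the heart of this kind of scoring can be illustrated with a minimal sketch. The proficiency states, prior, and conditional probabilities below are invented for illustration and are not the calibrated values from the TRE study; an operational network would also model several proficiency components jointly and the dependencies among observables.

```python
# Illustrative Bayes-rule update of belief about one proficiency component
# after a sequence of scored observables. All numbers are hypothetical.

# Prior belief about a student's standing on one component (e.g., inquiry).
prior = {"low": 1 / 3, "medium": 1 / 3, "high": 1 / 3}

# Hypothetical probability of a correct response given each proficiency state.
p_correct = {"low": 0.2, "medium": 0.5, "high": 0.8}

def update(belief, correct):
    """Return the posterior belief after observing one scored action."""
    likelihood = {s: (p_correct[s] if correct else 1 - p_correct[s])
                  for s in belief}
    unnorm = {s: belief[s] * likelihood[s] for s in belief}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

# Four scored observables: correct, correct, incorrect, correct.
belief = prior
for outcome in [True, True, False, True]:
    belief = update(belief, outcome)
```

After mostly correct responses, the posterior mass shifts toward the higher proficiency states, which is the sense in which each relevant action "affects belief in student standing" on a proficiency component.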
Because the TRE study used measures that are experimental, data were analyzed to explore how well the TRE scenario scales captured the skills they were intended to summarize. For each scenario, the following measures were obtained: internal consistency; the relations of student scores to students’ prior knowledge; the TRE scale intercorrelations; the correlations of each observable with each subscale; the locations of the observables on the scales; the response probabilities for prototypic students (i.e., hypothetical students with low, medium, and high levels of proficiency); and the relations of relevant student background information to performance. Results were considered statistically significant if the probability of obtaining them by chance alone was less than .05.
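Internal consistency is commonly summarized with an index such as Cronbach’s alpha. The report does not specify here which index was computed, so the sketch below is illustrative only, and the item scores are invented toy data rather than TRE results.

```python
# Illustrative computation of Cronbach's alpha:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
# where k is the number of items. Data below are invented.

def cronbach_alpha(scores):
    """scores: list of per-student lists of item scores, all the same length."""
    k = len(scores[0])  # number of items

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy data: five students, four dichotomously scored observables.
toy = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
alpha = cronbach_alpha(toy)  # → 0.8 for this toy matrix
```

Higher values indicate that the observables covary enough to justify summarizing them on a single scale, which is what the internal-consistency check above was probing.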
Readers are reminded that the TRE project was intended as an exploratory study of how NAEP can use technology to measure skills that cannot be easily measured by conventional paper-and-pencil means. This report will discuss the ability of a nationally representative student sample to solve problems using technology in the TRE context. However, the results pertain to student performance in only two scenarios employing a limited set of technology tools and a range of science content sufficient only for demonstration purposes. Therefore, results cannot be generalized more broadly to problem solving in technology-rich environments for the nation’s eighth-graders.
The TRE Search scenario consisted of 11 items (or observables) and produced a total score and two subscores: scientific inquiry and computer skills.
The TRE Simulation scenario consisted of 28 observables and produced a total score and three subscores: scientific exploration, scientific synthesis, and computer skills.
NCES 2007-466
Bennett, R.E., Persky, H., Weiss, A.R., and Jenkins, F. (2007). Problem Solving in Technology-Rich Environments: A Report From the NAEP Technology-Based Assessment Project (NCES 2007–466). U.S. Department of Education. Washington, DC: National Center for Education Statistics.
For more information, see The Problem Solving in Technology-Rich Environments (TRE) section on the Nation's Report Card website.