Search Results: (1-15 of 182 records)
|NCES 2021029||2012–2016 Program for International Student Assessment Young Adult Follow-up Study (PISA YAFS): How reading and mathematics performance at age 15 relate to literacy and numeracy skills and education, workforce, and life outcomes at age 19
This Research and Development report provides data on the literacy and numeracy performance of U.S. young adults at age 19, as well as examines the relationship between that performance and their earlier reading and mathematics proficiency in PISA 2012 at age 15. It also explores how other aspects of their lives at age 19—such as their engagement in postsecondary education, participation in the workforce, attitudes, and vocational interests—are related to their proficiency at age 15.
|REL 2021067||Early Childhood Data Use Assessment Tool
The Early Childhood Data Use Assessment Tool is designed to identify and improve data use skills among early childhood education (ECE) program staff so they can better use data to inform, plan, monitor, and make decisions for instruction and program improvement. Data use is critical in quality ECE programs but can be intimidating for some ECE program staff. This tool supports growth in their data use skills. The tool has three components: (1) a checklist to identify staff skills in using child assessment and administrative data, (2) a resource guide to identify professional development resources for improving data use skills, and (3) an action plan template to support planning for the development and achievement of data use goals. The developers intend results from the tool to support instruction and program improvement through increased and structured use of data.
|REL 2021057||Tool for Assessing the Health of Research-Practice Partnerships
Education research-practice partnerships (RPPs) offer structures and processes for bridging research and practice and ultimately driving improvements in K-12 outcomes. To date, there is limited literature on how to assess the effectiveness of RPPs. Aligned with the most commonly cited framework for assessing RPPs, Assessing Research-Practice Partnerships: Five Dimensions of Effectiveness, this two-part tool offers guidance on how researchers and practitioners may prioritize the five dimensions of RPP effectiveness and their related indicators. The tool also provides an interview protocol for RPP evaluators to use as an instrument for assessing the extent to which the RPP demonstrates evidence of the prioritized dimensions and their indicators of effectiveness.
|NCES 2021021||TIMSS 2019 U.S. Highlights Web Report
The Trends in International Mathematics and Science Study (TIMSS) 2019 is the seventh administration of this international comparative study since it was first administered in 1995. TIMSS is administered every 4 years and is used to compare the mathematics and science knowledge and skills of 4th- and 8th-graders over time. TIMSS is designed to align broadly with mathematics and science curricula in the participating countries. The results, therefore, suggest the degree to which students have learned mathematics and science concepts and skills likely to have been taught in school. In 2019, 64 education systems participated in TIMSS at the 4th grade and 46 education systems at the 8th grade.
The focus of this web report is on the mathematics and science achievement of U.S. students relative to their peers in other education systems in 2019. Changes in achievement over the last 24 years, focusing on changes since 2015 and 1995, are also presented for the U.S. and several participating education systems. In addition, this report describes achievement gaps within the United States and other education systems between top and bottom performers, as well as among different student subgroups.
In addition to numerical scale results, TIMSS also reports the percentage of students reaching international benchmarks. The TIMSS international benchmarks provide a way to understand what students know and can do in a concrete way, as each level is associated with specific types of knowledge and skills.
|REL 2021041||The Association between Teachers’ Use of Formative Assessment Practices and Students’ Use of Self-Regulated Learning Strategies
Three Arizona school districts surveyed more than 1,200 teachers and more than 24,000 students in grades 3–12 in spring 2019 to better understand the relationship between their teachers’ use of formative assessment practices and their students’ use of self-regulated learning strategies, to help shape related teacher development efforts moving forward. Descriptive results indicated that students regularly track their own progress but less frequently solicit feedback from teachers or peers. On the other hand, teachers regularly give students feedback but less frequently provide occasions for students to provide feedback to one another. There was only a small, positive association between the number of formative assessment practices teachers used and the average number of self-regulated learning strategies among their students. The correlation was stronger in elementary classrooms and in STEM (science, technology, engineering, and mathematics) classrooms than in others. Some of teachers’ least-used formative assessment practices—facilitating student peer feedback and student self-assessment—had the strongest, positive associations with the average number of self-regulated learning strategies their students used. The more that teachers reported using these particular practices, the more self-regulated learning strategies their students reported using.
|REL 2021048||Creating and Using Performance Assessments: An Online Course for Practitioners
This self-paced, online course provides educators with detailed information on performance assessment. Through five modules, practitioners, instructional leaders, and administrators will learn foundational concepts of assessment literacy and how to develop, score, and use performance assessments. They will also learn about the role of performance assessment within a comprehensive assessment system. Each module will take approximately 30 minutes to complete, with additional time needed to complete the related tasks, such as creating a performance assessment and rubric. Participants will be provided with a certificate of completion upon finishing the course.
|NCES 2020134||Process Data File of the 2017 NAEP Mathematics Assessment: Grade 8 Released Items
As schools in the United States are increasingly using digital technology in the classroom to teach and assess students, the National Assessment of Educational Progress (NAEP) has moved forward to align with these practices. NAEP’s transition from paper-based to digitally based administration provides an engaging assessment experience for students and aligns with the delivery mode of many other large-scale assessments. Importantly, this transition to digitally based assessment (DBA) also allows NAEP to use tools available in digital platforms to measure content in new ways; to use assistive technology to provide enhanced accommodations for students with special needs; and to collect new types of data that deepen our understanding of what students know and can do, including how they engage with new technologies to approach problem solving.
In 2017, the NAEP mathematics assessment was administered for the first time as a DBA at grades 4 and 8. The digital platform allowed for the collection of new data within the testing system, including information on how students used onscreen tools to develop their responses to the assessment questions. These new data are called response process data. To further enrich our understanding of what students know and can do in the digital environment, response process data from the 2017 NAEP grade 8 mathematics assessment are now available for secondary analysis.
NAEP DBAs offer far more flexibility in meeting the needs of different students. The DBAs include Universal Design Elements, or built-in features that make it possible for more students to participate without special accommodation sessions. Onscreen tools are also available for students to use in their problem solving. The goal is for all students to have a seamless assessment administration, regardless of their ability.
|REL 2020032||Guide to Conducting a Needs Assessment for American Indian Students
American Indian communities often bring a deep sense of connection, relationships, and knowledge to their children’s education. However, education research has repeatedly shown that American Indian students trail their peers in achievement, attendance, and postsecondary readiness. Regional Educational Laboratory Central, the North Dakota Department of Public Instruction, and educators across North Dakota collaboratively developed needs assessment surveys for the state. These surveys are a primary focus of this tool, which can be adapted for use in other state and local education agencies across the nation. The surveys can be used to identify and monitor the needs and successes of schools serving American Indian students. Survey development involved rigorous processes to ensure that the surveys were technically sound and culturally appropriate. The surveys provide a means to evaluate different characteristics of schools, and the resulting data can guide focused supports for American Indian students. For example, state and local education agency staff may use the information from the surveys to identify target areas of need in order to provide additional resources, such as professional development activities, curricular materials, instructional strategies, and research articles, to schools. As education agencies begin to implement strategies and programs, the surveys can be readministered to monitor progress toward goals. The tool also provides guidance on administering the surveys and analyzing and interpreting the resulting data.
|REL 2020039||The Reliability and Consequential Validity of Two Teacher-Administered Student Mathematics Diagnostic Assessments
Several school districts in Georgia currently use two teacher-administered diagnostic assessments of student mathematical knowledge as part of their multi-tiered system of support in grades K-8. These assessments are the Global Strategy Stage (GloSS; New Zealand Ministry of Education, 2012) and the Individual Knowledge Assessment of Number (IKAN; New Zealand Ministry of Education, 2011). However, little is known about the inter-assessor reliability and consequential validity of these assessments. Inter-assessor reliability indicates whether two teachers obtain the same score for a student after administering the test on two occasions, and consequential validity explores perceptions of the value of using the assessments. Rather than rely on occasional testimonials from the field, decisions about using diagnostic assessments across the state should be based on psychometric data from an external source. Districts not currently using the GloSS and IKAN have indicated that they would consider using them to assess students’ current level of mathematical understanding and determine appropriate levels of instruction and intervention, if they were proven to be reliable and valid diagnostic assessments. This study found that the inter-assessor reliability for the GloSS measure and the IKAN Counting Interview is adequate. The inter-assessor reliability for the IKAN Written Assessment (one of the two components of the IKAN) is inadequate, and additional attention must be directed toward improving training for this measure so that reliability can be established. Teachers indicated that they found the data from the GloSS and IKAN assessments more useful than screening data currently in use for guiding decisions about how to provide intervention. 
Although teachers interviewed in the study’s focus groups expressed strong support for using both assessments, they reported in the study survey that the GloSS is more useful than the IKAN because it addresses students' solution strategies, which most other mathematics measures do not assess. Teachers did express some criticisms of both assessments; for example, they felt the IKAN Written Assessment should be untimed and that the GloSS should include familiar vocabulary.
|REL 2020030||Identifying North Carolina Students at Risk of Scoring below Proficient in Reading at the End of Grade 3
This study examines how students’ performance on North Carolina’s assessments taken from kindergarten to the beginning of grade 3 predicts reading proficiency at the end of grade 3. The study used longitudinal student-level achievement data for 2014/15–2017/18. The sample consisted of students in grade 3 who took the 2017/18 grade 3 end-of-grade assessment in reading for the first time and had reading assessment data when they were in kindergarten in 2014/15. The analyses modeled associations between student performance on grade-level interim assessment measures and proficiency on the grade 3 end-of-grade assessment in reading using classification and regression tree analyses. The results indicated that less than 80 percent of students who failed the grade 3 state assessment were correctly identified as being at risk. The only exception to this finding was when a test better aligned to the end-of-grade state assessment was used as a predictor. These results suggest that more information is needed to use test score data from grades K–2 to reliably identify who is at risk of not being proficient on the grade 3 end-of-grade assessment. Educators may want to consider supplementing screening and progress monitoring assessments with informal, curriculum-based assessments that measure student vocabulary, syntax, and listening comprehension skills because research has identified these skills as important predictors of reading comprehension.
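The classification and regression tree approach named in this entry can be sketched as follows. This is a minimal illustration with synthetic data, not the study’s actual model: the feature names, sample, and thresholds are all invented.

```python
# Sketch of using a classification tree to flag students at risk of
# scoring below proficient, on synthetic data (illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
# Hypothetical interim assessment scores from grades K-2 (3 measures)
scores_k2 = rng.normal(100, 15, size=(n, 3))
# Hypothetical outcome: proficient (1) vs. not (0) on the grade 3 test,
# loosely related to the earlier scores plus noise
proficient = (scores_k2.mean(axis=1) + rng.normal(0, 10, n) > 95).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(scores_k2, proficient)

# Sensitivity: share of truly non-proficient students flagged as at risk,
# the quantity the study's "correctly identified" finding refers to
pred = tree.predict(scores_k2)
at_risk = proficient == 0
sensitivity = (pred[at_risk] == 0).mean()
print(f"correctly identified at-risk share: {sensitivity:.2f}")
```

The study’s finding that fewer than 80 percent of failing students were correctly identified corresponds to a sensitivity below 0.80 in this framing.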
|NCES 2020047||U.S. PIAAC Skills Map: State and County Indicators of Adult Literacy and Numeracy
The U.S. PIAAC Skills Map allows users to access estimates of adult literacy and numeracy proficiency in all U.S. states and counties through heat maps and summary card displays. It also provides estimates of the precision of its indicators and facilitates statistical comparisons among states and counties.
|NCES 2020123||Early Childhood Longitudinal Study, Kindergarten Class of 2010-11, Third-Grade, Fourth-Grade, and Fifth-Grade Psychometric Report
This report describes the design, development, administration, quality control procedures, and psychometric characteristics of the direct and indirect child assessment instruments used to measure the knowledge, skills, and development of young children participating in the Early Childhood Longitudinal Study, Kindergarten Class of 2010–11 (ECLS-K:2011) in the third-, fourth-, and fifth-grade data collections. The focus of this volume is the seventh through ninth rounds of data collection: the spring 2014, spring 2015, and spring 2016 rounds.
|NCES 2020038||Estimating Student Achievement at the Topic Level in TIMSS Using IRT-Based Domain Scoring
Student achievement data from large-scale assessments have traditionally only been analyzed at the overall subject (for example, mathematics) or content area levels (for example, geometric thinking). This report introduces a method to analyze large-scale student achievement data at the topic level (for example, using fractions), using data from the Trends in International Mathematics and Science Study (TIMSS) fourth-grade mathematics assessment. Obtaining topic-level scores from large-scale assessments, such as TIMSS, allows for the identification of specific areas of strength and weakness in student performance and how those may be related to curricula. The introduced method has the additional benefits of straightforward implementation and intuitive interpretation.
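One way to see how IRT-based domain scoring can work is the minimal sketch below. It is not the report’s actual procedure: under an assumed 2PL IRT model, a student’s expected proportion correct on the items belonging to one topic is computed from the student’s ability estimate and the topic items’ parameters. All item parameters and ability values here are invented for illustration.

```python
# Minimal 2PL IRT sketch of topic-level ("domain") scoring.
# Item parameters and abilities are hypothetical, not TIMSS values.
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response given ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def domain_score(theta, items):
    """Expected proportion correct over the items in one topic."""
    return sum(p_correct(theta, a, b) for a, b in items) / len(items)

# Hypothetical (discrimination, difficulty) pairs for a "fractions" topic
fractions_items = [(1.2, -0.5), (0.9, 0.3), (1.5, 1.0)]
for theta in (-1.0, 0.0, 1.0):
    score = domain_score(theta, fractions_items)
    print(f"theta={theta:+.1f}  expected fractions score={score:.2f}")
```

Because the expected score is a simple average of item response probabilities, the resulting topic score has the intuitive interpretation the entry describes: it is directly readable as an expected proportion of topic items answered correctly.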
|NCES 2020051||U.S. Performance on the 2015 TIMSS Advanced Mathematics and Physics Assessments: A Closer Look
“U.S. Performance on the 2015 TIMSS Advanced Mathematics and Physics Assessments: A Closer Look” expands upon the results described in NCES’ initial "Highlights" report on TIMSS Advanced. This new report provides in-depth analyses that (1) examine the demographics, school characteristics, and coursetaking patterns of the small subset of U.S. 12th-graders taking the TIMSS Advanced assessments; (2) describe the extent to which the topics assessed in the study were covered in the curricula of the advanced mathematics and physics courses taken by U.S. students; (3) provide detailed performance data within content domains for student subgroups and overall; and (4) illustrate student performance with example items.
This report uses data from the 2015 Trends in International Mathematics and Science Study Advanced (TIMSS Advanced), an international assessment that measures advanced mathematics and physics achievement in the final year of secondary school. TIMSS Advanced is sponsored by the International Association for the Evaluation of Educational Achievement (IEA) and conducted in the United States by NCES.
|NCES 2020222||Program for the International Assessment of Adult Competencies (PIAAC) U.S. 2017 Sample Public-use File (PUF)
The PIAAC U.S. 2017 public-use file (PUF) contains individual unit data including both responses to the background questionnaire and the cognitive assessment from the third U.S. PIAAC data collection, completed in 2017. Statistical disclosure control treatments were applied due to confidentiality concerns. For more details on the PUF, please refer to Appendix E of the U.S. PIAAC Technical Report (NCES 2020-224).