Search Results: (16-30 of 201 records)
NCES 2021022 | Program for International Student Assessment Young Adult Follow-up Study (PISA YAFS) 2016 Public Use File (PUF)
The PISA YAFS 2016 Public Use File (PUF) consists of data from the PISA YAFS 2016 sample. PISA YAFS was conducted in the United States in 2016 with a sample of young adults (at age 19) who had participated in PISA 2012 when they were in high school (at age 15). In PISA YAFS, participants took the Education and Skills Online (ESO) literacy and numeracy assessments, which are based on the Program for the International Assessment of Adult Competencies (PIAAC). The file contains individual-level data, including responses to the background questionnaire and the cognitive assessment. Statistical treatments were applied to the data to protect respondent confidentiality. For more details on the data, please refer to chapter 8 of the PISA YAFS 2016 Technical Report and User Guide (NCES 2021-020).
Released: 7/8/2021
NCES 2021019 | Program for International Student Assessment (PISA) 2018 Public Use File (PUF)
The PISA 2018 Public Use File (PUF) consists of data from the PISA 2018 sample. Statistical treatments were applied to the data to protect respondent confidentiality. The PUF can be accessed from the National Center for Education Statistics website at http://nces.ed.gov/surveys/pisa/datafiles.asp (see the loading sketch after this entry). For more details on the data, please refer to chapter 9 of the PISA 2018 Technical Report and User Guide (NCES 2021-011).
Released: 7/8/2021
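As a hedged illustration only: the Python sketch below loads a hypothetical CSV export of a PUF like this one and computes a weighted mean. The file name and column names (a final student weight and a first plausible value) are placeholders, and a proper analysis would combine all plausible values; the actual file layout and recommended methods are documented in the PISA 2018 Technical Report and User Guide (NCES 2021-011).

```python
# Minimal sketch, not NCES-provided code. File and column names are
# hypothetical placeholders; consult the PUF documentation for the
# actual layout, weights, and plausible-value procedures.
import pandas as pd

puf = pd.read_csv("pisa2018_puf.csv")   # hypothetical CSV export of the PUF

w = puf["W_FSTUWT"]    # assumed name for the final student weight
pv = puf["PV1READ"]    # assumed name for the first reading plausible value

# Weight the estimate rather than taking a raw mean; a single plausible
# value is used here only for brevity.
print(f"Weighted mean reading score (PV1): {(pv * w).sum() / w.sum():.1f}")
```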
NCES 2021020 | Technical Report and User Guide for the 2016 Program for International Student Assessment (PISA) Young Adult Follow-up Study
This technical report and user guide provides researchers with an overview of the design and implementation of PISA YAFS 2016, as well as information on how to access the PISA YAFS 2016 data.
Released: 7/8/2021
REL 2021095 | Examination of the Validity and Reliability of the Kansas Clinical Assessment Tool
Although national assessments for evaluating teacher candidates are available, some state education agencies and educator preparation programs have developed their own assessments. These locally developed assessments are based on observations of teaching and other artifacts such as lesson plans and student assignments. However, local assessment developers often lack information about the validity and reliability of data collected with their assessments. The Council for the Accreditation of Educator Preparation (CAEP) has provided guidance for demonstrating the validity and reliability of locally developed teacher candidate assessments, yet few educator preparation programs have the capacity to generate this evidence. The Regional Educational Laboratory Central partnered with educator preparation programs in Kansas to examine the validity and reliability of the Kansas Clinical Assessment Tool (K-CAT), a newly developed tool for assessing the performance of teacher candidates. The study was designed to align with CAEP guidance. The study found that cooperating teachers reported that the K-CAT accurately represented existing teaching performance standards (face validity). Two skilled raters found that the content of the K-CAT was mostly aligned to existing teaching performance standards (content validity). In addition, K-CAT scores for the same teacher candidate, provided by cooperating teachers and supervising faculty, were positively related (convergent validity). K-CAT indicator scores showed internal consistency, or correlations among related indicators, for standards and for the tool overall (reliability; see the sketch after this entry). K-CAT scores showed small relationships with teacher candidate scores on other measures of teaching performance (criterion-related validity).
Released: 7/7/2021
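The reliability and convergent-validity findings above invite a short worked example. The Python sketch below is a minimal illustration under invented data, not the study's code: it simulates indicator scores for hypothetical teacher candidates, computes Cronbach's alpha as one common internal-consistency statistic, and correlates two raters' totals as a stand-in for convergent validity.

```python
# Hedged sketch with simulated data; the K-CAT's actual indicators,
# scales, and analyses are described in the report, not here.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = teacher candidates, columns = rubric indicators."""
    k = scores.shape[1]                          # number of indicators
    item_vars = scores.var(axis=0, ddof=1)       # variance of each indicator
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
skill = rng.normal(3.0, 0.5, size=(40, 1))        # candidates' latent skill
items = skill + rng.normal(0, 0.3, size=(40, 6))  # 6 related indicators
print(f"alpha = {cronbach_alpha(items):.2f}")

# Convergent-validity stand-in: correlate cooperating-teacher totals with
# supervising-faculty totals for the same candidates.
teacher_total = items.sum(axis=1)
faculty_total = teacher_total + rng.normal(0, 1.0, size=40)
print(f"convergent r = {np.corrcoef(teacher_total, faculty_total)[0, 1]:.2f}")
```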
NCES 2021029 | 2012–2016 Program for International Student Assessment Young Adult Follow-up Study (PISA YAFS): How reading and mathematics performance at age 15 relate to literacy and numeracy skills and education, workforce, and life outcomes at age 19
This Research and Development report provides data on the literacy and numeracy performance of U.S. young adults at age 19 and examines the relationship between that performance and their earlier reading and mathematics proficiency in PISA 2012 at age 15. It also explores how other aspects of their lives at age 19 (such as their engagement in postsecondary education, participation in the workforce, attitudes, and vocational interests) are related to their proficiency at age 15.
Released: 6/15/2021
REL 2021067 | Early Childhood Data Use Assessment Tool
The Early Childhood Data Use Assessment Tool is designed to identify and improve data use skills among early childhood education (ECE) program staff so they can better use data to inform, plan, monitor, and make decisions for instruction and program improvement. Data use is critical in quality ECE programs but can be intimidating for some ECE program staff; this tool supports growth in their data use skills. The tool has three components: (1) a checklist to identify staff skills in using child assessment and administrative data, (2) a resource guide to identify professional development resources for improving data use skills, and (3) an action plan template to support planning for the development and achievement of data use goals. The developers intend results from the tool to support instruction and program improvement through increased and structured use of data.
Released: 3/2/2021
REL 2021057 | Tool for Assessing the Health of Research-Practice Partnerships
Education research-practice partnerships (RPPs) offer structures and processes for bridging research and practice and, ultimately, driving improvements in K-12 outcomes. To date, there is limited literature on how to assess the effectiveness of RPPs. Aligned to the most commonly cited framework for assessing RPPs, Assessing Research-Practice Partnerships: Five Dimensions of Effectiveness, this two-part tool offers guidance on how researchers and practitioners may prioritize the five dimensions of RPP effectiveness and their related indicators. The tool also provides an interview protocol that RPP evaluators can use to assess the extent to which the RPP demonstrates evidence of the prioritized dimensions and their indicators of effectiveness.
Released: 2/2/2021
NCES 2021021 | TIMSS 2019 U.S. Highlights Web Report
The Trends in International Mathematics and Science Study (TIMSS) 2019 is the seventh administration of this international comparative study since it was first administered in 1995. TIMSS is administered every 4 years and is used to compare the mathematics and science knowledge and skills of 4th- and 8th-graders over time. TIMSS is designed to align broadly with mathematics and science curricula in the participating countries; the results therefore suggest the degree to which students have learned mathematics and science concepts and skills likely to have been taught in school. In 2019, 64 education systems participated in TIMSS at the 4th grade and 46 education systems at the 8th grade. This web report focuses on the mathematics and science achievement of U.S. students relative to their peers in other education systems in 2019. Changes in achievement over the last 24 years, focusing on changes since 2015 and since 1995, are also presented for the United States and several other participating education systems. The report also describes achievement gaps within the United States and other education systems between top and bottom performers, as well as among different student subgroups. In addition to numerical scale results, TIMSS reports the percentage of students reaching international benchmarks. The TIMSS international benchmarks provide a concrete way to understand what students know and can do, as each level is associated with specific types of knowledge and skills.
Released: 12/8/2020
REL 2021041 | The Association between Teachers’ Use of Formative Assessment Practices and Students’ Use of Self-Regulated Learning Strategies
Three Arizona school districts surveyed more than 1,200 teachers and more than 24,000 students in grades 3–12 in spring 2019 to better understand the relationship between teachers' use of formative assessment practices and students' use of self-regulated learning strategies and to help shape related teacher development efforts. Descriptive results indicated that students regularly track their own progress but less frequently solicit feedback from teachers or peers. Teachers, for their part, regularly give students feedback but less frequently provide occasions for students to give feedback to one another. There was only a small, positive association between the number of formative assessment practices teachers used and the average number of self-regulated learning strategies among their students (illustrated in the sketch after this entry). The correlation was stronger in elementary classrooms and in STEM (science, technology, engineering, and mathematics) classrooms than in others. Some of teachers' least-used formative assessment practices, facilitating student peer feedback and student self-assessment, had the strongest positive associations with the average number of self-regulated learning strategies their students used: the more that teachers reported using these particular practices, the more self-regulated learning strategies their students reported using.
Released: 11/24/2020
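As a hedged illustration of the small, positive association reported above, the Python sketch below simulates a per-teacher count of formative assessment practices and the average number of self-regulated learning strategies among that teacher's students, then computes a Pearson correlation. The data, effect size, and variable names are invented, not the study's.

```python
# Simulated data only; the study's survey measures are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 200
practices = rng.integers(0, 10, size=n_teachers)    # practices used, 0-9
strategies = 2.0 + 0.15 * practices + rng.normal(0, 2.0, size=n_teachers)

r = np.corrcoef(practices, strategies)[0, 1]
print(f"Pearson r = {r:.2f} (small and positive by construction)")
```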
REL 2021048 | Creating and Using Performance Assessments: An Online Course for Practitioners
This self-paced, online course provides educators with detailed information on performance assessment. Through five modules, practitioners, instructional leaders, and administrators will learn foundational concepts of assessment literacy and how to develop, score, and use performance assessments. They will also learn about the role of performance assessment within a comprehensive assessment system. Each module takes approximately 30 minutes to complete, with additional time needed for related tasks, such as creating a performance assessment and rubric. Participants receive a certificate of completion upon finishing the course.
Released: 11/2/2020
NCES 2020134 | Process Data File of the 2017 NAEP Mathematics Assessment: Grade 8 Released Items
As schools in the United States increasingly use digital technology in the classroom to teach and assess students, the National Assessment of Educational Progress (NAEP) has moved to align with these practices. NAEP's transition from paper-based to digitally based administration provides an engaging assessment experience for students and aligns with the delivery mode of many other large-scale assessments. Importantly, this transition to digitally based assessment (DBA) also allows NAEP to use tools available in digital platforms to measure content in new ways; to use assistive technology to provide enhanced accommodations for students with special needs; and to collect new types of data that deepen our understanding of what students know and can do, including how they engage with new technologies to approach problem solving.

In 2017, the NAEP mathematics assessment was administered for the first time as a DBA at grades 4 and 8. The digital platform allowed for the collection of new data within the testing system, including information on how students used onscreen tools to develop their responses to the assessment questions. These new data are called response process data. To further enrich our understanding of what students know and can do in the digital environment, response process data from the 2017 NAEP grade 8 mathematics assessment are now available for secondary analysis.

NAEP DBAs offer far more flexibility in meeting the needs of different students. The DBAs include Universal Design Elements, or built-in features that make it possible for more students to participate without special accommodation sessions. Onscreen tools are also available for students to use in their problem solving. The goal is for all students to have a seamless assessment administration, regardless of their ability.
Released: 10/28/2020
REL 2020032 | Guide to Conducting a Needs Assessment for American Indian Students
American Indian communities often bring a deep sense of connection, relationships, and knowledge to their children's education. However, education research has repeatedly shown that American Indian students trail their peers in achievement, attendance, and postsecondary readiness. Regional Educational Laboratory Central, the North Dakota Department of Public Instruction, and educators across North Dakota collaboratively developed needs assessment surveys for the state. These surveys are a primary focus of this tool, which can be adapted for use in other state and local education agencies across the nation. The surveys can be used to identify and monitor the needs and successes of schools serving American Indian students. Survey development involved rigorous processes to ensure that the surveys were technically sound and culturally appropriate. The surveys provide a means to evaluate different characteristics of schools, and the resulting data can guide focused supports for American Indian students. For example, state and local education agency staff may use information from the surveys to identify areas of need and then provide additional resources, such as professional development activities, curricular materials, instructional strategies, and research articles, to schools. As education agencies begin to implement strategies and programs, the surveys can be readministered to monitor progress toward goals. The tool also provides guidance on administering the surveys and on analyzing and interpreting the resulting data.
Released: 9/17/2020
REL 2020039 | The Reliability and Consequential Validity of Two Teacher-Administered Student Mathematics Diagnostic Assessments
Several school districts in Georgia currently use two teacher-administered diagnostic assessments of student mathematical knowledge as part of their multi-tiered system of support in grades K-8. These assessments are the Global Strategy Stage (GloSS; New Zealand Ministry of Education, 2012) and the Individual Knowledge Assessment of Number (IKAN; New Zealand Ministry of Education, 2011). However, little is known about the inter-assessor reliability and consequential validity of these assessments. Inter-assessor reliability indicates whether two teachers obtain the same score for a student after administering the test on two occasions (a simple way to quantify it is sketched after this entry), and consequential validity explores perceptions of the value of using the assessments. Decisions about using diagnostic assessments across the state should be based on psychometric data from an external source rather than on occasional testimonials from the field. Districts not currently using the GloSS and IKAN have indicated that they would consider using them to assess students' current level of mathematical understanding and to determine appropriate levels of instruction and intervention, if the assessments were shown to be reliable and valid. This study found that the inter-assessor reliability of the GloSS measure and the IKAN Counting Interview is adequate. The inter-assessor reliability of the IKAN Written Assessment (one of the two components of the IKAN) is inadequate, and additional attention must be directed toward improving training for this measure so that reliability can be established. Teachers indicated that they found the data from the GloSS and IKAN assessments more useful for guiding decisions about how to provide intervention than the screening data currently in use. Although teachers interviewed in the study's focus groups expressed strong support for using both assessments, they reported in the study survey that the GloSS is more useful than the IKAN because it addresses students' solution strategies, which most other mathematics measures do not assess. Teachers did express some criticisms of both assessments; for example, they felt the IKAN Written Assessment should be untimed and that the GloSS should include familiar vocabulary.
Released: 9/14/2020
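A minimal sketch of how inter-assessor reliability might be quantified for a leveled measure like the GloSS, assuming two assessors assign stage scores to the same students: exact agreement plus a score correlation. The stage range, data, and choice of statistics are illustrative assumptions, not the study's method.

```python
# Simulated stage scores for two assessors rating the same 60 students.
import numpy as np

rng = np.random.default_rng(2)
true_stage = rng.integers(1, 9, size=60)        # hypothetical stages 1-8
rater_a = np.clip(true_stage + rng.integers(-1, 2, size=60), 1, 8)
rater_b = np.clip(true_stage + rng.integers(-1, 2, size=60), 1, 8)

agreement = (rater_a == rater_b).mean()         # share of identical scores
r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"exact agreement = {agreement:.0%}, r = {r:.2f}")
```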
REL 2020030 | Identifying North Carolina Students at Risk of Scoring below Proficient in Reading at the End of Grade 3
This study examines how students' performance on North Carolina's assessments taken from kindergarten to the beginning of grade 3 predicts reading proficiency at the end of grade 3. The study used longitudinal student-level achievement data for 2014/15–2017/18. The sample consisted of students in grade 3 who took the 2017/18 grade 3 end-of-grade assessment in reading for the first time and had reading assessment data from kindergarten in 2014/15. The analyses modeled associations between student performance on grade-level interim assessment measures and proficiency on the grade 3 end-of-grade assessment in reading using classification and regression tree analyses (a hedged illustration follows this entry). The results indicated that fewer than 80 percent of students who failed the grade 3 state assessment were correctly identified as being at risk. The only exception to this finding was when a test better aligned to the end-of-grade state assessment was used as a predictor. These results suggest that more information is needed to use test score data from grades K–2 to reliably identify who is at risk of not being proficient on the grade 3 end-of-grade assessment. Educators may want to consider supplementing screening and progress monitoring assessments with informal, curriculum-based assessments that measure student vocabulary, syntax, and listening comprehension skills, because research has identified these skills as important predictors of reading comprehension.
Released: 8/24/2020
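For readers unfamiliar with classification and regression tree analysis, the Python sketch below fits a small classification tree on simulated screening scores and computes the share of non-proficient students the tree flags as at risk, which corresponds to the "correctly identified" rate discussed above. The use of scikit-learn, the simulated data, and the tree settings are all assumptions for illustration; the study's actual models and data are described in the report.

```python
# Hedged sketch with simulated data, not the study's analysis.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 3))                  # three hypothetical K-2 scores
latent = X @ np.array([0.8, 0.5, 0.3]) + rng.normal(0, 1.0, size=n)
proficient = (latent > 0).astype(int)        # 1 = proficient at end of grade 3

X_tr, X_te, y_tr, y_te = train_test_split(X, proficient, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
pred = tree.predict(X_te)

# Sensitivity: of students who were actually non-proficient, what share
# did the tree flag as at risk (predicted non-proficient)?
non_proficient = (y_te == 0)
sensitivity = ((pred == 0) & non_proficient).sum() / non_proficient.sum()
print(f"share of non-proficient students correctly flagged: {sensitivity:.0%}")
```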
NCES 2020047 | U.S. PIAAC Skills Map: State and County Indicators of Adult Literacy and Numeracy
The U.S. PIAAC Skills Map allows users to access estimates of adult literacy and numeracy proficiency in all U.S. states and counties through heat maps and summary card displays. It also includes state- and county-level estimates for six age groups and four education groups, along with estimates of the precision of its indicators, and it facilitates statistical comparisons among states and counties (see the sketch after this entry). The user's guide explains the reporting practices and statistical methods needed to use these state and county estimates accurately, and it provides examples of common uses.
Released: 4/14/2020
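As a hedged example of the statistical comparisons the Skills Map facilitates, the sketch below tests whether two county estimates differ, given each estimate's standard error. The figures are invented placeholders, not Skills Map values.

```python
# Two-estimate comparison using invented figures for illustration.
import math

est_a, se_a = 268.0, 3.1    # county A: mean literacy score, standard error
est_b, se_b = 259.5, 2.8    # county B

z = (est_a - est_b) / math.sqrt(se_a**2 + se_b**2)
print(f"z = {z:.2f}; |z| > 1.96 suggests a difference at the .05 level")
```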