Search Results: (1-15 of 191 records)
|NCES 2022065||International Computer and Information Literacy Study (ICILS): U.S. ICILS 2018 Technical Report and User’s Guide
The U.S. ICILS 2018 Technical Report and User’s Guide provides an overview of the design and implementation of ICILS 2018 in the United States.
|NCES 2022066||International Computer and Information Literacy Study (ICILS): U.S. ICILS 2018 restricted-use data files and documentation
The ICILS 2018 U.S. restricted-use student and school data files include U.S.-specific variables that are not part of the ICILS 2018 U.S. public-use data files or of the U.S. data files in the IEA’s ICILS 2018 international database. They include NCES school IDs, which facilitate merging with the Common Core of Data (CCD) for public schools and the Private School Universe Survey (PSS) for private schools. Because they are add-on files that contain neither weight variables nor replicate weights, they must be merged with the corresponding U.S. data files in the international database before any analysis can be conducted. The U.S. data files in the international database can be downloaded from the IEA Data Repository at https://www.iea.nl/data-tools/repository.
|NCES 2022067||International Computer and Information Literacy Study (ICILS): U.S. ICILS 2018 public-use data files and documentation
The ICILS 2018 U.S. public-use student, teacher, and school data files include U.S.-specific variables that are not part of the U.S. data files in the ICILS 2018 international database. Because they are add-on files that contain neither weight variables nor replicate weights, they must be merged with the U.S. data files in the international database before any analysis can be conducted. The U.S. data files in the international database can be downloaded from the IEA Data Repository at https://www.iea.nl/data-tools/repository.
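The add-on merge described above can be sketched with pandas. This is a generic illustration, not the documented workflow: the files are simulated with small inline DataFrames, and the variable names (IDSTUD for the student ID, TOTWGTS for the final student weight, X_REGION for a U.S.-specific variable) are hypothetical placeholders, not the actual codebook names.

```python
import pandas as pd

# U.S. student file from the international database: this file
# carries the sampling weight, so it anchors the merge.
intl = pd.DataFrame({
    "IDSTUD": [1001, 1002, 1003],
    "TOTWGTS": [52.1, 47.8, 50.3],   # hypothetical final student weight
    "SCORE": [512, 498, 530],
})

# U.S. public-use add-on file: U.S.-specific variables only,
# no weights or replicate weights.
addon = pd.DataFrame({
    "IDSTUD": [1001, 1002, 1003],
    "X_REGION": ["Northeast", "South", "West"],  # hypothetical variable
})

# Merge on the student ID before any analysis, so every record
# keeps its weight from the international file; validate catches
# accidental duplicate IDs in either file.
merged = intl.merge(addon, on="IDSTUD", how="left", validate="one_to_one")
```

Weighted estimates can then be computed from `merged` exactly as they would be from the international file alone, with the U.S.-specific variables available for subsetting.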
|REL 2021106||The reliability of shorter assessments in New Jersey for group-level inferences
Education policymakers must balance the reliability of assessments that measure academic knowledge and skills against the burden those assessments place on students, teachers, and schools. In 2019, New Jersey began using the New Jersey Student Learning Assessments (NJSLA), shorter assessments based on the Partnership for Assessment of Readiness for College and Careers (PARCC). Regional Educational Laboratory researchers examined the reliability of NJSLA results by comparing results at the school, test, and subgroup levels from 2016 to 2019. The findings indicated a high degree of reliability across most measures examined; during the transition to the NJSLA, reliability did not decrease for any of the test results reported by the New Jersey Department of Education except those for the Algebra 2 test. The instability of the Algebra 2 results was most likely attributable not to changes in the assessment but to changes in the student population required to take the test after the state’s testing requirements changed.
|NCES 2021019||Program for the International Student Assessment (PISA) 2018 Public Use File (PUF)
The PISA 2018 Public Use File (PUF) consists of data from the PISA 2018 sample. Statistical treatments were applied to the data to protect respondent confidentiality. The PUF can be accessed from the National Center for Education Statistics website at http://nces.ed.gov/surveys/pisa/datafiles.asp.
For more details on the data, please refer to chapter 9 of the PISA 2018 Technical Report and User Guide (NCES 2021-011).
|NCES 2021020||Technical Report and User Guide for the 2016 Program for International Student Assessment (PISA) Young Adult Follow-up Study
This technical report and user guide is designed to provide researchers with an overview of the design and implementation of PISA YAFS 2016, as well as with information on how to access the PISA YAFS 2016 data.
|NCES 2021022||Program for the International Student Assessment Young Adult Follow-up Study (PISA YAFS) 2016 Public Use File (PUF)
The PISA YAFS 2016 Public Use File (PUF) consists of data from the PISA YAFS 2016 sample. PISA YAFS was conducted in the United States in 2016 with a sample of young adults (at age 19) who participated in PISA 2012 when they were in high school (at age 15). In PISA YAFS, participants took the Education and Skills Online (ESO) literacy and numeracy assessments, which are based on the Program for the International Assessment of Adult Competencies (PIAAC). The file contains individual-level data, including responses to the background questionnaire and the cognitive assessments. Statistical treatments were applied to the data to protect respondent confidentiality.
For more details on the data, please refer to chapter 8 of the PISA YAFS 2016 Technical Report and User Guide (NCES 2021-020).
|NCES 2021047||Program for the International Student Assessment (PISA) 2018 Restricted-Use Files (RUF)
The PISA 2018 Restricted-Use File (RUF) consists of restricted-use data from PISA 2018 for the United States. The release includes the data file, a codebook, instructions for merging with the U.S. PISA 2018 public-use dataset (NCES 2021-019), and a crosswalk to assist in merging with other public datasets, such as the Common Core of Data (CCD) and the Private School Universe Survey (PSS). Because these data files can be used to identify respondent schools, a restricted-use license must be obtained before access to the data is granted; see https://nces.ed.gov/surveys/pisa/datafiles.asp for details.
For more details on the data, please refer to chapter 9 of the PISA 2018 Technical Report and User Guide (NCES 2021-011).
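A crosswalk merge like the one described for the RUF can be sketched with pandas. Everything here is illustrative: the crosswalk, the CCD extract, and the column names (SCHOOLID, NCESSCH, LOCALE) are hypothetical stand-ins, not the actual file layouts.

```python
import pandas as pd

# Hypothetical crosswalk: study school ID -> NCES school ID.
crosswalk = pd.DataFrame({
    "SCHOOLID": [1, 2, 3],
    "NCESSCH": ["010000500870", "020000100208", "040000200277"],
})

# Hypothetical extract of CCD variables keyed by NCES school ID.
ccd = pd.DataFrame({
    "NCESSCH": ["010000500870", "020000100208", "040000200277"],
    "LOCALE": ["City", "Rural", "Suburb"],
})

# Hypothetical school-level study data.
pisa = pd.DataFrame({"SCHOOLID": [1, 2, 3], "MEAN_READ": [505, 478, 512]})

# Two-step merge: study data -> crosswalk -> CCD, keeping all
# study schools even if a CCD match is missing (how="left").
linked = (pisa.merge(crosswalk, on="SCHOOLID", how="left")
              .merge(ccd, on="NCESSCH", how="left"))
```

The left joins preserve the analytic sample; schools without a CCD match simply carry missing values in the appended columns.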
|REL 2021095||Examination of the Validity and Reliability of the Kansas Clinical Assessment Tool
Although national assessments for evaluating teacher candidates are available, some state education agencies and education preparation programs have developed their own assessments. These locally developed assessments are based on observations of teaching and other artifacts such as lesson plans and student assignments. However, local assessment developers often lack information about the validity and reliability of data collected with their assessments. The Council for the Accreditation of Educator Preparation (CAEP) has provided guidance for demonstrating the validity and reliability of locally developed teacher candidate assessments, yet few educator preparation programs have the capacity to generate this evidence.
The Regional Educational Laboratory Central partnered with educator preparation programs in Kansas to examine the validity and reliability of the Kansas Clinical Assessment Tool (K-CAT), a newly developed tool for assessing the performance of teacher candidates. The study was designed to align with CAEP guidance. The study found that cooperating teachers reported that the K-CAT accurately represented existing teaching performance standards (face validity). Two skilled raters found that the content of the K-CAT was mostly aligned to existing teaching performance standards (content validity). In addition, K-CAT scores for the same teacher candidate, provided by cooperating teachers and supervising faculty, were positively related (convergent validity). K-CAT indicator scores showed internal consistency, or correlations among related indicators, for standards and for the tool overall (reliability). K-CAT scores showed small relationships with teacher candidate scores on other measures of teaching performance (criterion-related validity).
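Internal consistency of the kind reported for the K-CAT is commonly summarized with Cronbach's alpha. The sketch below shows the standard formula on made-up rating data; it is not the study's actual procedure or data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a 2-D array: rows = candidates, columns = indicators."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each indicator
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-4 ratings for five candidates on four indicators.
ratings = np.array([
    [3, 3, 4, 3],
    [2, 2, 2, 3],
    [4, 4, 3, 4],
    [1, 2, 1, 2],
    [3, 4, 3, 3],
])
alpha = cronbach_alpha(ratings)
```

Values near 1 indicate that related indicators move together across candidates, which is what "internal consistency" summarizes.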
|NCES 2021029||2012–2016 Program for International Student Assessment Young Adult Follow-up Study (PISA YAFS): How reading and mathematics performance at age 15 relate to literacy and numeracy skills and education, workforce, and life outcomes at age 19
This Research and Development report provides data on the literacy and numeracy performance of U.S. young adults at age 19, as well as examines the relationship between that performance and their earlier reading and mathematics proficiency in PISA 2012 at age 15. It also explores how other aspects of their lives at age 19—such as their engagement in postsecondary education, participation in the workforce, attitudes, and vocational interests—are related to their proficiency at age 15.
|REL 2021067||Early Childhood Data Use Assessment Tool
The Early Childhood Data Use Assessment Tool is designed to identify and improve data use skills among early childhood education (ECE) program staff so they can better use data to inform, plan, monitor, and make decisions for instruction and program improvement. Data use is critical in quality ECE programs but can be intimidating for some ECE program staff, and this tool supports growth in their data use skills. The tool has three components: (1) a checklist to identify staff skills in using child assessment and administrative data, (2) a resource guide to identify professional development resources for improving data use skills, and (3) an action plan template to support planning for the development and achievement of data use goals. The developers intend results from the tool to support instruction and program improvement through increased and more structured use of data.
|REL 2021057||Tool for Assessing the Health of Research-Practice Partnerships
Education research-practice partnerships (RPPs) offer structures and processes for bridging research and practice and ultimately driving improvements in K-12 outcomes. To date, there is limited literature on how to assess the effectiveness of RPPs. Aligned to the most commonly cited framework for assessing RPPs, Assessing Research-Practice Partnerships: Five Dimensions of Effectiveness, this two-part tool offers guidance on how researchers and practitioners may prioritize the five dimensions of RPP effectiveness and their related indicators. The tool also provides an interview protocol that RPP evaluators can use to assess the extent to which the RPP demonstrates evidence of the prioritized dimensions and their indicators of effectiveness.
|NCES 2021021||TIMSS 2019 U.S. Highlights Web Report
The Trends in International Mathematics and Science Study (TIMSS) 2019 is the seventh administration of this international comparative study since it was first administered in 1995. TIMSS is administered every 4 years and is used to compare the mathematics and science knowledge and skills of 4th- and 8th-graders over time. TIMSS is designed to align broadly with mathematics and science curricula in the participating countries. The results, therefore, suggest the degree to which students have learned mathematics and science concepts and skills likely to have been taught in school. In 2019, 64 education systems participated in TIMSS at the 4th grade and 46 education systems participated at the 8th grade.
The focus of this web report is on the mathematics and science achievement of U.S. students relative to their peers in other education systems in 2019. Changes in achievement over the last 24 years, focusing on changes since 2015 and 1995, are also presented for the U.S. and several participating education systems. In addition, this report describes achievement gaps within the United States and other education systems between top and bottom performers, as well as among different student subgroups.
In addition to numerical scale results, TIMSS also reports the percentage of students reaching international benchmarks. The TIMSS international benchmarks provide a way to understand what students know and can do in a concrete way, as each level is associated with specific types of knowledge and skills.
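A "percentage reaching a benchmark" is simply the (weighted) share of students scoring at or above each cut score on the TIMSS scale: 400 (Low), 475 (Intermediate), 550 (High), and 625 (Advanced). The sketch below uses made-up scores and uniform weights for illustration; actual estimates use the study's sampling weights and plausible values.

```python
import numpy as np

# TIMSS international benchmark cut scores on the achievement scale.
benchmarks = {"Low": 400, "Intermediate": 475, "High": 550, "Advanced": 625}

# Hypothetical student scale scores and weights (uniform here;
# real analyses apply the survey's sampling weights).
scores = np.array([380, 450, 480, 520, 560, 600, 640, 700])
weights = np.ones_like(scores, dtype=float)

# Weighted percentage of students at or above each cut score.
# Benchmarks are cumulative: a student at the Advanced level also
# counts toward High, Intermediate, and Low.
pct = {
    name: 100 * weights[scores >= cut].sum() / weights.sum()
    for name, cut in benchmarks.items()
}
```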
|REL 2021041||The Association between Teachers’ Use of Formative Assessment Practices and Students’ Use of Self-Regulated Learning Strategies
In spring 2019, three Arizona school districts surveyed more than 1,200 teachers and more than 24,000 students in grades 3–12 to better understand the relationship between teachers’ use of formative assessment practices and students’ use of self-regulated learning strategies and to help shape related teacher development efforts. Descriptive results indicated that students regularly track their own progress but less frequently solicit feedback from teachers or peers. Teachers, for their part, regularly give students feedback but less frequently provide occasions for students to give feedback to one another. There was only a small, positive association between the number of formative assessment practices teachers used and the average number of self-regulated learning strategies among their students. The correlation was stronger in elementary classrooms and in STEM (science, technology, engineering, and mathematics) classrooms than in others. Some of teachers’ least-used formative assessment practices, facilitating student peer feedback and student self-assessment, had the strongest positive associations with the average number of self-regulated learning strategies their students used. The more that teachers reported using these particular practices, the more self-regulated learning strategies their students reported using.
|REL 2021048||Creating and Using Performance Assessments: An Online Course for Practitioners
This self-paced, online course provides educators with detailed information on performance assessment. Through five modules, practitioners, instructional leaders, and administrators will learn foundational concepts of assessment literacy and how to develop, score, and use performance assessments. They will also learn about the role of performance assessment within a comprehensive assessment system. Each module will take approximately 30 minutes to complete, with additional time needed to complete the related tasks, such as creating a performance assessment and rubric. Participants will be provided with a certificate of completion upon finishing the course.