Search Results: (1-15 of 40 records)
Pub Number | Title | Date
NCES 2023076 | NAEP High School Transcript Study 2019 Restricted-Use (RU) Datasets | 11/30/2023
The restricted-use datasets for the National Assessment of Educational Progress (NAEP) High School Transcript Study (HSTS) 2019 include ASCII-formatted data files, record layouts, SAS- and SPSS-formatted databases, codebooks, and SAS and SPSS programs. The study analyzes transcripts from a national sample of U.S. public and private school graduates who also took the 2019 NAEP 12th-grade assessments in mathematics and science. It provides information about coursetaking patterns disaggregated by demographic characteristics and about the relationship between NAEP scale scores and various graduate characteristics. See the NAEP HSTS 2019 User’s Guide and Technical Report (NCES 2023077).
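Fixed-width ASCII files like these are read by mapping each variable to the byte positions given in the record layout; the distributed SAS and SPSS programs do this for you. A minimal Python sketch, in which the file name, column positions, and variable names are hypothetical (the real values come from the record layouts shipped with the datasets):

```python
# Minimal sketch of loading a fixed-width ASCII file using a record layout.
# File name, byte positions, and variable names below are hypothetical;
# the actual values come from the record layouts shipped with the datasets.
import pandas as pd

# (start, end) byte positions for each variable, taken from the record layout
colspecs = [(0, 10), (10, 12), (12, 18)]      # hypothetical positions
names = ["STUD_ID", "GRADE", "CREDITS"]       # hypothetical variable names

df = pd.read_fwf("hsts2019_student.dat", colspecs=colspecs, names=names)
print(df.head())
```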
NCES 2023077 | 2019 NAEP High School Transcript Study (HSTS) User’s Guide and Technical Report | 11/30/2023
The 2019 National Assessment of Educational Progress (NAEP) High School Transcript Study (HSTS) analyzes transcripts from a national sample of U.S. public and private school graduates who also took the 2019 NAEP 12th-grade mathematics and science assessments. It provides information about coursetaking patterns and the relationship between NAEP scale scores and various graduate characteristics. The User’s Guide and Technical Report documents the procedures used to collect and summarize the data from the 2019 study. Chapters describe the sampling of schools and graduates, data collection, data processing, weighting, variance estimation procedures, the 2019 HSTS data files and codebooks, and nonresponse bias analysis. The appendices contain the data collection and documentation forms; the associated NAEP 2019 questionnaires; a description of the School Courses for the Exchange of Data (SCED), which was used to code the courses on the collected transcripts, plus a complete listing of SCED codes; codebooks for the 2019 data files; a discussion of the linking methodology used to estimate error variance; and a glossary.
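The weighting and variance estimation chapters concern estimates from a complex sample, for which NAEP documents jackknife replicate-weight methods. A minimal sketch of that general approach, with hypothetical column names and replicate count (the actual weight variables are documented in the guide):

```python
# Sketch of jackknife variance estimation with replicate weights, the general
# approach documented in NAEP technical reports. Column names are hypothetical;
# consult the User's Guide for the actual weight variables and replicate count.
import numpy as np

def weighted_mean(values, weights):
    return np.average(values, weights=weights)

def jackknife_variance(df, value_col, full_wt_col, rep_wt_cols):
    """Sum of squared deviations of replicate estimates from the full-sample estimate."""
    full = weighted_mean(df[value_col], df[full_wt_col])
    reps = [weighted_mean(df[value_col], df[w]) for w in rep_wt_cols]
    return sum((r - full) ** 2 for r in reps)

# Hypothetical usage, assuming 62 replicate weights named SRWT01..SRWT62:
# var = jackknife_variance(data, "GPA", "STUWGT", [f"SRWT{i:02d}" for i in range(1, 63)])
```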
NCES 2023032 | Equity in Education Dashboard | 9/14/2023
The Equity in Education Dashboard website contains key findings and trends on educational equity in the United States from a variety of data sources.
NCES 2023013 | User’s Manual for the MGLS:2017 Data File, Restricted-Use Version | 8/16/2023
This manual provides guidance and documentation for users of the Middle Grades Longitudinal Study of 2017–18 (MGLS:2017) restricted-use school and student data files (NCES 2023-131). An overview of MGLS:2017 is followed by chapters on the study data collection instruments and methods; direct and indirect student assessment data; sample design and weights; response rates; data preparation; data file content, including the composite variables; and the structure of the data file. Appendices include a psychometric report, a guide to scales, field test reports, and school and student file variable listings.
REL 2023001 | Stabilizing subgroup proficiency results to improve identification of low-performing schools | 2/27/2023
The Every Student Succeeds Act (ESSA) requires states to identify schools with low-performing student subgroups for Targeted Support and Improvement (TSI) or Additional Targeted Support and Improvement (ATSI). Random differences between students’ true abilities and their test scores, also called measurement error, reduce the statistical reliability of the performance measures used to identify schools for these categorizations. Measurement error introduces a risk that the identified schools are unlucky rather than truly low performing. Using data provided by the Pennsylvania Department of Education (PDE), the study team used Bayesian hierarchical modeling to improve the reliability of subgroup proficiency measures, allowing PDE to target the schools and students that most need additional support. PDE plans to incorporate stabilization as a “safe harbor” alternative in its 2022 accountability calculations. The study also shows that Bayesian stabilization produces reliable results for subgroups as small as 10 students—suggesting that states could choose to reduce minimum counts used in subgroup calculations (typically now around 20 students), promoting accountability for all subgroups without increasing random error. Findings could be relevant to states across the country, all of which face the same need to identify schools for TSI and ATSI, and the same tension between accountability and reliability, which Bayesian stabilization could help to resolve.
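The report's stabilization uses a full Bayesian hierarchical model; as a simplified illustration of the underlying shrinkage idea, an empirical Bayes analogue blends each subgroup's observed proficiency rate with the statewide mean in proportion to its reliability. This is a sketch of that mechanic, not the study's model, and the prior mean and between-school variance below are hypothetical inputs:

```python
# Simplified empirical Bayes analogue of the stabilization idea: shrink each
# subgroup's observed proficiency rate toward the statewide mean, with more
# shrinkage for smaller (noisier) subgroups. The report's actual model is a
# full Bayesian hierarchical model; this only illustrates the mechanics.
import numpy as np

def stabilize(p_obs, n, prior_mean, between_var):
    """Precision-weighted blend of observed rate and prior mean."""
    p_obs = np.asarray(p_obs, dtype=float)
    n = np.asarray(n, dtype=float)
    sampling_var = p_obs * (1 - p_obs) / n               # binomial sampling variance
    shrink = between_var / (between_var + sampling_var)  # reliability weight in [0, 1]
    return shrink * p_obs + (1 - shrink) * prior_mean

# A subgroup of 10 students is pulled strongly toward the statewide mean;
# a subgroup of 200 barely moves (prior_mean and between_var are hypothetical).
print(stabilize([0.20, 0.20], [10, 200], prior_mean=0.45, between_var=0.01))
```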
REL 2023146 | Indicators of School Performance in Texas | 12/5/2022
The School Improvement Division of the Texas Education Agency (TEA) identifies, monitors, and supports low-performing schools. To identify them, TEA assigns annual academic accountability ratings to its districts and schools, but these ratings are issued only once per year and are vulnerable to disruptions in the assessment system. Schools that receive low accountability ratings do not meet accountability expectations and are considered low-performing.
NCES 2022028 | 2019 NAEP High School Transcript Study | 3/16/2022
The 2019 NAEP High School Transcript Study (HSTS) describes the coursetaking patterns and academic performance of graduates from a national sample of U.S. public and private schools who also took the 2019 NAEP twelfth-grade mathematics and science assessments. This report uses data from the 1990, 2000, 2009, and 2019 NAEP HSTS for coursetaking results and from 2005, 2009, and 2019 for comparisons to NAEP. The analysis is based on graduates’ transcripts and their NAEP twelfth-grade mathematics and science assessment results. HSTS shows trends from 1990, 2000, 2009, and 2019 in grade point averages, course credits earned, curriculum levels, and various coursetaking patterns. The 2019 HSTS uses a new course classification system, the School Courses for the Exchange of Data (SCED), to provide a more detailed breakdown of cross-disciplinary coursetaking programs such as Career and Technical Education (CTE) and Science, Technology, Engineering, and Mathematics (STEM). The study also compares graduates’ average NAEP scale scores from the twelfth-grade mathematics and science assessments with the academic achievement reported in their transcripts. The linkage of the assessments to HSTS allows school leaders, policy makers, and researchers to analyze student performance by a rich set of HSTS and NAEP contextual factors.
NCES 2021077 | 2020 Long-Term Trend Reading and Mathematics Assessment Results at Age 9 and Age 13 | 10/14/2021
This report presents the results of the National Assessment of Educational Progress (NAEP) long-term trend assessments in reading and mathematics administered during the 2019–20 school year to 9- and 13-year-old students. Long-term trend assessments were first administered in the early 1970s; results are available for 13 reading assessments dating back to 1971 and 12 mathematics assessments dating back to 1973. The report provides trend results in terms of average scale scores, selected percentiles, and five performance levels. Item maps for each age group illustrate skills demonstrated by students when responding to assessment questions. Scale score results are included for students by selected background characteristics (e.g., race/ethnicity, gender, and grade attended). Overall, the 2020 average scores in reading and mathematics for 13-year-olds were higher than those on the earliest assessments but have declined since 2012. Scores for the lowest-performing students (at the 10th percentile) decreased from 2012 at both ages and in both subjects.
REL 2021226 | Identifying Students At Risk Using Prior Performance Versus a Machine Learning Algorithm | 9/28/2021
This report provides information for administrators in local education agencies who are considering early warning systems to identify at-risk students. Districts use early warning systems to target resources to the most at-risk students and to intervene before students drop out. Schools want an early warning system that accurately identifies the students who need support, so that available resources are put to the best use. The report compares the accuracy of simple flags based on prior academic problems in school (a prior-performance early warning system) with an algorithm that uses a range of in-school and out-of-school data to estimate each student’s risk of each academic problem in each quarter. Schools can apply one or more risk-score cutoffs to the algorithm’s output to create low- and high-risk groups. This study compares the prior-performance early warning system with two cutoff options, sketched below: a cutoff that identifies the same percentage of students as the prior-performance system, and a cutoff that identifies the 10 percent of students most at risk. The study finds that the prior-performance early warning system and the algorithm with the same-percentage cutoff are similarly accurate. Both approaches successfully identify most of the students who ultimately are chronically absent, have a low grade point average, or fail a course. In contrast, the algorithm with the 10-percent cutoff is good at targeting the students most likely to experience an academic problem; this approach has the advantage in predicting suspensions, which are rarer and harder to predict than the other outcomes. Both the prior-performance flags and the algorithm are less accurate when predicting outcomes for students who are Black. The findings suggest clear tradeoffs between the options. The prior-performance early warning system is just as accurate as the algorithm for some purposes and is cheaper and easier to set up, but it does not provide fine-grained information that could be used to identify the students at greatest risk. The algorithm can distinguish degrees of risk among students, enabling a district to set cutoffs that vary with the prevalence of different outcomes, the harms of over-identifying versus under-identifying at-risk students, and the resources available to support interventions.
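A minimal sketch of the two cutoff options compared in the report, using stand-in risk scores and flags (the study's actual data and model outputs are not reproduced here):

```python
# Sketch of the two cutoff options: flag the top 10 percent of risk scores,
# or flag the same share of students as a prior-performance flag would.
# The risk scores and prior flags below are random stand-ins.
import numpy as np

risk_scores = np.random.default_rng(0).uniform(size=1000)         # stand-in model output
prior_flags = np.random.default_rng(1).uniform(size=1000) < 0.18  # stand-in prior flags

# Option 1: flag the 10 percent of students most at risk.
cut_top10 = np.quantile(risk_scores, 0.90)
flag_top10 = risk_scores >= cut_top10

# Option 2: flag the same share of students as the prior-performance system.
share = prior_flags.mean()
cut_match = np.quantile(risk_scores, 1 - share)
flag_match = risk_scores >= cut_match

print(flag_top10.mean(), flag_match.mean())  # ~0.10 and ~the prior-flag share
```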
REL 2021107 | Characteristics and Performance of High School Equivalency Exam Takers in New Jersey | 8/23/2021
Since 2014 the New Jersey Department of Education has offered three high school equivalency (HSE) exams for nongraduates seeking credentials: the GED, the High School Equivalency Test (HiSET), and the Test Assessing Secondary Completion (TASC). This study used data on exam takers who had been grade 8 students in a New Jersey public school between 2008/09 and 2013/14 and who had attempted at least one HSE exam in New Jersey between March 2014 and December 2018. It analyzed how the characteristics of exam takers differ across exams and from those of non–exam takers, how the performance of exam takers with similar backgrounds varies, and how a recent reduction in the passing threshold for two of the exams affected passing rates. Among all students who had been grade 8 students in a New Jersey public school during the study years, HSE exam takers completed fewer years of school, were more likely to have been eligible for the national school lunch program in grade 8, and were more likely to identify as Black or Hispanic than non–exam takers. GED takers had higher grade 8 standardized test scores, were more likely to identify as White, and were less likely to have been eligible for the national school lunch program in grade 8 than HiSET and TASC takers. Under the New Jersey Department of Education’s original passing thresholds, exam takers in the study sample were more likely to pass the HiSET and TASC than the GED on the first attempt, after grade 8 standardized test scores were controlled for. However, after the reduction in passing thresholds, the first-attempt passing rate was similar across the three exams. Under the new thresholds, two-thirds of GED takers and more than half of HiSET and TASC takers passed on the first attempt, and, when all attempts are included, about three-quarters of the takers of each exam eventually passed.
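"After grade 8 standardized test scores were controlled for" suggests a regression adjustment. One common way to do this, an assumption here rather than necessarily the study's exact method, is a logistic regression of first-attempt passing on exam type plus the prior score. A sketch with simulated stand-in data:

```python
# Hypothetical sketch of comparing first-attempt passing rates across exams
# while adjusting for grade 8 scores; the data below are simulated stand-ins,
# and this may not match the study's exact estimation method.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "exam": rng.choice(["GED", "HiSET", "TASC"], size=600),
    "g8_score": rng.normal(0, 1, size=600),
})
df["passed"] = (rng.uniform(size=600) < 0.5 + 0.1 * df["g8_score"]).astype(int)

model = smf.logit("passed ~ C(exam) + g8_score", data=df).fit(disp=False)
print(model.params)  # exam coefficients: log-odds differences, adjusted for g8_score
```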
NCES 2021019 | Program for the International Student Assessment (PISA) 2018 Public Use File (PUF) | 7/8/2021
The PISA 2018 Public Use File (PUF) consists of data from the PISA 2018 sample. Statistical confidentiality treatments were applied to protect respondent confidentiality. The PUF can be accessed from the National Center for Education Statistics website at http://nces.ed.gov/surveys/pisa/datafiles.asp. For more details on the data, refer to chapter 9 of the PISA 2018 Technical Report and User Guide (NCES 2021-011).
NCES 2021020 | Technical Report and User Guide for the 2016 Program for International Student Assessment (PISA) Young Adult Follow-up Study | 7/8/2021
This technical report and user guide provides researchers with an overview of the design and implementation of PISA YAFS 2016, as well as information on how to access the PISA YAFS 2016 data.
NCES 2021022 | Program for the International Student Assessment Young Adult Follow-up Study (PISA YAFS) 2016 Public Use File (PUF) | 7/8/2021
The PISA YAFS 2016 Public Use File (PUF) consists of data from the PISA YAFS 2016 sample. PISA YAFS was conducted in the United States in 2016 with a sample of young adults (age 19) who had participated in PISA 2012 while in high school (age 15). In PISA YAFS, respondents took the Education and Skills Online (ESO) literacy and numeracy assessments, which are based on the Program for the International Assessment of Adult Competencies (PIAAC). The file contains individual-level data, including responses to the background questionnaire and the cognitive assessment. Statistical confidentiality treatments were applied to protect respondent confidentiality. For more details on the data, refer to chapter 8 of the PISA YAFS 2016 Technical Report and User Guide (NCES 2021-020).
NCES 2021047 | Program for the International Student Assessment (PISA) 2018 Restricted-Use Files (RUF) | 7/8/2021
The PISA 2018 Restricted Use File (RUF) consists of restricted-use data from PISA 2018 for the United States. The release includes the data file, a codebook, instructions on how to merge with the U.S. PISA 2018 public-use dataset (NCES 2021-019), and a crosswalk to assist in merging with other public datasets, such as the Common Core of Data (CCD) and the Private School Survey (PSS). Because these data files can be used to identify respondent schools, a restricted-use license must be obtained before access to the data is granted; see https://nces.ed.gov/surveys/pisa/datafiles.asp for details. For more details on the data, refer to chapter 9 of the PISA 2018 Technical Report and User Guide (NCES 2021-011).
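A minimal sketch of how such a crosswalk is typically used to link restricted-use records to a public dataset such as the CCD. All file and column names below are hypothetical; the actual identifiers are documented in the RUF codebook.

```python
# Sketch of linking restricted-use school records to a public dataset via a
# crosswalk file. File and column names are hypothetical; consult the RUF
# codebook for the actual identifiers.
import pandas as pd

ruf = pd.read_csv("pisa2018_ruf_schools.csv")      # restricted-use school records
xwalk = pd.read_csv("pisa2018_ccd_crosswalk.csv")  # maps PISA school ID -> NCES ID
ccd = pd.read_csv("ccd_schools.csv")               # public CCD school file

linked = (ruf.merge(xwalk, on="PISA_SCHOOL_ID", how="left")
             .merge(ccd, on="NCES_SCHOOL_ID", how="left"))
print(linked.shape)
```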
REL 2021085 | Relationship between State Annual School Monitoring Indicators and Outcomes in Massachusetts Low-Performing Schools | 5/17/2021
The Massachusetts Department of Elementary and Secondary Education supports low-performing schools through a process that draws on qualitative and quantitative data from monitoring visits. The data are used to produce ratings for 26 turnaround indicators in four turnaround practice areas relating to school leadership, instructional practices, student supports, and school climate. This study analyzed data on school indicator ratings collected during school years 2014/15–2018/19 from 91 low-performing schools, with a focus on the distribution of the ratings among schools during their first year in the monitoring system and on the relationship of ratings to school outcomes. During the first year in which ratings data were available for a school, a majority of schools were in the two highest rating levels for 21 of the 26 indicators. Schools generally had lower rating levels for indicators in the student supports practice area than in the other three practice areas. Ratings for half the indicators were statistically significantly related to better schoolwide student outcomes, with practically meaningful effect sizes of 0.25 or greater, and none was statistically significantly related to worse outcomes. Two indicators in the leadership practice area (school leaders’ high expectations for students and staff, and trusting relationships among staff) were related to lower chronic absenteeism rates. Ratings for five indicators in the instructional practices area were related to higher student academic growth in English language arts or math; two of these (use of student assessment data to inform classroom instruction, and school structures for instructional improvements) were related to higher growth in both subjects. Ratings for four indicators in the student supports practice area (teacher training to identify student needs, research-based interventions for all students, interventions for English learner students, and interventions for students with disabilities) were related to higher student academic growth in English language arts or math. Two indicators in the school climate practice area (schoolwide behavior plans and adult–student relationships) were related to higher student academic growth in English language arts or math, or to a lower chronic absenteeism rate. Eight indicators were not statistically related to any of the outcomes of interest.