Search Results: (1-15 of 30 records)
Pub Number | Title | Release Date
---|---|---
NCES 2023055 | Overview of the Middle Grades Longitudinal Study of 2017–18 (MGLS:2017): Technical Report
This technical report provides general information about the study and the data files and technical documentation that are available. Information was collected from students, their parents or guardians, their teachers, and their school administrators. The data collection included direct and indirect assessments of middle grades students’ mathematics, reading, and executive function, as well as indirect assessments of socioemotional development in 2018 and again in 2020. MGLS:2017 field staff provided additional information about the school environment through an observational checklist. |
3/16/2023 |
NCES 2022049 | U.S. Technical Report and User Guide for the 2019 Trends in International Mathematics and Science Study (TIMSS)
The U.S. TIMSS 2019 Technical Report and User’s Guide provides an overview of the design and implementation of TIMSS 2019 in the United States and includes guidance for researchers using the U.S. datasets. This information is meant to supplement the IEA’s TIMSS 2019 Technical Report and TIMSS 2019 User Guide by describing those aspects of TIMSS 2019 that are unique to the United States, including information on merging the U.S. public- and restricted-use student, teacher, and school data files with the U.S. data files in the international database. |
10/17/2022 |
NCES 2021019 | Program for the International Student Assessment (PISA) 2018 Public Use File (PUF)
The PISA 2018 Public Use File (PUF) consists of data from the PISA 2018 sample. Statistical treatments were applied to the data to protect respondent confidentiality. The PUF can be accessed from the National Center for Education Statistics website at http://nces.ed.gov/surveys/pisa/datafiles.asp. For more details on the data, please refer to chapter 9 of the PISA 2018 Technical Report and User Guide (NCES 2021-011). |
7/8/2021 |
NCES 2021020 | Technical Report and User Guide for the 2016 Program for International Student Assessment (PISA) Young Adult Follow-up Study
This technical report and user guide is designed to provide researchers with an overview of the design and implementation of PISA YAFS 2016, as well as with information on how to access the PISA YAFS 2016 data. |
7/8/2021 |
NCES 2021022 | Program for the International Student Assessment Young Adult Follow-up Study (PISA YAFS) 2016 Public Use File (PUF)
The PISA YAFS 2016 Public Use File (PUF) consists of data from the PISA YAFS 2016 sample. PISA YAFS was conducted in the United States in 2016 with a sample of young adults (at age 19) who participated in PISA 2012 when they were in high school (at age 15). In PISA YAFS, respondents took the Education and Skills Online (ESO) literacy and numeracy assessments, which are based on the Program for the International Assessment of Adult Competencies (PIAAC). The PUF contains individual-level data, including responses to the background questionnaire and the cognitive assessment. Statistical treatments were applied to the data to protect respondent confidentiality. For more details on the data, please refer to chapter 8 of the PISA YAFS 2016 Technical Report and User Guide (NCES 2021-020). |
7/8/2021 |
NCES 2021047 | Program for the International Student Assessment (PISA) 2018 Restricted-Use Files (RUF)
The PISA 2018 Restricted-Use File (RUF) consists of restricted-use data from PISA 2018 for the United States. The release includes the data file, a codebook, instructions on how to merge with the U.S. PISA 2018 public-use dataset (NCES 2021-019), and a crosswalk to assist in merging with other public datasets, such as the Common Core of Data (CCD) and the Private School Universe Survey (PSS). (A hypothetical merge sketch follows this entry.) Because these data files can be used to identify respondent schools, a restricted-use license must be obtained before access to the data is granted. For details on obtaining a restricted-use license, see https://nces.ed.gov/surveys/pisa/datafiles.asp. For more details on the data, please refer to chapter 9 of the PISA 2018 Technical Report and User Guide (NCES 2021-011). |
7/8/2021 |
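The crosswalk-based merging described in the entry above is, in practice, a join on shared identifiers. The sketch below is only an illustration of that idea, assuming the files are available as CSV extracts: the file names and the `SCHOOLID` key are placeholders, not the actual PISA variable names, which are documented in the RUF codebook.

```python
# Hypothetical illustration of merging restricted-use school records onto
# public-use student records. File names and the "SCHOOLID" key are
# placeholders; the real merge variables are listed in the RUF codebook.
import pandas as pd

puf_students = pd.read_csv("pisa2018_us_puf_students.csv")  # public-use student file
ruf_schools = pd.read_csv("pisa2018_us_ruf_schools.csv")    # restricted-use school file

merged = puf_students.merge(
    ruf_schools,
    on="SCHOOLID",           # placeholder identifier shared by both files
    how="left",
    validate="many_to_one",  # many students per school; fail loudly otherwise
)
print(merged.shape)
```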
REL 2021057 | Tool for Assessing the Health of Research-Practice Partnerships
Education research-practice partnerships (RPPs) offer structures and processes for bridging research and practice and, ultimately, driving improvements in K-12 outcomes. To date, there is limited literature on how to assess the effectiveness of RPPs. Aligned with the most commonly cited framework for assessing RPPs, Assessing Research-Practice Partnerships: Five Dimensions of Effectiveness, this two-part tool offers guidance on how researchers and practitioners may prioritize the five dimensions of RPP effectiveness and their related indicators. The tool also provides an interview protocol that RPP evaluators can use to assess the extent to which the RPP demonstrates evidence of the prioritized dimensions and their indicators of effectiveness. |
2/2/2021 |
IES 2020001REV | Cost Analysis: A Starter Kit
This starter kit is designed for grant applicants who are new to cost analysis. The kit will help applicants plan a cost analysis, setting the foundation for more complex economic analyses. (An illustrative cost-per-student calculation follows this entry.) |
6/1/2020 |
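As a rough sense of the kind of calculation a starter cost analysis builds toward, the sketch below totals hypothetical ingredient costs and divides by the number of students served. Every category and number here is invented for illustration; the kit itself defines the appropriate ingredients and methods.

```python
# Illustrative per-student cost calculation with invented numbers.
ingredient_costs = {
    "staff_time": 42_000.0,  # hypothetical annual cost of staff delivering the program
    "materials": 3_500.0,    # hypothetical materials and licenses
    "training": 6_000.0,     # hypothetical professional development
}
students_served = 250

total_cost = sum(ingredient_costs.values())
cost_per_student = total_cost / students_served
print(f"Total cost: ${total_cost:,.0f}; cost per student: ${cost_per_student:,.2f}")
```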
NCSER 2020001 | An Introduction to Adaptive Interventions and SMART Designs in Education
Educators must often adapt interventions over time because what works for one student may not work for another and what works now for one student may not work in the future for the same student. Adaptive interventions provide education practitioners with a prespecified, systematic, and replicable way of doing this through a sequence of decision rules for whether, how, and when to modify interventions. The sequential, multiple assignment, randomized trial (SMART) is one type of multistage, experimental design that can help education researchers build high-quality adaptive interventions. Despite the critical role adaptive interventions can play in various domains of education, research about adaptive interventions and the use of SMART designs to develop effective adaptive interventions in education is in its infancy. To help the field move forward in this area, the National Center for Special Education Research (NCSER) and the National Center for Education Evaluation and Regional Assistance (NCEE) commissioned a paper by leading experts in adaptive interventions and SMART designs. This paper aims to provide information on building and evaluating high-quality adaptive interventions, review the components of SMART designs, discuss the key features of the SMART, and introduce common research questions for which SMARTs may be appropriate. (A hypothetical decision-rule sketch follows this entry.) |
11/25/2019 |
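To make the idea of a prespecified decision rule concrete, the sketch below encodes a hypothetical two-stage rule: responders continue their first-stage intervention, and non-responders are intensified or switched. The interventions, the response measure, and the cutoff are invented for illustration and are not drawn from the commissioned paper; in a SMART, non-responders would typically be randomized between the second-stage options so those options can be compared.

```python
# Hypothetical two-stage decision rule for an adaptive intervention.
# Intervention names, the response score, and the cutoff are invented.
def second_stage_assignment(first_stage: str, response_score: float,
                            responder_cutoff: float = 0.6) -> str:
    """Return the prespecified second-stage tactic given first-stage response."""
    if response_score >= responder_cutoff:
        return f"continue {first_stage}"      # responders stay the course
    if first_stage == "small-group tutoring":
        return "add one-on-one tutoring"      # intensify for non-responders
    return "switch to small-group tutoring"   # modify for non-responders

print(second_stage_assignment("small-group tutoring", 0.45))
```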
NCES 2019113 | U.S. PIRLS and ePIRLS 2016 Technical Report and User's Guide
The U.S. PIRLS and ePIRLS 2016 Technical Report and User's Guide provides an overview of the design and implementation in the United States of the Progress in International Reading Literacy Study (PIRLS) and ePIRLS 2016, along with information designed to facilitate access to the U.S. PIRLS and ePIRLS 2016 data. |
8/27/2019 |
NCES 2018020 | U.S. TIMSS 2015 and TIMSS Advanced 1995 & 2015 Technical Report and User's Guide
The U.S. TIMSS 2015 and TIMSS Advanced 1995 & 2015 Technical Report and User's Guide provides an overview of the design and implementation in the United States of the Trends in International Mathematics and Science Study (TIMSS) 2015 and TIMSS Advanced 1995 & 2015, along with information designed to facilitate access to the U.S. TIMSS 2015 and TIMSS Advanced 1995 & 2015 data. |
11/1/2018 |
NCES 2017095 | Technical Report and User Guide for the 2015 Program for International Student Assessment (PISA)
This technical report and user guide is designed to provide researchers with an overview of the design and implementation of PISA 2015 in the United States, as well as information on how to access the PISA 2015 data. The report includes information about sampling requirements and sampling in the United States; participation rates at the school and student level; how schools and students were recruited; instrument development; field operations used for collecting data; and details concerning various aspects of data management, including data processing, scaling, and weighting. In addition, the report describes the data available from both international and U.S. sources, special issues in analyzing the PISA 2015 data, and how to merge data files. |
12/19/2017 |
NCEE 20184002 | Asymdystopia: The threat of small biases in evaluations of education interventions that need to be powered to detect small impacts
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may create a new challenge for researchers: the need to guard against smaller inaccuracies (or "biases"). The purpose of this report is twofold. First, the report examines the potential for small biases to increase the risk of making false inferences as studies are powered to detect smaller impacts, a phenomenon the report calls asymdystopia. The report examines this potential for both randomized controlled trials (RCTs) and studies using regression discontinuity designs (RDDs). Second, the report recommends strategies researchers can use to avoid or mitigate these biases. For RCTs, the report recommends that evaluators either substantially limit attrition rates or offer a strong justification for why attrition is unlikely to be related to study outcomes. For RDDs, new statistical methods can protect against bias from incorrect regression models, but these methods often require larger sample sizes to detect small effects. (A back-of-the-envelope sample-size sketch follows this entry.) |
10/3/2017 |
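To give a rough sense of why powering studies for small impacts raises the stakes, the sketch below uses the standard normal-approximation formula for a two-arm, individually randomized trial (equal allocation, two-sided alpha of .05, 80 percent power) to show how quickly required samples grow as the target effect shrinks. It ignores clustering, covariates, and attrition, so it is only a back-of-the-envelope illustration and not the report's own analysis.

```python
# Approximate per-arm sample sizes for a simple two-arm randomized trial
# (equal allocation, two-sided alpha = .05, 80% power), ignoring clustering,
# covariates, and attrition. Illustration only.
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return round(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

for es in (0.20, 0.10, 0.05):
    print(f"effect of {es:.2f} SD -> roughly {n_per_arm(es):,} students per arm")
```

With these assumptions, halving the target effect roughly quadruples the required sample, which is why small design flaws loom larger in studies powered for very small impacts.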
REL 2017265 | What does it mean when a study finds no effects?
This short brief for education decisionmakers discusses three main factors that may contribute to a finding of no effects: failure of theory, failure of implementation, and failure of research design. It provides readers with questions to ask themselves to better understand 'no effects' findings, and describes other contextual factors to consider when deciding what to do next. |
10/27/2016 |
NCSER 2015002 | The Role of Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research
The field of education is increasingly committed to adopting evidence-based practices. Although randomized experimental designs provide strong evidence of the causal effects of interventions, they are not always feasible. For example, depending upon the research question, it may be difficult for researchers to find the number of children necessary for such research designs (e.g., to answer questions about impacts for children with low-incidence disabilities). A type of experimental design that is well suited for such low-incidence populations is the single-case design (SCD). These designs involve observations of a single case (e.g., a child or a classroom) over time in the absence and presence of an experimenter-controlled treatment manipulation to determine whether the outcome is systematically related to the treatment. Research using SCD is often omitted from reviews of whether evidence-based practices work because there has not been a common metric to gauge effects as there is in group design research. To address this issue, the National Center for Education Research (NCER) and National Center for Special Education Research (NCSER) commissioned a paper by leading experts in methodology and SCD. Authors William Shadish, Larry Hedges, Robert Horner, and Samuel Odom contend that the best way to ensure that SCD research is accessible and informs policy decisions is to use good standardized effect size measures (indices that put results on a scale with the same meaning across studies) for statistical analyses. Included in this paper are the authors' recommendations for how SCD researchers can calculate and report on standardized between-case effect sizes, how various audiences (including policymakers) can use these effect sizes to interpret findings, and how they can be used across studies to summarize the evidence base for education practices. (A simple effect-size illustration follows this entry.) |
1/7/2016 |
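As a reminder of what putting results on a common scale means, the sketch below computes a simple standardized mean difference between baseline and treatment phases for one invented case. This is a deliberately naive within-case calculation, not the between-case estimators the commissioned paper develops (which rely on models fit across multiple cases); all data are made up.

```python
# Naive within-case standardized mean difference for a single invented case.
# Not the paper's between-case estimator; illustration of a common scale only.
import math
import statistics

baseline = [3.0, 4.0, 2.0, 3.0, 4.0, 3.0]   # invented baseline-phase observations
treatment = [6.0, 7.0, 6.0, 8.0, 7.0, 7.0]  # invented treatment-phase observations

s1, s2 = statistics.stdev(baseline), statistics.stdev(treatment)
n1, n2 = len(baseline), len(treatment)
pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
effect_size = (statistics.mean(treatment) - statistics.mean(baseline)) / pooled_sd
print(f"standardized mean difference: {effect_size:.2f}")
```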