Search Results: (1-15 of 48 records)
|REL 2021111||Professional Development Incentives for Oregon's Early Childhood Education Workforce: A Randomized Study
Many states seek to increase the education levels of their early childhood education (ECE) workforce to improve the quality of care for children. Oregon encourages all ECE workforce members to sign up for a career lattice, a career pathway system that helps them determine goals related to increasing their education. The state also offers incentives for reaching specific steps in the career lattice and scholarships for college credit and community-based training. This study used two randomized controlled trials in 2018 and 2019 to test whether sending emails and offering different financial incentives to Oregon ECE workforce members increased career lattice sign-up and increased education and training levels or workplace retention. The study found that sending emails encouraging career lattice sign-up had no detectable impact on career lattice sign-up or workplace retention. Sending emails offering a monetary incentive at an earlier-than-usual step on the career lattice had a positive impact on training hours recorded but no detectable impact on career lattice movement, college credit hours earned, or workplace retention. Sending emails about automatic enrollment in a scholarship program had no detectable impact on scholarship use, career lattice movement, college credit hours earned, or workplace retention. Lastly, after participants were randomly assigned to study groups, the email campaigns were implemented as planned, reaching all intended participants, although the interventions ended sooner than planned because of a state policy change. The findings suggest that low-touch interventions such as emails have promise for increasing training hours but are not sufficient to induce changes in career lattice sign-up, continuing postsecondary education, or workplace retention for Oregon ECE workforce members. 
These results have implications for future research, in addition to demonstrating how better messaging and supports can mitigate barriers to further education and training and how email campaigns can be leveraged for workforce communication efforts. This information is particularly relevant for state agencies and education and training providers.
|NCEE 20174027||Multi-armed RCTs: A design-based framework
Design-based methods have recently been developed as a way to analyze data for impact evaluations of interventions, programs, and policies. The estimators are derived using the building blocks of experimental designs with minimal assumptions, and have important advantages over traditional model-based impact methods. This report extends the design-based theory for the single treatment-control group design to designs with multiple research groups. It discusses how design-based estimators found in the literature need to be modified for multi-armed designs when comparing pairs of research groups to each other. It also discusses multiple comparison adjustments when conducting hypothesis tests across pairwise contrasts to identify the most effective interventions. Finally, it discusses the complex assumptions required to identify and estimate the complier average causal effect (CACE) parameter for multi-armed designs.
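The pairwise contrasts and multiple comparison adjustments described above can be illustrated with a small sketch. This is not the report's estimator; it is a hypothetical three-armed trial with invented arm labels and effect sizes, showing a simple difference-in-means contrast for each pair of arms and a Bonferroni adjustment across the set of contrasts.

```python
# Illustrative sketch (not the report's estimators): pairwise
# difference-in-means contrasts in a hypothetical three-armed trial,
# with a Bonferroni adjustment across the contrasts.
import math
import random
from itertools import combinations
from statistics import mean, variance

random.seed(0)
# Simulated outcomes; arm names and true effects are invented.
arms = {
    "control": [random.gauss(0.00, 1.0) for _ in range(200)],
    "treat_A": [random.gauss(0.25, 1.0) for _ in range(200)],
    "treat_B": [random.gauss(0.40, 1.0) for _ in range(200)],
}

def contrast(y1, y0):
    """Difference in means and a simple variance-based standard error."""
    diff = mean(y1) - mean(y0)
    se = math.sqrt(variance(y1) / len(y1) + variance(y0) / len(y0))
    return diff, se

contrasts = list(combinations(arms, 2))
alpha = 0.05
adj_alpha = alpha / len(contrasts)  # Bonferroni: divide alpha by the number of contrasts

for a, b in contrasts:
    diff, se = contrast(arms[b], arms[a])
    z = diff / se
    # Two-sided normal-approximation p-value.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    print(f"{b} vs {a}: diff={diff:+.3f}, z={z:+.2f}, "
          f"significant at Bonferroni-adjusted level: {p < adj_alpha}")
```

Bonferroni is the simplest adjustment; the report's framework covers more refined procedures, but the mechanics of testing each pairwise contrast against a tightened significance level are the same.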
|NCEE 20174025||What is Design-Based Causal Inference for RCTs and Why Should I Use It?
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies. The approach uses the building blocks of experimental designs to develop impact estimators with minimal assumptions. The methods apply to randomized controlled trials and quasi-experimental designs with treatment and comparison groups. Although the fundamental concepts that underlie design-based methods are straightforward, the literature on these methods is technical, with detailed mathematical proofs required to formalize the theory. This brief aims to broaden knowledge of design-based methods by describing their key concepts and how they compare to traditional model-based methods, such as hierarchical linear modeling (HLM). Using simple mathematical notation, the brief is geared toward researchers with a good knowledge of evaluation designs and HLM.
|NCEE 20174026||Comparing Impact Findings from Design-Based and Model-Based Methods: An Empirical Investigation
This report compares empirical results from different approaches to analyzing data from randomized controlled trials (RCTs). It focuses on how impact estimates compare between recently developed design-based methods and traditional model-based methods. Design-based methods use the potential outcomes framework and known features of study designs to connect statistical methods to the building blocks of causal inference. They differ from model-based methods that have commonly been used in education research, including hierarchical linear model (HLM) methods and robust cluster standard error (RCSE) methods for clustered designs. This study re-analyzes nine past RCTs in the education area using both design- and model-based methods. The study finds that model-based and design-based methods yield very similar impact estimates and levels of statistical significance, especially when the underlying analytic assumptions (e.g., weights used to aggregate clusters and blocks) are aligned.
|REL 2017256||Impact of the Developing Mathematical Ideas professional development program on grade 4 students' and teachers' understanding of fractions
The purpose of this study was to assess the impact of the Developing Mathematical Ideas (DMI) professional development program on grade 4 teachers' in-depth knowledge of fractions as well as their students' understanding and proficiency with fractions. The study was conducted during the 2014/15 school year. A total of 84 schools from eight school districts in three states (Florida, Georgia, and South Carolina) agreed to participate. Participants included 264 grade 4 teachers and their 4,204 students. The study utilized the "gold standard" methodology involving random assignment of schools to either DMI or the control condition. Teachers in the DMI condition participated in 24 hours of professional development on fractions during fall 2014. They attended eight 3-hour sessions conducted over four days (two 3-hour sessions per day; one day per month). DMI did not demonstrate any impact on student knowledge of fractions. Students of DMI teachers performed at almost the same level as those taught by control teachers; the difference was not statistically significant. The impact of the DMI on teachers’ knowledge of fractions was inconclusive. DMI teachers performed slightly better than teachers who did not participate in DMI, but the result was not statistically significant. It was, however, close to the threshold of statistical significance (p = .051).
|REL 2017225||Impacts of the Retired Mentors for New Teachers program
This study evaluates the impact of the Retired Mentors for New Teachers Program, a two-year intervention at the elementary-school level. The program pairs recently retired, master educators with probationary teachers in high-need schools. These retired educators provide the teachers with weekly support over two years that includes tailored in-class observations, coaching, and mentoring. The study used a randomized controlled trial approach to assess the program’s impact on student learning in reading and math, on teacher turnover, and on teacher evaluation ratings. To assist education leaders interested in replicating the program, the study also gathered detailed data on the program’s cost to the school district and return on investment over time. Key findings include that students of teachers collaborating with retired mentors demonstrated a significant improvement in math achievement equivalent to one month’s worth of added instructional time. At an annual local cost of $171 per student, the positive impacts on math achievement produce a return on investment that can pay back the program’s cost more than 15 times over, through increased student earnings over time.
|REL 2017204||Scaling academic planning in community college: A randomized controlled trial
Community college students often lack an academic plan to guide their choices of coursework to achieve their educational goals, in part because counseling departments typically lack the capacity to advise students at scale. This randomized controlled trial tests the impact of guaranteed access to one of two alternative counseling sessions (group workshops or one-on-one counseling), each of which was combined with targeted “nudging." Outcome measures included scheduling and attending the counseling session, completing an academic plan, and re-enrolling in the following semester. Evidence suggests that both variations on the intervention increase academic plan completion rates by over 20 percentage points compared to a control group that did not receive guaranteed access to a counseling session or the automated nudges. Exploratory evidence suggests that when combined with nudging, the guarantee of workshop counseling is as effective as the guarantee of one-on-one counseling in causing students to schedule and attend academic planning appointments.
|NCEE 20154011||Statistical Theory for the RCT-YES Software: Design-Based Causal Inference for RCTs
This Second Edition updates the First Edition, published in June 2015, which presents the statistical theory underlying the RCT-YES software that estimates and reports impacts for RCTs across a wide range of designs used in social policy research. The preface to the new report summarizes the updates from the previous version. The report discusses a unified, non-parametric design-based approach for impact estimation using the building blocks of the Neyman-Rubin-Holland causal inference model that underlies experimental designs. This approach differs from the model-based impact estimation methods that are more typically used in education research. The report discusses impact and variance estimation, asymptotic distributions of the estimators, hypothesis testing, the inclusion of baseline covariates to improve precision, the use of weights, subgroup analyses, baseline equivalency analyses, and estimation of the complier average causal effect parameter.
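The core of the design-based approach described above can be sketched in a few lines. This is a minimal illustration, not RCT-YES itself: for a simple treatment-control RCT, the impact estimator is the difference in mean outcomes, and the Neyman variance bound sums the within-group sampling variances. The outcome data and the true effect size of 0.5 are simulated for illustration.

```python
# Minimal sketch (not RCT-YES) of the design-based difference-in-means
# impact estimator and the Neyman variance bound for a simple
# treatment-control RCT. Outcomes are simulated; values are hypothetical.
import math
import random
from statistics import mean, variance

random.seed(2)
# Simulated outcomes: the treatment arm is shifted up by a true impact of 0.5.
y_control = [random.gauss(0.0, 1.0) for _ in range(500)]
y_treat = [random.gauss(0.5, 1.0) for _ in range(500)]

# Design-based impact estimate: simple difference in means.
impact = mean(y_treat) - mean(y_control)

# Neyman variance bound: sum of the within-group sampling variances.
var_hat = variance(y_treat) / len(y_treat) + variance(y_control) / len(y_control)
se = math.sqrt(var_hat)

# Large-sample 95% confidence interval.
ci = (impact - 1.96 * se, impact + 1.96 * se)
print(f"impact = {impact:+.3f}, SE = {se:.3f}, "
      f"95% CI = ({ci[0]:+.3f}, {ci[1]:+.3f})")
```

The "bound" terminology reflects that, under the potential outcomes framework, this variance estimator is conservative unless treatment effects are constant across units; the report develops the full theory, including covariate adjustment and weighting, which this sketch omits.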
|NCER 20142000||Partially Nested Randomized Controlled Trials in Education Research: A Guide to Design and Analysis
In some tests of educational interventions, individual students are randomized directly to the treatment or control group, and both intervention and control protocols are administered in an individual setting. Such an experiment is an Individual-Level Randomized Controlled Trial (I-RCT). In other tests, clusters of students (e.g., classrooms) are randomized. This sort of experiment is called a Cluster Randomized Controlled Trial (C-RCT). However, in some designs, students in the treatment group are clustered like those in a C-RCT, but students in the control group are unclustered, like students in an I-RCT. This design is called a Partially Nested Randomized Controlled Trial (PN-RCT). It is partially nested because students in the treatment group are nested in some higher level unit, such as a tutoring group or class, but students in the control group are not nested as part of the experimental design.
This paper, commissioned by the National Center for Education Research, provides readers with an introduction to PN-RCTs and ways to design and analyze the results from them. This paper was written primarily for applied education researchers with introductory knowledge of quantitative impact evaluation methods. However, those with more advanced knowledge will also benefit from some of the technical examples and appendices.
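The partially nested structure described above can be made concrete with a small simulation. This is a hypothetical sketch, not the paper's recommended analysis: treatment students share a tutoring-group effect while control students are independent, and the standard error reflects that asymmetry by using cluster-level variance on the treatment side and individual-level variance on the control side. Group counts, sizes, and variance components are invented.

```python
# Hypothetical sketch of a PN-RCT data layout and a simple
# cluster-aware impact estimate: treatment students are nested in
# tutoring groups (shared group effect); control students are not.
import math
import random
from statistics import mean, variance

random.seed(1)
TRUE_IMPACT, GROUP_SD, STUDENT_SD = 0.30, 0.40, 1.0

# Treatment arm: 20 tutoring groups of 8 students each.
treat_groups = []
for g in range(20):
    u_g = random.gauss(0, GROUP_SD)  # shared tutoring-group effect
    treat_groups.append([TRUE_IMPACT + u_g + random.gauss(0, STUDENT_SD)
                         for _ in range(8)])

# Control arm: 160 unclustered students.
control = [random.gauss(0, STUDENT_SD) for _ in range(160)]

# Impact estimate: mean of treatment-group means minus the control mean.
group_means = [mean(g) for g in treat_groups]
impact = mean(group_means) - mean(control)

# The standard error respects the partial nesting: cluster-level
# variance for the treatment arm, individual-level variance for controls.
se = math.sqrt(variance(group_means) / len(group_means)
               + variance(control) / len(control))
print(f"estimated impact = {impact:+.3f} (SE {se:.3f})")
```

Treating the treatment arm at the cluster level is only one simple option; the paper discusses model-based alternatives (e.g., mixed models with heterogeneous variance structures) that this sketch does not attempt.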
|REL 2014048||Making the Most of Opportunities to Learn What Works: A School District's Guide
This guide for district and school leaders shows how to recognize opportunities to embed randomized controlled trials (RCTs) into planned policies or programs. Opportunistic RCTs can generate strong evidence for informing education decisions—with minimal added cost and disruption. The guide also outlines the key steps to conduct RCTs and responds to common questions and concerns about RCTs. Readers will find a real-life example of how one district took advantage of an opportunity to learn whether a summer reading program worked.
|REL 20140037||Recognizing and Conducting Opportunistic Experiments in Education: A Guide for Policymakers and Researchers
Opportunistic experiments are a type of randomized controlled trial that studies the effects of a planned intervention or policy change with minimal added disruption and cost. This guide defines opportunistic experiments and provides examples, discusses issues to consider when identifying potential opportunistic experiments, and outlines the critical steps to complete opportunistic experiments. It concludes with a discussion of the potentially low cost of conducting opportunistic experiments and the potentially high cost of not conducting them. Readers will also find a checklist of key questions to consider when conducting opportunistic experiments.
|NCEE 20144003||Transfer Incentives for High-Performing Teachers: Final Results from a Multisite Randomized Experiment
One policy response to the challenge of attracting high-performing teachers to low-achieving schools is offering teachers monetary incentives to transfer. This report examines impacts of transfer incentives — including the willingness of teachers to transfer when offered an incentive, teacher retention in the schools to which they transferred, and the impact of transfer incentives on student achievement at low-performing schools. Ten school districts in seven states participated in the random assignment study. The highest-performing teachers in each district — those who had raised student achievement year after year as measured by "value added" — were offered $20,000 to teach at a lower-performing district school for two years.
The study found that:
|WWC SSR10013||WWC Review of the Report "Impacts of Comprehensive Teacher Induction: Final Results from a Randomized Controlled Study"
The 2010 study, Impacts of Comprehensive Teacher Induction: Final Results from a Randomized Controlled Study, examined the effects of a comprehensive teacher induction program for beginning teachers on teacher and student outcomes in 17 school districts across 13 states. Researchers randomly assigned 418 elementary schools with a total of 1,009 beginning teachers to either an intervention group that received the program or a business-as-usual comparison group. The program included mentoring, monthly professional development sessions, study groups with other beginning teachers, and opportunities to observe veteran teachers. In the second year of the study, researchers selected a subset of the original districts to receive a second year of the teacher induction program. In these districts, the schools that were originally assigned to receive the intervention continued to offer the intervention services for a second year to beginning teachers. Impacts after the first year of the study were based on data from all participating districts, all of which received the intervention during the first year. Impacts after the second and third years of the study were presented separately for districts receiving 1 or 2 years of the intervention. Study authors assessed the effects of the program on both teacher outcomes and student outcomes over a 3-year period. The study is a well-implemented randomized controlled trial that meets WWC evidence standards for assessing impacts on teacher retention for the entire sample at the end of year 1 of the study, and for the subset of districts that received only 1 year of the intervention at the end of years 2 and 3 of the study. The remaining analyses conducted by this study (including impacts on teacher practices, preparation, satisfaction, some retention outcomes, and student achievement) either do not meet WWC evidence standards or were deemed ineligible for review.
|WWC SSR10053||WWC Review of the Report "Large-scale Randomized Controlled Trial with 4th Graders Using Intelligent Tutoring of the Structure Strategy to Improve Nonfiction Reading Comprehension"
In the 2012 study, Large-scale Randomized Controlled Trial with 4th Graders Using Intelligent Tutoring of the Structure Strategy to Improve Nonfiction Reading Comprehension, researchers examined the effects of the web-based tutoring program, Intelligent Tutoring of the Structure Strategy (ITSS), on the reading comprehension of fourth-grade students in language arts classrooms. ITSS is a one-on-one, web-based intelligent tutoring system which models a "structure strategy" technique, provides practice opportunities, and gives immediate feedback to students. The analysis included 1,875 to 2,371 fourth-grade students from 100 to 117 classrooms in Pennsylvania elementary schools. Study authors assessed the effectiveness of ITSS by comparing the reading comprehension of students in ITSS classrooms with students in comparison classrooms. The study is a well-implemented randomized controlled trial, and the research meets WWC evidence standards without reservations.
|NCEE 20124033||Improving Adolescent Literacy Across the Curriculum in High Schools (Content Literacy Continuum, CLC)
This data file accompanies report NCEE 2013-4001, Lessons in Evaluation of the Content Literacy Continuum: Report on Program Impacts, Program Fidelity, and Contrast (http://ies.ed.gov/ncee/edlabs/projects/project.asp?projectID=34).
This data file contains data from a study that examines the impact of the Content Literacy Continuum (CLC) on high school students' reading comprehension and accumulation of course credits in core subject areas. CLC combines instructional routines and learning strategies developed by the University of Kansas Center for Research on Learning. The study found no statistically significant impacts of CLC on either reading comprehension or credit accumulation in core subjects. The study used a randomized design and involved 33 high schools in nine school districts in four Midwestern states. This sample includes 7,365 grade 9 students from year 1; in year 2, the school records sample includes 7,951 grade 9 students and 8,514 grade 10 students.