Search Results: (1-15 of 25 records)
|NCEE 2023005||Conducting Implementation Research in Impact Studies of Education Interventions: A Guide for Researchers
Implementation analyses conducted as part of impact studies can help educators know whether a tested intervention is likely to be a good fit for their own settings. This guide can help researchers design and conduct these kinds of analyses. The guide provides steps and recommendations for specifying implementation research questions, assessing whether and how the planned intervention is implemented, documenting the context in which the intervention is implemented, and measuring the difference between the intervention and what members of the control group receive. It presents strategies for analyzing and reporting on these topics and for linking implementation and impact findings. The guide offers key definitions, examples, templates, and links to resources.
|NCEE 2022005||The BASIE (BAyeSian Interpretation of Estimates) Framework for Interpreting Findings from Impact Evaluations: A Practical Guide for Education Researchers
BASIE is a framework for interpreting impact estimates from evaluations. It is an alternative to null hypothesis significance testing. This guide walks researchers through the key steps of applying BASIE, including selecting prior evidence, reporting impact estimates, interpreting impact estimates, and conducting sensitivity analyses. The guide also provides conceptual and technical details for evaluation methodologists.
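The core calculation behind a Bayesian interpretation of an impact estimate can be sketched with a simple normal-normal model: prior evidence about effect sizes is combined with a study's estimate and standard error to yield a posterior distribution, from which one can read the probability that the true effect is positive. The function name and all numbers below are hypothetical illustrations, not taken from the BASIE guide itself:

```python
from math import sqrt
from statistics import NormalDist

def posterior_effect(prior_mean, prior_sd, estimate, se):
    """Normal-normal conjugate update: combine prior evidence on effect
    sizes with a study's impact estimate and its standard error."""
    prior_var, data_var = prior_sd**2, se**2
    post_var = 1 / (1 / prior_var + 1 / data_var)
    post_mean = post_var * (prior_mean / prior_var + estimate / data_var)
    return post_mean, sqrt(post_var)

# Hypothetical numbers: prior evidence centered at 0 with SD 0.10,
# and a study estimate of 0.15 with SE 0.08 (effect size units).
mean, sd = posterior_effect(0.0, 0.10, 0.15, 0.08)

# Posterior probability that the true effect exceeds zero.
prob_positive = 1 - NormalDist(mean, sd).cdf(0.0)
```

Note how the posterior mean is pulled toward the prior: an informative prior centered at zero shrinks a noisy estimate, which is the kind of sensitivity to prior evidence the guide asks researchers to examine explicitly.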
|REL 2022130||Exploring Early Implementation of Pennsylvania's Innovative Teacher and Principal Residency Grants
To improve educator diversity and address educator shortages, the Pennsylvania Department of Education (PDE) awards grants to universities in the state to develop and implement teacher and principal residency preparation programs. The programs must offer aspiring teachers and principals a residency of at least a year, consisting of clinical practice in schools with trained mentors, aligned coursework, and financial aid. The programs must focus on improving diversity and must partner with districts that have chronic teacher or principal shortages, serve high proportions of students of color or students in poverty, or have been identified for state support.
This study examines eight residency programs that received grants for the 2019/20 school year. The study team interviewed program staff, collected program data, and conducted focus groups with residents and mentors. The study sought to provide preliminary information, early in the programs' implementation, on how well the programs were preparing teachers and principals, where teachers and principals were getting jobs after completing the programs, whether the programs were improving diversity, and how the programs could be improved.
Four key findings emerged from the study. First, recruiting diverse candidates was difficult. Teacher residents were mostly White, although more than a third of participants in one of the programs were people of color. Principal residents were more diverse. Second, for five of the six programs with available employment data, at least half of the residents were hired in high-need districts after completing the programs. Third, residents and mentors felt that residents were prepared for most teaching or school leadership responsibilities, although principal mentors felt some principal residents were not as well prepared. Finally, program staff, residents, and mentors described several lessons learned, including that communication could be improved and that the time commitment could be better balanced between coursework and the residency.
The findings will inform PDE’s plans for future grants and help the funded programs improve. The findings may also be relevant to other states, districts, or preparation programs that are developing residency programs.
|REL 2021075||Evaluating the Implementation of Networked Improvement Communities in Education: An Applied Research Methods Report
The purpose of this study was to develop a framework that can be used to evaluate the implementation of networked improvement communities (NICs) in public prekindergarten (PK)–12 education and to apply this framework to the formative evaluation of the Minnesota Alternative Learning Center Networked Improvement Community (Minnesota ALC NIC), a partnership between Regional Educational Laboratory Midwest, the Minnesota Department of Education, and five alternative learning centers (ALCs) in Minnesota. The partnership formed with the goal of improving high school graduation rates among students in ALCs. The evaluation team developed and used research tools aligned with the evaluation framework to gather data from 37 school staff in the five ALCs participating in the Minnesota ALC NIC. Data sources included attendance logs, postmeeting surveys (administered following three NIC sessions), a post–Plan-Do-Study-Act survey, continuous improvement artifacts, and event summaries. The evaluation team used descriptive analyses for quantitative and qualitative data, including frequency tables to summarize survey data and coding artifacts to indicate completion of continuous improvement milestones. Engagement in the Minnesota ALC NIC was strong, as measured by attendance data and post–Plan-Do-Study-Act surveys, but the level of engagement varied by continuous improvement milestones. Based on postmeeting surveys, NIC members typically viewed the NIC as relevant and useful, particularly because of the opportunities to work within teams and develop relationships with staff from other schools. The percentage of meeting attendees agreeing that the NIC increased their knowledge and skills increased over time. Using artifacts from the NIC, the evaluation team determined that most of the teams completed most continuous improvement milestones. 
Whereas the post–Plan-Do-Study-Act survey completed by NIC members indicated that sharing among different NIC teams was relatively infrequent, contemporaneous meeting notes recorded specific instances of networking among teams. This report illustrates how the evaluation framework and its aligned set of research tools were applied to evaluate the Minnesota ALC NIC. With slight adaptations, these tools can be used to evaluate the implementation of a range of NICs in public PK–12 education settings. The study has several limitations, including low response rates to postmeeting surveys, reliance on retrospective measures of participation in continuous improvement activities, and extant data being available for only a single Plan-Do-Study-Act cycle. The report includes suggestions for overcoming these limitations when applying the NIC evaluation framework to other NICs in public PK–12 education settings.
|REL 2020017||What Tools Have States Developed or Adapted to Assess Schools’ Implementation of a Multi-Tiered System of Supports/Response to Intervention Framework?
Educators in Tennessee use Response to Instruction and Intervention (RTI2), a multi-tiered system of support (MTSS), to help address problems early for students at risk for poor learning outcomes. Tennessee Department of Education officials sought to support schools and districts implementing RTI2 with a tool that educators can use to align their RTI2 implementation with the state’s expected practices and determine next steps for improving implementation. To support the development of a research-informed tool, Regional Educational Laboratory Appalachia staff reviewed the websites and relevant documents of all 50 states and the District of Columbia as well as interviewed state education officials from eight states to examine how others have adapted or developed similar tools and supported their use. The study focused on 31 tools that 21 states developed or adapted to measure MTSS/response to intervention (RTI) implementation. Methods included assessing tools for key MTSS/RTI practices that are informed by the research literature and coding qualitative data to identify themes. Findings showed that although most tools assessed broad MTSS/RTI practices, such as whether schools administer assessments for students in need of intervention, fewer tools measured more specific practices such as whether schools are expected to administer universal screenings twice a year. Report findings can serve as a useful resource for state education officials interested in selecting or adapting a tool to measure and improve MTSS/RTI implementation, which can ultimately provide educators with data to inform their instruction and enhance learning outcomes for students at risk.
|NCES 2019084||Technology and K-12 Education: The NCES Ed Tech Equity Initiative
This interactive brochure provides an overview of the Initiative—including its purpose, goal, and target outcomes.
|NCES 2019085||Technology and K-12 Education: Advancing the NCES Ed Tech Equity Initiative
This infographic outlines the key steps NCES is taking to advance the NCES Ed Tech Equity Initiative.
|NCES 2019086||Technology and K-12 Education: The NCES Ed Tech Equity Initiative: Framework
This factsheet describes the factors most critical to informing ed tech equity in the context of K-12 education.
|NCES 2019087||Technology and K-12 Education: The NCES Ed Tech Equity Initiative: Data Collection Priorities
This factsheet outlines the key subtopics NCES will prioritize in its ed tech equity data collections.
|REL 2017241||Impacts of Ramp-Up to Readiness™ after one year of implementation
This study examined whether the Ramp-Up to Readiness program (Ramp-Up) produced impacts on high school students' college enrollment actions and personal college readiness following one year of program implementation. The study also looked at Ramp-Up's impact on more immediate outcomes, such as the emphasis placed on college readiness and the number of college-related teacher-student interactions taking place in high schools. The impacts were studied in context by assessing the degree to which schools were implementing Ramp-Up to the developer's satisfaction. Forty-nine Minnesota and Wisconsin high schools were randomly assigned to one of two groups: (1) the Ramp-Up group that would implement the program during the 2014–15 school year (25 schools), or (2) the comparison group that would implement Ramp-Up the following school year, 2015–16 (24 schools). The researchers collected data from students and school staff during the fall of 2014, before program implementation, and during the spring of 2015, after one year of implementation. The study team administered surveys to staff, surveys to students in grades 10–12, and the commitment to college and goal striving scales from ACT's ENGAGE instrument. Researchers also obtained extant student-level data from the high schools and school-level data from their respective state education agencies. The outcomes of most interest were students' submission of the Free Application for Federal Student Aid (FAFSA) and their scores on the two ENGAGE scales. Data indicated that following a single year of implementation, Ramp-Up had no impact on grade 12 students' submission rates for the FAFSA or on the commitment to college and goal striving of students in grades 10–12. However, the program did produce greater emphasis on college readiness and more student-teacher interactions related to college.
Implementation data showed mixed results: on average, Ramp-Up schools implemented the program with adequate fidelity, but some schools struggled with implementation, and 88 percent of schools did not adequately implement the planning tools component of the program. Schools implementing Ramp-Up demonstrated a greater emphasis on college readiness than comparison schools, but a single year of program exposure was insufficient to produce greater college readiness among students or FAFSA submissions among grade 12 students. Schools that adopt Ramp-Up can implement the program as intended by the program developer, but some program components are more challenging to implement than others. Additional studies should examine Ramp-Up's impact on students' college enrollment actions, their college admission rates, and their success in college following multiple years of program exposure. Studies should also investigate whether implementation strengthens in subsequent years as schools gain more experience with Ramp-Up's curriculum and processes.
|NCEE 20174014||Implementation of Title I and Title II-A Program Initiatives: Results from 2013-14
This report examines implementation of program initiatives promoted through Title I and Title II-A of the Elementary and Secondary Education Act (ESEA) during the 2013–14 school year. It is based on surveys completed by all 50 states and the District of Columbia and nationally representative samples of districts, schools, and teachers. The report describes policy and practice in several core areas: content standards, assessments, accountability, and educator evaluation and support.
|REL 2017192||Measuring Implementation of the Response to Intervention framework in Milwaukee Public Schools
The purpose of this study was to determine the reliability of a newly developed hybrid system to measure the implementation of Response to Intervention (RTI), to determine schools' progress toward implementing RTI, and to determine whether implementation ratings were related to contextual factors. School improvement coaches were trained and certified to conduct school data reviews. These reviewers visited 70 elementary schools serving grades K-5 in a single urban school district. During each visit, two reviewers made ratings on the 34-indicator rubric and entered their ratings into a dashboard system. Reviewers reconciled discrepant ratings, and the reconciled ratings were analyzed. To determine the reliability of the rubric, the study team estimated inter-rater reliability using percent agreement and Cohen's kappa, which accounts for chance agreement. Coefficient alphas were calculated to estimate inter-item reliability. To determine how well schools were implementing RTI, average ratings were calculated for each school on the total rubric and six components and converted into categories: "little fidelity", "inadequate fidelity", "adequate fidelity", and "full fidelity". The study team also calculated Pearson product-moment correlations to study relationships between implementation ratings and characteristics of teachers and students in the schools. Results indicated that the ratings made by the trained data reviewers were reliable even when accounting for chance. Among the 68 visited schools that had complete data, 53 percent of the schools were implementing RTI with adequate fidelity after two years. However, 68 percent of the priority schools did not reach adequate levels of implementation fidelity. Findings also revealed that most schools had yet to implement instruction for diverse students and Tier III instruction with fidelity. Of the contextual factors studied, correlations were found between implementation scores and teacher and student characteristics.
The system can be used to produce reliable evidence about the level of RTI implementation in schools and which components of RTI need to be the focus of professional development and coaching. Also, if RTI is indeed an effective school improvement strategy, then by monitoring implementation fidelity of RTI, school districts can improve the chances that RTI produces the expected impacts in their school settings. Establishing an implementation monitoring system requires district staff time to complete training, conduct the data reviews, resolve rating discrepancies, and enter the data into a dashboard system.
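The two inter-rater reliability statistics named above, percent agreement and Cohen's kappa, can be sketched in a few lines. Kappa adjusts raw agreement for the agreement two reviewers would reach by chance given their marginal rating frequencies. The ratings below are hypothetical examples, not the study's data:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Share of items on which two reviewers gave the same rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for the chance
    agreement implied by each reviewer's rating frequencies."""
    n = len(r1)
    p_obs = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    p_chance = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n**2
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical ratings by two reviewers on ten rubric indicators
# (0 = not in place, 1 = partially in place, 2 = in place).
r1 = [2, 2, 1, 0, 2, 1, 1, 2, 0, 2]
r2 = [2, 2, 1, 1, 2, 1, 0, 2, 0, 2]
```

Here the reviewers agree on 8 of 10 indicators, but kappa is lower than the raw 80 percent because some of that agreement would occur by chance, which is why the study reports both statistics.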
|NCEE 20174004||Early Implementation Findings From a Study of Teacher and Principal Performance Measurement and Feedback
This study examines the implementation and impacts of a set of three educator performance measures: observations of teachers' classroom practices, value-added measures of teacher performance, and a 360-degree survey assessment of principals' leadership practices. Within each of eight districts, a set of elementary and middle schools was randomly assigned to either a treatment group, in which the study's performance measures were implemented, or a control group, in which they were not. A total of 127 schools participated in the study. This report provides descriptive information on the first of two years of implementation. The classroom observation and principal leadership measures were implemented generally as planned, although fewer teachers and principals accessed their value-added reports than the study intended. All three measures differentiated teacher performance, although the observation scores clustered mostly at the upper end of the scale. For the principal leadership measure, principal self-ratings, teachers' ratings of the principal, and the principal's supervisor's ratings often differed. Both teachers and principals in schools selected to implement the intervention reported receiving more feedback on their performance than did their counterparts in control schools.
|NCEE 20174001||Race to the Top: Implementation and Relationship to Student Outcomes
Race to the Top (RTT), one of the Obama administration's signature programs and one of the largest federal government investments in an education grant program, received $4.35 billion in funding as part of the American Recovery and Reinvestment Act of 2009. Through three rounds of competition in 2010 and 2011, RTT awarded grants to states that agreed to implement a range of education policies and practices designed to improve student outcomes. Using 2013 interview data from all states, this report documents whether states that received an RTT grant used the policies and practices promoted by RTT and how that compares to non-grantee states. The report also examines whether receipt of an RTT grant was related to improvements in student outcomes. Findings show that 2010 RTT grantees reported using more policies and practices than non-grantees in four areas (standards and assessments, teachers and leaders, school turnaround, charter schools), and 2011 RTT grantees reported using more in one area (teachers and leaders). However, the relationship between RTT and student outcomes was not clear, as trends in test scores could be plausibly interpreted as providing evidence of either a positive, negative, or null effect for RTT.
|REL 2016143||Development and implementation of quality rating and improvement systems in Midwest Region states
Recent federal and state policies that recognize the benefits of high-quality early childhood education and care, such as the Race to the Top–Early Learning Challenge and the Preschool for All initiative, have led to a rapid expansion of quality rating and improvement systems (QRISs). Although 49 states implement a QRIS in some form, each system differs in its approach to defining, rating, supporting, and communicating program quality. This study examined QRISs in use across the Midwest Region to describe approaches that states use in developing and implementing a QRIS. The purpose was to create a resource for QRIS administrators to use as they refine their systems over time. Researchers used qualitative techniques, including a review of existing documents and semistructured interviews with state officials in the Midwest Region, to document the unique and common approaches to QRIS implementation. Findings suggest that the process of applying for a Race to the Top–Early Learning Challenge grant helped advance the development of a QRIS, even in states that were not awarded funding. Also, all seven states in the Midwest Region use a variety of direct observations in classrooms to measure quality within each QRIS, despite the logistical and financial burdens associated with observational assessment. Five of the states in the Midwest Region use alternate pathways to rate certain early childhood education programs in their QRIS, most commonly for accredited or state prekindergarten programs. Finally, linking state subsidies and other early childhood education funding to QRIS participation encouraged early childhood education providers to participate in a QRIS. Developing and refining a QRIS is an ongoing process for all states in the Midwest Region, and systems are continually evolving.
Ongoing changes require policymakers, researchers, providers, and families to periodically relearn the exact requirements of their QRISs, but if changes are based on evidence from the field or on the changing needs of children and families, revised QRISs may better measure quality and better serve the public. Findings from this report can help inform the decisions of state QRIS administrators as they expand and refine their systems.