Search Results: (1-15 of 20 records)
|NCES 2019084||Technology and K-12 Education: The NCES Ed Tech Equity Initiative
This interactive brochure provides an overview of the Initiative—including its purpose, goal, and target outcomes.
|NCES 2019085||Technology and K-12 Education: Advancing the NCES Ed Tech Equity Initiative
This infographic outlines the key steps NCES is taking to advance the NCES Ed Tech Equity Initiative.
|NCES 2019086||Technology and K-12 Education: The NCES Ed Tech Equity Initiative: Framework
This factsheet describes the factors most critical to informing ed tech equity in the context of K-12 education.
|NCES 2019087||Technology and K-12 Education: The NCES Ed Tech Equity Initiative: Data Collection Priorities
This factsheet outlines the key subtopics NCES will prioritize in its ed tech equity data collections.
|REL 2017241||Impacts of Ramp-Up to Readiness™ after one year of implementation
This study examined whether the Ramp-Up to Readiness program (Ramp-Up) produced impacts on high school students' college enrollment actions and personal college readiness following one year of program implementation. The study also looked at Ramp-Up's impact on more immediate outcomes, such as the emphasis placed on college readiness and the number of college-related teacher-student interactions taking place in high schools. The impacts were studied in context by assessing the degree to which schools were implementing Ramp-Up to the developer's satisfaction. Forty-nine Minnesota and Wisconsin high schools were randomly assigned to one of two groups: (1) the Ramp-Up group that would implement the program during the 2014–15 school year (25 schools), or (2) the comparison group that would implement Ramp-Up the following school year, 2015–16 (24 schools). The researchers collected data from students and school staff in the fall of 2014, before program implementation, and in the spring of 2015, after one year of implementation. The study team administered surveys to staff, surveys to students in grades 10–12, and the commitment to college and goal striving scales from ACT's ENGAGE instrument. Researchers also obtained extant student-level data from the high schools and school-level data from their respective state education agencies. The outcomes of most interest were students' submission of the Free Application for Federal Student Aid (FAFSA) and their scores on the two ENGAGE scales. Data indicated that following a single year of implementation, Ramp-Up had no impact on grade 12 students' submission rates for the FAFSA or on the commitment to college and goal striving of students in grades 10–12. However, the program did produce a greater emphasis on college readiness and more student-teacher interactions related to college.
Implementation data showed mixed results: on average, Ramp-Up schools implemented the program with adequate fidelity, but some schools struggled with implementation, and 88 percent of schools did not adequately implement the planning tools component of the program. Schools implementing Ramp-Up demonstrated a greater emphasis on college readiness than comparison schools, but a single year of program exposure is insufficient to produce greater college readiness among students or FAFSA submissions among grade 12 students. Schools that adopt Ramp-Up can implement the program as intended by the program developer, but some program components are more challenging to implement than others. Additional studies should examine Ramp-Up's impact on students' college enrollment actions, their college admission rates, and their success in college following multiple years of program exposure. Studies also should investigate whether implementation strengthens in subsequent years as schools gain more experience with Ramp-Up's curriculum and processes.
|NCEE 20174014||Implementation of Title I and Title II-A Program Initiatives: Results from 2013-14
This report examines implementation of program initiatives promoted through Title I and Title II-A of the Elementary and Secondary Education Act (ESEA) during the 2013–14 school year. It is based on surveys completed by all 50 states and the District of Columbia and nationally representative samples of districts, schools, and teachers. The report describes policy and practice in several core areas: content standards, assessments, accountability, and educator evaluation and support.
|REL 2017192||Measuring Implementation of the Response to Intervention framework in Milwaukee Public Schools
The purpose of this study was to determine the reliability of a newly developed hybrid system for measuring the implementation of Response to Intervention (RTI); to determine schools' progress toward implementing RTI; and to determine whether implementation ratings were related to contextual factors. School improvement coaches were trained and certified to conduct school data reviews. These reviewers visited 70 elementary schools serving grades K-5 in a single urban school district. During each visit, two reviewers made ratings on the 34-indicator rubric and entered their ratings into a dashboard system. Reviewers reconciled discrepant ratings, and the reconciled ratings were analyzed. To determine the reliability of the rubric, the study team estimated inter-rater reliability using percent agreement and Cohen's Kappa to account for chance ratings. Coefficient alphas were calculated to estimate inter-item reliability. To determine how well schools were implementing RTI, average ratings were calculated for each school on the total rubric and its six components and converted into categories: "little fidelity", "inadequate fidelity", "adequate fidelity", and "full fidelity". The study team also calculated Pearson product-moment correlations to study relationships between implementation ratings and characteristics of teachers and students in the schools. Results indicated that the ratings made by the trained data reviewers were reliable even when accounting for chance. Among the 68 visited schools that had complete data, 53 percent were implementing RTI with adequate fidelity after two years. However, 68 percent of the priority schools did not reach adequate levels of implementation fidelity. Findings also revealed that most schools had yet to implement instruction for diverse students and Tier III instruction with fidelity. Of the contextual factors studied, implementation scores were correlated with some teacher and student characteristics.
The system can be used to produce reliable evidence about the level of RTI implementation in schools and which components of RTI need to be the focus of professional development and coaching. Also, if RTI is indeed an effective school improvement strategy, then by monitoring implementation fidelity of RTI, school districts can improve the chances that RTI produces the expected impacts in their school settings. Establishing an implementation monitoring system requires district staff time to complete training, conduct the data reviews, resolve rating discrepancies, and enter the data into a dashboard system.
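The two reliability statistics named above, percent agreement and Cohen's kappa, can be sketched as follows. This is a minimal illustration using hypothetical reviewer ratings on a 4-point fidelity scale, not data from the study:

```python
# Illustrative sketch (not from the study): summarizing inter-rater
# agreement with percent agreement and Cohen's kappa.
from collections import Counter

def percent_agreement(r1, r2):
    """Share of items on which the two reviewers gave the same rating."""
    matches = sum(a == b for a, b in zip(r1, r2))
    return matches / len(r1)

def cohens_kappa(r1, r2):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Expected chance agreement from each reviewer's marginal rating rates.
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings from two reviewers (1 = little, 4 = full fidelity).
reviewer_a = [1, 2, 2, 3, 4, 4, 3, 2, 1, 3]
reviewer_b = [1, 2, 3, 3, 4, 4, 3, 2, 2, 3]
print(percent_agreement(reviewer_a, reviewer_b))           # 0.8
print(round(cohens_kappa(reviewer_a, reviewer_b), 3))      # 0.726
```

Kappa is lower than raw agreement because it discounts matches that two reviewers would produce by chance given how often each uses each rating category.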
|NCEE 20174004||Early Implementation Findings From a Study of Teacher and Principal Performance Measurement and Feedback
This study examines the implementation and impacts of a set of three educator performance measures: observations of teachers' classroom practices, value-added measures of teacher performance, and a 360-degree survey assessment of principals' leadership practices. Elementary and middle schools within each of eight districts were randomly assigned to either a treatment group in which the study's performance measures were implemented or a control group in which they were not. A total of 127 schools participated in the study. This report provides descriptive information on the first of two years of implementation. The classroom observation and principal leadership measures were implemented generally as planned, although fewer teachers and principals accessed their value-added reports than the study intended. All three measures differentiated teacher performance, although the observation scores clustered at the upper end of the scale. For the principal leadership measure, principal self-ratings, teachers' ratings of the principal, and the supervisor's ratings of the principal often differed. Both teachers and principals in schools selected to implement the intervention reported receiving more feedback on their performance than did their counterparts in control schools.
|NCEE 20174001||Race to the Top: Implementation and Relationship to Student Outcomes
Race to the Top (RTT), one of the Obama administration's signature programs and one of the largest federal government investments in an education grant program, received $4.35 billion in funding as part of the American Recovery and Reinvestment Act of 2009. Through three rounds of competition in 2010 and 2011, RTT awarded grants to states that agreed to implement a range of education policies and practices designed to improve student outcomes. Using 2013 interview data from all states, this report documents whether states that received an RTT grant used the policies and practices promoted by RTT and how that compares to non-grantee states. The report also examines whether receipt of an RTT grant was related to improvements in student outcomes. Findings show that 2010 RTT grantees reported using more policies and practices than non-grantees in four areas (standards and assessments, teachers and leaders, school turnaround, charter schools), and 2011 RTT grantees reported using more in one area (teachers and leaders). However, the relationship between RTT and student outcomes was not clear, as trends in test scores could plausibly be interpreted as evidence of a positive, negative, or null effect of RTT.
|REL 2016143||Development and implementation of quality rating and improvement systems in Midwest Region states
Recent federal and state policies that recognize the benefits of high-quality early childhood education and care, such as the Race to the Top–Early Learning Challenge and the Preschool for All initiative, have led to a rapid expansion of quality rating and improvement systems (QRISs). Although 49 states implement a QRIS in some form, each system differs in its approach to defining, rating, supporting, and communicating program quality. This study examined QRISs in use across the Midwest Region to describe approaches that states use in developing and implementing a QRIS. The purpose was to create a resource for QRIS administrators to use as they refine their systems over time. Researchers used qualitative techniques, including a review of existing documents and semistructured interviews with state officials in the Midwest Region, to document the unique and common approaches to QRIS implementation. Findings suggest that the process of applying for a Race to the Top–Early Learning Challenge grant helped advance the development of a QRIS, even in states that were not awarded funding. Also, all seven states in the Midwest Region use a variety of direct observations in classrooms to measure quality within each QRIS, despite the logistical and financial burdens associated with observational assessment. Five of the states in the Midwest Region use alternate pathways to rate certain early childhood education programs in their QRIS, most commonly for accredited or state prekindergarten programs. Finally, linking state subsidies and other early childhood education funding to QRIS participation encouraged early childhood education providers to participate in a QRIS. Developing and refining a QRIS is an ongoing process for all states in the Midwest Region, and systems are continually evolving.
Ongoing changes require policymakers, researchers, providers, and families to periodically relearn the exact requirements of their QRISs, but if changes are based on evidence from the field or the changing needs of children and families, revised QRISs may better measure quality and better serve the public. Findings from this report can help inform the decisions of state QRIS administrators as they expand and refine their systems.
|REL 2016120||Stated Briefly: Teacher evaluation and professional learning: Lessons from early implementation in a large urban district
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. REL Northeast and Islands, in collaboration with the Northeast Educator Effectiveness Research Alliance, examined the alignment of teacher evaluation and professional learning in a large urban district in the Northeast. REL researchers examined the types of professional learning activities teachers reported they participated in, the alignment of the reported activities with what evaluators prescribed, and whether evaluation ratings improved from one academic year to the next. The study found that teachers received written feedback across all standards of the evaluation rubric. Each prescription tended to include one or two recommended professional activities, and more of these activities were professional practice activities, such as independent work to improve instruction, than professional development activities, such as courses or workshops. Teachers reported participating in more professional activities for the instruction-based standards than for the non-instruction-based standards. For all standards, fewer than 40 percent of teachers reported participating in the activities their evaluator recommended. While further work may be needed to strengthen the connection between teacher evaluation and a comprehensive system of teacher support and development, this study takes the first step in illustrating the need for coherence among these related systems.
|NCEE 20154018||Usage of Policies and Practices Promoted by Race to the Top and School Improvement Grants
The American Recovery and Reinvestment Act of 2009 injected $7 billion into two of the Obama administration's signature competitive education grant programs: Race to the Top (RTT) and School Improvement Grants (SIG). While RTT focused on state policies and SIG focused on school practices, both programs promoted related policies and practices, including an emphasis on turning around the nation's lowest-performing schools. Despite the sizable investment in both of these programs, comprehensive evidence on their implementation and impact has been limited to date.
This report focuses on two implementation questions: (1) Do states and schools that received grants actually use the policies and practices promoted by these two programs? (2) Does their usage of these policies and practices differ from states and schools that did not receive grants? Answers to these questions provide context for interpreting impact findings that will be presented in a future report.
The first volume of this report details our RTT findings, which are based on spring 2012 interviews with 49 states and the District of Columbia.
The second volume of this report details our SIG findings, which are based on spring 2012 surveys of approximately 470 schools in 60 districts and 22 states.
|REL 2015093||Alternative Student Growth Measures for Teacher Evaluation: Implementation Experiences of Early-Adopting Districts
State requirements to include student achievement growth in teacher evaluations are prompting the development of alternative ways to measure growth in grades and subjects not covered by state assessments. These alternative growth measures use two primary approaches: (1) value-added models (VAMs) applied to end-of-course and commercial assessments; and (2) student learning objectives (SLOs) selected by teachers with the approval of their principals. Information is limited, however, on how these alternative growth measures can be used to evaluate teachers and on their costs and benefits. REL Mid-Atlantic sought to develop new information by conducting case studies to examine the implementation experiences of eight districts that were early adopters of alternative measures of student growth. District administrators, principals, teachers, and teachers' union representatives were interviewed for the study.
The study found that alternative growth measures have been used for many purposes other than teacher evaluation, but SLOs are unique in their use to adapt and improve instruction. Although the alternative measures show a wider range of teacher performance relative to previous evaluation systems without measures of student growth, evidence on the reliability and validity of alternative measures, especially SLOs, is limited. Districts implementing SLOs most often reported increased collaboration as a benefit, while alternative assessment-based VAMs were perceived as fairer than SLOs for making comparisons among teachers. Both types of alternative growth measures come with costs and implementation challenges. SLOs are substantially more labor-intensive than alternative assessment-based VAMs. More research is needed on the statistical properties of the alternative measures, the approaches districts are taking to offset implementation costs, and innovative solutions to overcome implementation challenges.
|REL 2015105||Professional learning communities facilitator's guide for the What Works Clearinghouse practice guide: Teaching academic content and literacy to English learners in elementary and middle school
The Professional Learning Communities Facilitator's Guide is designed to assist teams of educators in applying the evidence-based strategies presented in the Teaching Academic Content and Literacy to English Learners in Elementary and Middle School educator's practice guide, produced by the What Works Clearinghouse. Through this collaborative learning experience, educators will expand their knowledge base as they read, discuss, share, and apply key ideas and strategies to help K–8 English learners acquire the language and literacy skills needed to succeed academically.
The facilitator's guide employs a five-step cycle that encourages professional learning communities to debrief, define, explore, experiment, and reflect and plan. This cycle is supplemented with activities, handouts, readings, and videos. Participants will develop a working knowledge of some of the best practices in the English learner practice guide through analysis of teaching vignettes and other interactive activities. Included in the toolkit of materials are activities along with 31 handouts and 23 videos. Four of the videos provide a narrative overview of each of the four recommendations in the practice guide, and the remaining videos show actual classrooms from three different grade levels putting the recommendations into practice.
|REL 2015057||Logic models for program design, implementation, and evaluation: Workshop toolkit
The Logic Model Workshop Toolkit is designed to help practitioners learn the purpose of logic models, the different elements of a logic model, and the appropriate steps for developing and using a logic model for program evaluation. Topics covered in the sessions include an overview of logic models, the elements of a logic model, an introduction to evaluation, uses of a logic model to develop evaluation questions and identify indicators of success, and strategies to determine the right evaluation design for your program or policy. The toolkit, which includes an agenda, slide deck, participant workbook, and facilitator's manual, was delivered to three REL-NEI research alliances: the Northeast Educator Effectiveness Research Alliance, the Urban School Improvement Alliance, and the Puerto Rico Research Alliance for Dropout Prevention.