Search Results: (1-15 of 27 records)
Pub Number | Title | Date

NCEE 2024002 | Federal Efforts Towards Investing in Innovation through the i3 Fund: A Summary of Grantmaking and Evidence-Building | 2/27/2024
Finding and expanding the use of innovative educational strategies that work is important to help improve student learning and close equity gaps nationwide. The Investing in Innovation Fund (i3) was a key U.S. Department of Education (Department) program explicitly focused on these goals. Between 2010 and 2016, i3 invested $1.4 billion in 172 five-year grants to universities, school districts, and private nonprofit organizations. The i3 Fund intentionally awarded different types of grants, either to develop and test new, as-yet unproven strategies or to learn more about the circumstances under which previously tested strategies are effective. Grantees were required to fund independent evaluations that would meet high standards for quality. The Department reviewed 148 i3 evaluations, completed at or after the conclusion of the grants, to understand the key components of the grantees' educational strategies, assess the quality of the grantees' evaluations, and summarize what the evaluations found.

REL 2021112 | Program Evaluation Toolkit | 10/12/2021
Program evaluation is important for assessing the implementation and outcomes of local, state, and federal programs. The Program Evaluation Toolkit provides resources and tools to support users in contributing to evaluations of their own programs. The primary audience for the toolkit includes individuals responsible for evaluating and monitoring local, state, or federal programs. The toolkit comprises a series of eight modules that begin at the planning stages of an evaluation and progress to the presentation of findings. Resources in the toolkit will help users create a logic model, develop evaluation questions, identify data sources, develop data collection instruments, conduct basic analyses, and disseminate findings. By using the toolkit, users should develop an evaluation that provides easy-to-understand findings as well as recommendations or possible actions.

REL 2021117 | Exploring the Potential Role of Staff Surveys in School Leader Evaluation | 8/2/2021
The Mid-Atlantic Regional Educational Laboratory partnered with the District of Columbia Public Schools (DCPS) to explore the potential use of teacher surveys in school leader evaluation. The DCPS evaluation system, like many others, currently consists of two components: an assessment of how well a school performs on a set of student achievement metrics (such as proficiency on standardized tests) and an assessment by a supervisor of the principal's leadership across multiple domains. Incorporating teacher surveys could provide an additional perspective on principals' leadership and performance. Examining data from two teacher surveys that DCPS has used (Panorama and Insight), the study found that it could be useful for DCPS to use elements of teacher surveys to bring in teachers' perspectives on principals' leadership related to instruction, talent, and school culture. Other districts may also wish to consider employing teacher surveys to gain an additional perspective on principals from staff who interact with the principal every day.

NCEE 2020001 | National Evaluation of the Comprehensive Centers Program Final Report | 10/21/2019
Between 2012 and 2018, the U.S. Department of Education invested nearly $350 million in 22 Comprehensive Technical Assistance (TA) Centers operating across the nation. These Centers were charged with delivering TA that builds the capacity of state education agencies (SEAs) to support local educational agencies (LEAs) in improving student outcomes. Centers were given broad discretion in interpreting and enacting this mandate. This evaluation sought to address open questions about how the Centers designed and implemented the TA, what challenges they encountered, and what outcomes they achieved. By thoroughly documenting how this process played out, the evaluation aims to put stakeholders in a better position to inform future program improvement.

NCEE 20174022 | Evaluation of the DC Opportunity Scholarship Program: Impacts After One Year | 4/27/2017
The DC Opportunity Scholarship Program (OSP), established in 2004, is the only federally funded private school voucher program for low-income parents in the United States. This report examines impacts on achievement and other outcomes one year after eligible children were selected or not selected to receive scholarships using a lottery process in 2012, 2013, and 2014. For those participating in the program, the study found negative impacts on student achievement but positive impacts on parents' perceptions of school safety. There were no statistically significant effects on parents' or students' general satisfaction with their schools or on parent involvement in education.

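Because the lottery randomizes who is offered a scholarship, winners and non-winners are comparable on average, so an intent-to-treat impact estimate reduces to a difference in mean outcomes. A minimal sketch with simulated data (all numbers and names below are illustrative, not study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical one-year achievement scores for lottery winners (offered a
# scholarship) and non-winners; values are simulated, not study data.
winners = rng.normal(loc=49.0, scale=10.0, size=500)
non_winners = rng.normal(loc=50.0, scale=10.0, size=500)

# With random assignment via the lottery, the simple difference in mean
# outcomes is an intent-to-treat (ITT) impact estimate.
impact = winners.mean() - non_winners.mean()
t_stat, p_value = stats.ttest_ind(winners, non_winners)
print(f"ITT impact estimate: {impact:.2f} points (p = {p_value:.3f})")
```
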
NCEE 20174013 | School Improvement Grants: Implementation and Effectiveness | 1/18/2017
The American Recovery and Reinvestment Act of 2009 injected $3 billion into the federal School Improvement Grants (SIG) program, which awarded grants to states that agreed to implement one of four school intervention models in their lowest-performing schools. Each of the models prescribed specific practices designed to improve student outcomes. Despite the sizable investment, comprehensive evidence on the implementation and impact of SIG has been limited. Using 2013 survey and administrative data from nearly 500 schools in 22 states, this report examines whether schools receiving a grant used the practices promoted by SIG, how their practice use compared with that of other schools, and whether SIG had an impact on student outcomes. Findings show that SIG schools reported using more practices than other schools, but there was no evidence that SIG caused those schools to use more practices. There was also no evidence that SIG had significant impacts on math or reading test scores, high school graduation, or college enrollment.

REL 2017219 | Rubric for evaluating reading/language arts instructional materials for kindergarten to grade 5 | 1/11/2017
This rubric was developed in response to a request from Improving Literacy Research Alliance members at the Florida Department of Education for use in their instructional materials review process. It is a tool for evaluating reading/language arts instructional and intervention materials in grades K–5 based on rigorous research and standards. It can be used by practitioners at the state, district, or school level or by university faculty involved in reviewing instructional materials. The rubric is organized by content area for grades K–2 and for grades 3–5. Each item is aligned to recommendations from six What Works Clearinghouse practice guides. Each content area (for example, writing) includes a list of criteria that describe what should be consistently found in the instructional materials. Reviewers use a 1–5 scale to rate the degree to which the criteria were met. The rubric includes a guide for when and how to use it, covering facilitator responsibilities, professional learning for reviewers, and ways to use the scores. Alliance members and reading coaches involved in a statewide literacy initiative in Mississippi provided feedback on the rubric.

REL 2016144 | Measurement instruments for assessing the performance of professional learning communities | 8/31/2016
This annotated bibliography is a compilation of valid and reliable measures of key performance indicators of teacher professional learning communities (PLCs). The research team employed a rigorous process of searching and screening the scientific literature and other sources for relevant qualitative and quantitative instruments, followed by a careful review and evaluation of each instrument against established standards of measurement quality, such as reliability and validity, as well as the instrument's ability to detect a variable's change over time. This resource, which is organized according to key elements of a PLC logic model (i.e., a model that describes how PLCs are expected to operate to achieve their goals), is intended for researchers, practitioners, and education professionals who seek to engage in evidence-based planning, implementation, and evaluation of teacher PLCs. The PLC-related measurement instruments identified in this project include 31 quantitative and 18 qualitative instruments that assess a range of teacher-, principal-, team-, and student-level variables.

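Reliability is one of the measurement-quality standards referenced here. As a rough illustration of how internal consistency can be checked for a quantitative instrument, a sketch of Cronbach's alpha on simulated survey responses (the data are made up, and the bibliography itself does not prescribe this particular procedure):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for an (n_respondents, n_items) array."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated responses: 200 teachers answering a 5-item scale that taps a
# single underlying trait, plus item-level noise.
rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))
responses = trait + rng.normal(scale=0.8, size=(200, 5))

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```
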
REL 2016156 | Measuring principals' effectiveness: Results from New Jersey's first year of statewide principal evaluation | 8/30/2016
This study describes measures used to evaluate New Jersey principals in the first year of statewide implementation of the new evaluation system. It examines four statistical properties of the measures: the variation in ratings across principals, their year-to-year stability, the associations between component ratings and the characteristics of students in the schools, and the associations among component ratings. Based on statewide principal performance ratings from the 2013/14 school year and ratings from 14 districts that piloted the principal evaluation system in the 2012/13 school year, the study found a mix of strengths and weaknesses in the statistical properties of the measures used to evaluate principals in New Jersey. First, nearly all principals received effective or highly effective summative ratings. Second, fewer principals evaluated on school median student growth percentiles received highly effective summative ratings than principals not evaluated on this measure. Third, principal practice instrument ratings and school median student growth percentiles had moderate to high levels of year-to-year stability. Fourth, several component ratings (school median student growth percentiles, teachers' student growth objectives, and principal practice instrument ratings) and the summative rating had low, negative correlations with student socioeconomic disadvantage. Finally, principals' ratings on component measures had low to moderate positive correlations with each other, consistent with the idea that they measure distinct dimensions of overall principal performance. Nevertheless, the validity of the principal evaluation measures cannot be verified without a measure of principals' effectiveness at raising student achievement to use as a standard. More evidence is needed on the accuracy of measures used to evaluate principals.

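The four statistical properties examined here (variation, year-to-year stability, and the two kinds of associations) are all simple descriptive statistics. A hedged sketch of how they might be computed from a ratings file, using entirely simulated data and invented column names:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 250  # hypothetical number of principals

# Simulated component ratings for two consecutive years, built so that the
# second year partly tracks the first (to mimic year-to-year stability).
practice_y1 = rng.normal(3.2, 0.4, n)
ratings = pd.DataFrame({
    "practice_2012_13": practice_y1,
    "practice_2013_14": 0.7 * practice_y1 + rng.normal(1.0, 0.3, n),
    "median_sgp": rng.uniform(30, 70, n),
    "pct_disadvantaged": rng.uniform(0, 100, n),
})

# Property 1: variation in ratings across principals.
print(ratings.std())

# Property 2: year-to-year stability of the same measure.
print(ratings["practice_2012_13"].corr(ratings["practice_2013_14"]))

# Properties 3 and 4: associations with student characteristics and among
# component ratings, read off the pairwise correlation matrix.
print(ratings.corr())
```
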
NCEE 20164004 | Evaluation of the Teacher Incentive Fund: Implementation and Impacts of Pay-for-Performance After Three Years | 8/24/2016
The Teacher Incentive Fund (TIF), now named the Teacher and School Leader Incentive Program, provides grants to support performance-based compensation systems for teachers and principals in high-need schools. The study measures the impact of pay-for-performance bonuses as part of a comprehensive compensation system within a large, multisite random assignment study design. The treatment schools were to fully implement their performance-based compensation system. The control schools were to implement the same system with one exception: the pay-for-performance bonus component was replaced with a one percent bonus paid to all educators regardless of performance. The report provides implementation and impact information after three years. Implementation was similar across the three years, with most districts (88 percent) implementing at least three of the four required components for teachers. In a subset of 10 districts participating in the random assignment study, educators' understanding of performance measures continued to improve during the third year, but many teachers still did not understand that they were eligible for a bonus. They also underestimated the maximum amount they could earn. The pay-for-performance bonus policy had small, positive impacts on students' reading and math achievement.

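In a random assignment design like this one, the bonus impact can be estimated by regressing an outcome on a treatment indicator. A minimal sketch with simulated data (variable names and effect sizes are invented; the study's actual models account for the multisite design and clustering, which this sketch omits):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000  # hypothetical students across the study schools

# Simulated data: a treatment indicator (pay-for-performance school vs.
# automatic 1 percent bonus school) and a reading score with a small
# built-in treatment effect.
df = pd.DataFrame({"treated": rng.integers(0, 2, n)})
df["reading_score"] = 50 + 1.0 * df["treated"] + rng.normal(0, 10, n)

# The impact estimate is the coefficient on the treatment indicator.
model = smf.ols("reading_score ~ treated", data=df).fit()
print(model.params["treated"], model.pvalues["treated"])
```
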
REL 2016142 | How are teacher evaluation data used in five Arizona districts? | 5/19/2016
Recent teacher evaluation reforms instituted across the country have sought to yield richer information about educators' strengths and limitations and to guide decisions about targeted opportunities for professional growth. This study describes how results from new multiple-measure teacher evaluations were being used in 2014/15 in five school districts in Arizona (according to interviews with district leaders and instructional coaches and surveys of school principals and teachers), with each district administering its own local evaluation system developed to align with the overarching state evaluation regulations passed in 2011. Findings from a majority of the study districts indicated that online data platforms are facilitating observation-based feedback, with evaluation results reportedly influencing subsequent professional development for teachers, in particular shaping the work of instructional coaches and/or the support opportunities suggested for teachers within the district's online system. However, responding teachers in the five study districts expressed some skepticism about the relevance of school- and district-level professional development offerings and viewed themselves as responsible for their own professional growth activities. In addition, respondents indicated that the timing of the release of standardized state test data renders those data less useful for professional development decisions than observation results. Meanwhile, teacher evaluation data are reportedly less systematically used in talent management decisions, such as identifying teacher leaders or assigning teachers to schools or classrooms.

Regarding evaluation's impact, principals and teachers in a majority of study districts agreed that their new teacher evaluations have improved teachers' instructional practice, but teachers in all five study districts were less likely than principals to agree that evaluations have benefited students. Together, these findings suggest positive benefits from organizational structures that support the review of data during the school year, such as standards-based observation frameworks, benchmark assessments, professional learning communities, and instructional coaching and feedback. However, skepticism among teachers (particularly high school teachers) suggests that they may not yet perceive their evaluations as entirely credible and relevant to their work.

REL 2016133 | Relationship between school professional climate and teachers' satisfaction with the evaluation process | 5/3/2016
This study, conducted by the Regional Educational Laboratory Northeast & Islands in collaboration with the Northeast Educator Effectiveness Research Alliance, reports on the relationship between teachers' perceptions of school professional climate and their satisfaction with their formal evaluation process, using the responses of a nationally representative sample of teachers from the Schools and Staffing Survey. Specifically, the study used logistic regression analysis to examine whether teachers' satisfaction with their evaluation was associated with two measures of school professional climate (principal leadership and teacher influence), teacher and school characteristics, and the inclusion of student test scores in the evaluation system. The results indicate that teachers' perceptions of their principals' leadership were associated with their satisfaction with the evaluation system: the more positively teachers rated their principal's leadership, the more likely they were to report satisfaction with their evaluation process. The rating teachers received on their evaluation was also associated with their satisfaction, with those rated satisfactory or higher more likely to be satisfied. Teachers whose evaluation process included student test score outcomes were less likely to be satisfied with that process than teachers whose evaluations did not include student test scores. The findings reinforce current literature about the importance of the school principal in establishing a positive school professional climate. The report recommends additional research related to the implementation of new educator evaluation systems.

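A minimal sketch of the kind of logistic regression analysis the abstract describes, with simulated data standing in for the Schools and Staffing Survey responses (all variable names and coefficients below are illustrative, not taken from the study):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1500  # hypothetical teacher sample

# Simulated predictors echoing the study's measures: two climate scales and
# an indicator for whether the evaluation used student test scores.
df = pd.DataFrame({
    "principal_leadership": rng.normal(0, 1, n),
    "teacher_influence": rng.normal(0, 1, n),
    "uses_test_scores": rng.integers(0, 2, n),
})
index = (0.8 * df["principal_leadership"] + 0.3 * df["teacher_influence"]
         - 0.4 * df["uses_test_scores"])
df["satisfied"] = (rng.random(n) < 1 / (1 + np.exp(-index))).astype(int)

# Logistic regression of satisfaction on the climate measures, mirroring
# the analysis described in the abstract.
model = smf.logit(
    "satisfied ~ principal_leadership + teacher_influence + uses_test_scores",
    data=df,
).fit()
print(model.params)
```
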
NCEE 20154020 | Evaluation of the Teacher Incentive Fund: Implementation and Impacts of Pay-for-Performance After Two Years | 9/24/2015
The Teacher Incentive Fund (TIF) provides grants to support performance-based compensation systems for teachers and principals in high-need schools. The study measures the impact of pay-for-performance bonuses as part of a comprehensive compensation system within a large, multisite random assignment study design. The treatment schools were to fully implement their performance-based compensation system. The control schools were to implement the same system with one exception: the pay-for-performance bonus component was replaced with a one percent bonus paid to all educators regardless of performance. This second report provides implementation and impact information. Ninety percent of all TIF districts in 2012–2013 reported implementing at least three of the four required components for teachers, but only about one-half (52 percent) reported implementing all four. This was a slight improvement over the first year of implementation. In a subset of 10 districts participating in the random assignment study, educators' understanding of key program components improved during the second year, but many teachers still did not understand that they were eligible for a bonus. The pay-for-performance bonus policy had small, positive impacts on students' reading achievement; impacts on students' math achievement were not statistically significant but were similar in magnitude.

REL 2015089 | Measuring principals' effectiveness: Results from New Jersey's principal evaluation pilot | 5/12/2015
The purpose of this study was to describe the measures used to evaluate principals in New Jersey in the first (pilot) year of the new principal evaluation system and to examine three statistical properties of the measures: their variation among principals, their year-to-year stability, and the associations between these measures and the characteristics of students in the schools. The study reviewed information that developers of principal practice instruments provided about their instruments and examined principals' performance ratings using data from 14 districts in New Jersey that piloted the principal evaluation system in the 2012/13 school year. The study had four key findings. First, the developers of principal practice instruments provided partial information about their instruments' reliability (consistency across raters and observations) and validity (accurate measurement of true principal performance). Second, principal practice ratings and schoolwide student growth percentiles have the potential to differentiate among principals. Third, school median student growth percentiles, which measure student achievement growth during the school year, exhibit year-to-year stability even when the school changes principals. This may reflect persistent school characteristics, suggesting a need to investigate whether other evaluation measures could more closely gauge principals' contributions to student achievement growth. Finally, school median student growth percentiles correlate with student disadvantage, a relationship that warrants further investigation using statewide evaluation data. Results show a mix of strengths and weaknesses in the statistical properties of the measures used to evaluate principals in New Jersey. Future research could provide more evidence on the accuracy of measures used to evaluate principals.

REL 2015057 | Logic models for program design, implementation, and evaluation: Workshop toolkit | 5/5/2015
The Logic Model Workshop Toolkit is designed to help practitioners learn the purpose of logic models, the different elements of a logic model, and the appropriate steps for developing and using a logic model for program evaluation. Topics covered in the sessions include an overview of logic models, the elements of a logic model, an introduction to evaluation, uses of a logic model to develop evaluation questions and identify indicators of success, and strategies to determine the right evaluation design for a program or policy. The toolkit, which includes an agenda, slide deck, participant workbook, and facilitator's manual, was delivered to three REL-NEI research alliances: the Northeast Educator Effectiveness Research Alliance, the Urban School Improvement Alliance, and the Puerto Rico Research Alliance for Dropout Prevention.

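As a rough companion to the toolkit's element-by-element treatment, a sketch of the standard logic model components expressed as a small data structure (the example program and all entries are invented, not drawn from the toolkit):

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """The standard logic model elements covered in workshop toolkits like this one."""
    program: str
    inputs: list = field(default_factory=list)       # resources invested
    activities: list = field(default_factory=list)   # what the program does
    outputs: list = field(default_factory=list)      # direct products
    outcomes: list = field(default_factory=list)     # short- and long-term changes

# A made-up example program, just to show how the elements fit together.
tutoring = LogicModel(
    program="After-school tutoring",
    inputs=["Funding", "Trained tutors", "Curriculum materials"],
    activities=["Weekly small-group tutoring sessions"],
    outputs=["Number of sessions delivered", "Number of students served"],
    outcomes=["Improved homework completion", "Higher reading achievement"],
)
print(f"{tutoring.program}: {len(tutoring.outcomes)} intended outcomes")
```
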