Search Results: (16-30 of 40 records)
|REL 2016120||Stated Briefly: Teacher evaluation and professional learning: Lessons from early implementation in a large urban district
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. REL Northeast and Islands, in collaboration with the Northeast Educator Effectiveness Research Alliance, examined the alignment of teacher evaluation and professional learning in a large urban district in the Northeast. REL researchers examined the types of professional learning activities teachers reported they participated in, the alignment of the reported activities with what evaluators prescribed, and whether evaluation ratings improved from one academic year to the next. The study found that teachers received written feedback across all standards of the evaluation rubric. Each prescription tended to include one or two recommended professional activities, and more of these activities were professional practice activities, such as independent work to improve instruction, than professional development activities, such as courses or workshops. Teachers reported participating in more professional activities for the instruction-based standards than for the non-instruction-based standards. For all standards, fewer than 40 percent of teachers reported participating in all the activities their evaluator recommended. While further work may be needed to strengthen the connection between teacher evaluation and a comprehensive system of teacher support and development, this study takes the first step in illustrating the need for coherence among these related systems.
|REL 2016115||Teacher evaluation and professional learning: Lessons from early implementation in a large urban district
REL Northeast and Islands, in collaboration with the Northeast Educator Effectiveness Research Alliance, examined the alignment of teacher evaluation and professional learning in a large urban district in the Northeast. REL researchers examined the types of professional learning activities teachers reported they participated in, the alignment of the reported activities with what evaluators prescribed, and whether evaluation ratings improved from one academic year to the next. The study found that teachers received written feedback across all standards of the evaluation rubric. Each prescription tended to include one or two recommended professional activities, and more of these activities were professional practice activities, such as independent work to improve instruction, than professional development activities, such as courses or workshops. Teachers reported participating in more professional activities for the instruction-based standards than for the non-instruction-based standards. For all standards, fewer than 40 percent of teachers reported participating in all the activities their evaluator recommended. While further work may be needed to strengthen the connection between teacher evaluation and a comprehensive system of teacher support and development, this study takes the first step in illustrating the need for coherence among these related systems.
|REL 2016101||Stated Briefly: Redesigning Teacher Evaluation: Lessons from a Pilot Implementation
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. REL Northeast and Islands, in collaboration with the Northeast Educator Effectiveness Research Alliance and the New Hampshire Department of Education, conducted a study of the implementation of new teacher evaluation systems in New Hampshire's School Improvement Grant schools. While the basic system features are similar across district plans, the specifics of these features vary considerably by district. Further, district fidelity to the plans, as measured by the exposure of teachers to different features of the evaluation system, ranged from moderate to high. Finally, researchers identified several factors related to implementation: (1) capacity of administrators to conduct evaluations; (2) initial and ongoing evaluator training; (3) the introduction and design of student learning objectives; and (4) the professional climate of schools, including the support of the new system by teachers and evaluators.
|REL 2016102||A Descriptive Study of the Pilot Implementation of Student Learning Objectives in Arizona and Utah
Approximately 30 states are now adopting teacher evaluation policies that include student learning objectives (SLOs), which are classroom-specific student test growth targets set by teachers and approved (and scored) by principals. Today state and district leaders are trying to determine the appropriate level of guidance and oversight to provide in support of this work. This study describes results of the pilot implementation of SLOs in two states—Arizona (with 363 teachers) and Utah (with 82 teachers)—that were implementing SLOs with the same aims: to positively affect student achievement and to fulfill the state's required student-accountability component for teacher evaluations. Findings indicated that, in their SLOs, Arizona teachers tended to target student proficiency growth on vendor-developed tests, without including any specifics about instructional strategies, while Utah's pilot teachers (over half of them special education teachers) tended to define their own SLO-focused instructional strategies and/or use their own classroom-level tests or rubrics, with goals geared toward students demonstrating knowledge (through project completion) or a physical skill. Arizona teachers' end-of-year SLO scores from their principals varied, distinguishing high- and low-performing teachers, and teachers with higher SLO scores were also rated higher on classroom observations and student surveys. In contrast, SLO scores varied little in Utah's pilot, with 89 percent of teachers meeting expectations. (Utah's pilot teachers were not rated on other measures.) On end-of-year surveys, Utah pilot teachers generally perceived the SLO process as worthwhile and beneficial to their students and to their own professional growth; however, they did not perceive the SLO pilot as positively affecting their instruction or their knowledge of effective ways to assess students. (A low response rate precluded parallel survey analysis in Arizona.)
|REL 2016100||The Examining Evaluator Feedback Survey
This report presents a survey tool, developed by REL Central at Marzano Research, designed to gather information from teachers about their perceptions of and responses to evaluator feedback. District or state administrators can use this survey to systematically collect teacher perceptions on five key aspects of evaluation feedback: (1) feedback usefulness, (2) feedback accuracy, (3) evaluator credibility, (4) access to resources related to feedback, and (5) teacher response to feedback. The survey tool was developed using an iterative process that included expert review, cognitive interviews, and a pilot study. Evidence regarding the reliability and validity of the survey tool is also reported.
|REL 2015069||A Guide for Monitoring District Implementation of Educator Evaluation Systems
This guide was developed to help states and districts monitor implementation of educator evaluation systems. It describes a three-step process: develop state guidelines for educator evaluation systems; develop data collection methods; and determine adherence criteria and review data against criteria. The process was developed by REL Central, working with personnel from the Missouri Department of Elementary and Secondary Education (MO DESE). MO DESE is using the resulting process and tools to collect data about how districts are implementing educator evaluation systems as aligned to their principles of effective evaluation systems. The guide includes a description of how the process was implemented in Missouri, as well as tools developed to collect information about policies and practices in districts related to their educator evaluation systems. The tools include: Missouri's principles of effective evaluation systems; a Policy Data Collection Checklist; surveys to collect practice data from teachers, principals, mentors, and district administrators; rating guides to assess implementation against criteria; and templates for reporting the results.
|REL 2015030||Redesigning Teacher Evaluation: Lessons from a Pilot Implementation
REL Northeast and Islands, in collaboration with the Northeast Educator Effectiveness Research Alliance and the New Hampshire Department of Education, conducted a study of the implementation of new teacher evaluation systems in New Hampshire's School Improvement Grant (SIG) schools. While the basic system features are similar across district plans, the specifics of these features vary considerably by district. District fidelity to the plans, as measured by the exposure of teachers to different features of the evaluation system, ranged from moderate to high. Researchers identified several factors related to implementation: capacity of administrators to conduct evaluations; initial and ongoing evaluator training; the introduction and design of student learning objectives; and the professional climate of schools, including the support of the new system by teachers and evaluators.
|REL 2015044||Approaches to evaluating teacher preparation programs in seven states
The purpose of this study was to describe how states in the REL Central Region (Colorado, Kansas, Missouri, Nebraska, North Dakota, South Dakota, and Wyoming) evaluate teacher preparation programs and what changes to those evaluations they have planned. Publicly available documents were reviewed and interviews were conducted with state education agency representatives in late 2013. Findings show that all Central Region states have procedures for approval and reauthorization of teacher preparation programs that focus on program design and implementation through reviews of documentation and on-site visits by review teams. Six of seven Central Region states are implementing or have planned changes to state evaluation of teacher preparation programs to focus on the performance of program graduates. As part of these changes to evaluation activities, states are also developing statewide data collection tools, investing in data system development, and exploring new approaches for reporting evaluation findings. More frequent and outcomes-focused approaches to teacher preparation program evaluation have the potential to motivate a change from the current state focus on program accountability to meaningful and ongoing identification of program strengths and weaknesses that can be used to improve programs.
|REL 2015062||Principal and teacher perceptions of implementation of multiple‑measure teacher evaluation systems in Arizona
This study describes how multiple-measure teacher evaluations were put into practice in a set of ten volunteering local education agencies (LEAs) in Arizona. After a key shift in state policy, five "pilot" LEAs implemented the new Arizona Department of Education teacher evaluation model in the 2012/13 school year, while five other "partner" school districts developed their own local models aligned with the new state requirements. Secondary analyses of survey and focus group data from the pilot and partner LEAs indicated that teachers and principals tended to view more favorably the performance assessments (observations of teachers) that have traditionally made up evaluations, and were more skeptical about incorporating results from student assessments and stakeholder surveys. Study participants had mixed perceptions about the new evaluations' initial outcomes, and raised concerns about the time burden involved, inter-rater reliability, and the need for ongoing training and support.
|REL 2015050||Properties of the Multiple Measures in Arizona’s Teacher Evaluation Model
This study explored the relationships among the components of the Arizona Department of Education’s new teacher evaluation model, with a particular focus on the extent to which ratings from the state model’s teacher observation instrument differentiated higher and lower performance. The study used teacher-level evaluation data collected by the Arizona Department of Education from five participating pilot LEAs during the 2012/13 school year. The study relied primarily on descriptive statistics calculated from the results of the different component metrics piloted in these LEAs, as well as analysis of the correlations among these components. Results indicated that teachers’ observation item scores tended to concentrate at the Proficient level (the second-highest score on a four-point scale: Unsatisfactory, Basic, Proficient, and Distinguished), with this level accounting for 62 percent of all observational item scores. In addition, while the strength of the correlation between results from observations and the state’s student academic progress metric was generally low, the correlation varied significantly between high- and low-performing teachers, as well as between certain teacher subgroups.
|REL 2014024||Professional Practice, Student Surveys, and Value-Added: Multiple Measures of Teacher Effectiveness in the Pittsburgh Public Schools
Responding to federal and state prompting, school districts across the country are implementing new teacher evaluation systems that aim to increase the rigor of evaluation ratings, better differentiate effective teaching, and support personnel and staff development initiatives that promote teacher effectiveness and ultimately improve student achievement. Pittsburgh Public Schools (PPS) has been working for the last several years to develop richer and more comprehensive measures of teacher effectiveness in support of a larger effort to promote effective teaching. In partnership with PPS, REL Mid-Atlantic collected data from Pittsburgh on three different types of teacher performance measures: professional practice measures derived from the Danielson Framework for Teaching; Tripod student survey measures; and value-added measures designed to assess each teacher's contribution to student achievement growth. The study found that each of the three types of measures has the potential to differentiate the performance levels of different teachers. Moreover, the three types of measures are positively but modestly correlated with each other, suggesting that they are valid and complementary measures of teacher effectiveness and that they can be combined to produce a measure that is more comprehensive than any single measure. School-level variation in the ratings on the professional practice measure, however, suggests that different principals may have different standards in assigning ratings, which in turn suggests that the measure might be improved by using more than one rater of professional practice for each teacher.
|WWC SSR227||WWC Review of the Report “Incentives, Selection, and Teacher Performance: Evidence from IMPACT”
The study examined the effects of IMPACT, the teacher evaluation system used in the District of Columbia Public Schools (DCPS), on teacher retention and performance. IMPACT assigns each teacher a single performance score based on classroom observations, student achievement, core professionalism, and their contributions to the school. Based on these scores, teachers are assigned one of four ratings: Highly Effective, Effective, Minimally Effective, or Ineffective. Highly Effective teachers receive sizeable increases in compensation, Minimally Effective teachers are scheduled for dismissal if improvement does not occur in 1 year, and Ineffective teachers are immediately dismissed.
|NCEE 20144016||State Requirements for Teacher Evaluation Policies Promoted by Race to the Top
This brief describes the extent to which states required teacher evaluation policies aligned with the Race to the Top (RTT) initiative as of spring 2012. Although teacher evaluation policies appear to be rapidly evolving, documenting policy requirements in the early years of RTT implementation can help inform policymakers about the pace of policy innovation nationally. This brief examines the presence of state-level requirements for certain practices but not the actual district- or school-level implementation of such practices. Key findings, based on interviews with administrators from 49 states and the District of Columbia (12 Round 1 and 2 RTT states, 7 Round 3 RTT states, and 31 non-RTT states), include the following:
|REL 2014013||How States Use Student Learning Objectives in Teacher Evaluation Systems: A Review of State Websites
This report provides an overview of how states define and apply student learning objectives (SLOs) in evaluation systems. The research team conducted a systematic scan of state policies by searching state education agency websites of the 50 states and Washington, D.C. to identify tools, guidance, policies, regulations, and other documents related to the use of SLOs in teacher evaluation systems. The researchers reviewed each relevant document to code the requirements, components, and uses of SLOs, which are summarized in a brief report and a series of searchable tables. The report and tables were produced in response to research questions posed by the Northeast Educator Effectiveness Research Alliance (NEERA), one of eight research alliances working with REL Northeast & Islands.
|NCEE 20144011||State Implementation of Reforms Promoted Under the Recovery Act
This report, based on surveys completed by all 50 SEAs and the District of Columbia (DC) during spring 2011, examines which states were implementing the key education reform strategies promoted by the Recovery Act in 2010-11, the extent to which implementation reflected progress since Recovery Act funds were first available, and states' challenges with implementation. Findings showed variation across the strategies assessed. Almost all SEAs provided guidance for choosing and implementing one of the four school intervention models ED recommended to improve low-performing schools, while only two reported supporting teacher evaluation models that included the complete set of criteria (e.g., use of student achievement gains) that the Recovery Act promoted. Difficulty in measuring student growth for teachers of nontested subjects was the challenge reported by the largest number of SEAs.