Search Results: (16-30 of 876 records)
|NCEE 20164003||Applying to the DC Opportunity Scholarship Program: How Do Parents Rate Their Children's Current Schools at Time of Application and What Do They Want in New Schools?
The DC Opportunity Scholarship Program (OSP), established in 2004, is the only federally funded private school voucher program for low-income parents in the United States. This evaluation brief describes findings using data from more than 2,000 parents of applicants who applied to the program from spring 2011 to spring 2013, following reauthorization under the Scholarships for Opportunity and Results (SOAR) Act of 2011. The application form asked parents to rate elements of their child's current school with which they were satisfied or dissatisfied and to indicate which elements were top priorities for them when looking for a new school. The ratings provide insights about the school-related reasons parents may have had for applying for a voucher and what they were looking for in a new school.
|REL 2016125||How do school districts mentor new teachers?
This report provides a snapshot of school district policies for mentoring new teachers in five REL Central states (Kansas, Missouri, Nebraska, North Dakota, and South Dakota). State education agencies collected survey data from school districts on: who provides mentoring; how mentoring time changes after the first year; whether mentors are expected to observe their mentees; whether mentors are required to get training; whether mentors are paid stipends for their work; and district barriers to implementing mentor programs. Respondents from nearly 1,000 school districts, including superintendents and other district administrative leaders, completed the survey. The report also provides suggested next steps for district and state leaders to consider in light of the survey findings and current research.
|REL 2016119||Stated Briefly: How methodology decisions affect the variability of schools identified as beating the odds
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. Schools that show better academic performance than would be expected given the characteristics of the school and its student population are often described as "beating the odds" (BTO). State and local education agencies often attempt to identify such schools as a means of identifying strategies or practices that might be contributing to the schools' relative success. Key decisions about how to identify BTO schools may affect whether a school makes the BTO list and thereby the identification of practices used to beat the odds. The purpose of this study was to examine how a list of BTO schools might change depending on the methodological choices and the indicators used in the identification process. Three indicators were considered: (1) the type of performance measure used to compare schools, (2) the types of school characteristics used as controls in selecting BTO schools, and (3) the school sample configuration used to pool schools across grade levels. The study applied statistical models involving the different methodologies and indicators and documented how the lists of schools identified as BTO changed across models. Public school and student data from one Midwest state for the 2007/08 through 2010/11 academic years were used to generate BTO school lists. By performing pairwise comparisons among BTO school lists and computing agreement rates among models, the project team was able to gauge the variation in BTO identification results. Results indicate that even when similar specifications were applied across statistical methods, different sets of BTO schools were identified. In addition, for each statistical method used, the lists of BTO schools identified varied with the choice of indicators.
Fewer than half of the schools were identified as BTO in more than one year. The results demonstrate that different technical decisions can lead to different identification results.
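The pairwise comparison described above can be sketched in a few lines. This is a hypothetical illustration, not the study's actual code: the school names are invented, and the study may define "agreement rate" differently (for example, as the share of all schools classified the same way); here it is computed as the share of schools flagged by either model that both models flagged.

```python
# Hypothetical sketch of a pairwise agreement-rate comparison: two model
# specifications each produce a list of schools flagged as "beating the
# odds" (BTO), and the agreement rate is the share of schools flagged by
# either model that appear on both lists (an overlap measure).

def agreement_rate(list_a, list_b):
    """Share of schools flagged by either model that both models flagged."""
    a, b = set(list_a), set(list_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical BTO lists produced by two model specifications.
model_1 = ["School A", "School B", "School C", "School D"]
model_2 = ["School B", "School C", "School E"]

print(agreement_rate(model_1, model_2))  # 2 shared of 5 flagged -> 0.4
```

Low agreement rates across specifications, as the study found, indicate that the BTO label is sensitive to modeling choices rather than a stable property of the schools themselves.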
|REL 2016129||Self-study Guide for Implementing Early Literacy Interventions
The Self-study Guide for Implementing Early Literacy Interventions is a tool to help district and school-based practitioners conduct self-studies for planning and implementing early literacy interventions for kindergarten, grade 1, and grade 2 students. This guide is designed to promote reflection about current strengths and challenges in planning for implementation of early literacy interventions, spark conversations among staff, and identify areas for improvement. This self-study guide provides a template for data collection and guiding questions for discussion.
|REL 2016128||English Learner Student Characteristics and Time to Reclassification: An Example From Washington State
This study examined how long it typically takes English learner students to become proficient in English and how this time differs by student characteristics, such as gender, home language, or initial proficiency in English. The authors analyzed state data for 16,957 English learner students who entered kindergarten between 2005/06 and 2011/12 in seven cohorts. The students attended seven school districts that comprise the Road Map Project, an initiative designed to double the number of students in South King County (Washington) who are on track to graduate from college or earn a career credential by 2020. The study looked at five language groups in the region, each of which comprises at least 3 percent of the total sample: Spanish, Vietnamese, Somali, Russian and Ukrainian combined, and Cantonese and Mandarin Chinese combined. All other languages, 160 in total, were combined into an "other language" category. The findings show that students who entered kindergarten as English learners took a median of 3.8 years to be reclassified by Washington state as former English learners. Those who entered kindergarten with advanced English language proficiency were more likely to be reclassified than English learner students with basic or intermediate English proficiency. Also, female English learner students were more likely to be reclassified than male English learner students. Speakers of Chinese, Vietnamese, and Russian and Ukrainian were more likely to be reclassified than Somali or Spanish speakers. In addition to contributing to the research base, the study findings may be of interest to state education agencies as they create new targets and standards for English language proficiency. 
State agencies may wish to consider taking initial English language proficiency into account when determining appropriate targets for federal accountability measures, for example by setting longer expected times to reclassification and providing additional support to students entering school with basic or intermediate levels of English language proficiency. Many states are also implementing new standards for college and career readiness and overhauling their assessment and accountability systems, both of which involve setting additional targets for English learner students. A better understanding of the factors related to variation in time to proficiency may allow states to establish targets that take particular factors, such as initial English language proficiency, into account.
|REL 2016114||Alaska students' pathways from high school to postsecondary education and employment
This study follows Alaska students in their first six years after high school to describe the pathways they took to postsecondary education and careers. Analyzing data from multiple national and state education and employment sources, the study examines the trajectories of 40,000 students who exited public high schools in Alaska from 2004/05 to 2007/08. The analysis shows that students followed more than 3,000 unique postsecondary pathways. Over two-thirds of the students (67 percent) graduated from high school, and most either enrolled in postsecondary education or entered the workforce in the state immediately after graduation. Female students were more likely than male students, White students more likely than Alaska Native students, and urban students more likely than rural students to enroll in college. However, students from each of these groups with similar academic and personal background characteristics had similar probabilities of enrolling directly after high school. In addition, students who earned a postsecondary degree tended to have higher early-career employment rates and wages than students who did not earn a degree. The findings provide evidence to inform policy and practice related to academic readiness and closing the gap in postsecondary enrollment rates between Alaska Native students and their White peers.
|REL 2016115||Teacher evaluation and professional learning: Lessons from early implementation in a large urban district
REL Northeast and Islands, in collaboration with the Northeast Educator Effectiveness Research Alliance, examined the alignment of teacher evaluation and professional learning in a large urban district in the Northeast. REL researchers examined the types of professional learning activities teachers reported they participated in, the alignment of the reported activities with what evaluators prescribed, and whether evaluation ratings improved from one academic year to the next. The study found that teachers received written feedback across all standards of the evaluation rubric. Each prescription tended to include one or two recommended professional activities, and more of these activities were professional practice activities, such as independent work to improve instruction, than professional development activities, such as courses or workshops. Teachers reported participating in more professional activities for the instruction-based standards than for the non-instruction-based standards. For all standards, less than 40 percent of teachers reported participating in all the activities their evaluator recommended. While further work may be needed to strengthen the connection between teacher evaluation and a comprehensive system of teacher support and development, this study takes the first step in illustrating the need for coherence among these related systems.
|REL 2016117||Benchmarking Education Management Information Systems Across the Federated States of Micronesia
The purpose of this study was to provide information on the current quality of the education management information system (EMIS) in Yap, Federated States of Micronesia, so that data specialists, administrators, and policy makers might identify areas for improvement. As part of a focus group interview, knowledgeable data specialists in Yap responded to 46 questions covering significant areas of their EMIS. The interview protocol, adapted by Regional Educational Laboratory Pacific from the World Bank's System Assessment and Benchmarking for Education Results assessment tool, provides a means for rating aspects of an EMIS using four benchmarking levels: latent (the process or action required to improve the aspect of quality is not in place), emerging (implementation of the process or action is in progress), established (the process or action is in place and meets standards), and mature (the process or action is an example of best practice). Overall, data specialists scored their EMIS as established. They reported that the prerequisites of quality, that is, both the institutional frameworks that govern the information system and data reporting, and the supporting resources, are emerging. They also rated the integrity of education statistics, referring to the professionalism, objectivity, transparency, and ethical standards by which staff operate and statistics are reported, as emerging. Data specialists reported the accuracy and reliability of education statistics within their system to be mature. They reported that the serviceability (the relevance, timeliness, and consistency of data) and accessibility of education data within their system are established.
Results show that data specialists know and can apply sound techniques to validate data and generate statistical reports; however, the system does not ensure that their roles and responsibilities are defined, nor does it provide any assurance, in the form of a legal mandate, that they receive the data they require. Data specialists provide timely services, but the system cannot assure the public that such services are provided independently, or that the public has information regarding internal governmental access to statistics prior to their release. The results of this study provide the Yap State Department of Education and the National Department of Education with information regarding the strengths of the EMIS and the areas that may benefit from improvement efforts through the development of action plans focused on priority areas.
|REL 2016126||Stated Briefly: Who will succeed and who will struggle? Predicting early college success with Indiana’s Student Information System
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. This study examined whether data on Indiana high school students, their high schools, and the Indiana public colleges and universities in which they enroll predict their academic success during the first two years in college. The researchers obtained student-level, school-level, and university-related data from Indiana's state longitudinal data system on the 68,802 students who graduated from high school in 2010. For the 32,564 graduates who first entered a public 2-year or 4-year college, the researchers examined their success during the first two years of college using four indicators of success: (1) enrolling in only nonremedial courses, (2) completion of all attempted credits, (3) persistence to the second year of college, and (4) an aggregation of the other three indicators. Hierarchical linear modeling (HLM) was used to predict students' performance on the indicators using students' high school data, information about their high schools, and information about the colleges they first attended. Half of Indiana's 2010 high school graduates who enrolled in a public Indiana college were successful by all indicators of success. College success differed by student demographic and academic characteristics, by the type of college a student first entered, and by the indicator of college success used. Academic preparation in high school predicted all indicators of college success, and student absences in high school predicted two individual indicators of college success and a composite of college success indicators. While statistical relationships were found, the predictors collectively explained less than 35 percent of the variance. The predictors from this study can be used to identify students who will likely struggle in college, but there will likely be false positive (and false negative) identifications.
Additional research is needed to identify other predictors--possibly non-cognitive predictors--that can improve the accuracy of the identification models.
|REL 2016127||Stated Briefly: Professional experiences of online teachers in Wisconsin: Results from a survey about training and challenges
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. REL Midwest, in partnership with the Midwest Virtual Education Research Alliance, analyzed the results of a survey administered to Wisconsin Virtual School teachers about the training in which they participated related to online instruction, the challenges they encounter while teaching online, and the type of training they thought would help them address those challenges. REL Midwest researchers and Virtual Education Research Alliance members collaborated to develop the survey based on items from the Going Virtual! survey (Dawley et al., 2010; Rice & Dawley, 2007; Rice et al., 2008). Wisconsin Virtual School administered the survey to its 54 teachers, and 49 (91 percent) responded to the survey. The responses of the 48 teachers who indicated that they taught an online course during the 2013/14 or 2014/15 school year were analyzed for the report. Results indicate that all Wisconsin Virtual School teachers reported participating in training or professional development related to online instruction and that more teachers reported participating in training that occurred while teaching online than prior to teaching online or during preservice education. The teachers most frequently reported challenges related to students' perseverance and engagement and indicated that they preferred unstructured professional development to structured professional development to help them address those challenges. Further research is needed to determine what types of professional development and training are most effective in improving teaching practice, especially related to student engagement and perseverance.
|WWC IRDIS528||Unbranded Orton-Gillingham-based Interventions
No studies of unbranded Orton-Gillingham–based strategies that fall within the scope of the Students with Learning Disabilities review protocol meet What Works Clearinghouse (WWC) evidence standards. The lack of studies meeting WWC evidence standards means that, at this time, the WWC is unable to draw any conclusions based on research about the effectiveness or ineffectiveness of unbranded Orton-Gillingham–based strategies for students with learning disabilities.
|WWC IRPE651||First Year Experience Courses for Students in Developmental Education
First year experience courses for students in developmental education are designed to ease the transition to college by providing academic and social development supports. Although course content and focus may vary, most are designed to introduce students to campus resources, provide training in time management and study skills, and address student development issues. First year experience courses, also called success courses, study skills, student development, or new student orientation courses, are often linked with or taken concurrently with developmental courses.
The WWC recently reviewed the research on the impacts of first year experience courses for students in developmental education. One study met WWC group design standards and included 911 freshman college students in developmental education at one technical community college in the United States. Based on this study, the WWC found the practice to have no discernible effects on academic achievement, progress through developmental education, and credit accumulation and persistence for postsecondary students.
|REL 2016111||Measuring school leaders' effectiveness: Findings from a multiyear pilot of Pennsylvania's Framework for Leadership
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. This study examines the accuracy of performance ratings from the Framework for Leadership (FFL), Pennsylvania's tool for evaluating the leadership practices of principals and assistant principals. The study analyzed four key properties of the FFL: score variation, internal consistency, year-to-year stability, and concurrent validity. Score variation was characterized by the percentages of school leaders earning scores in different portions of the rating scale. To measure the internal consistency of the FFL, Cronbach's alpha was calculated for the full FFL and for each of its four categories of leadership practices. Analyses of score stability used data on school leaders' FFL scores across two years to calculate Pearson's correlation coefficient. Concurrent validity was assessed through a regression model for the relationship between school leaders' estimated contributions to student achievement growth and their FFL scores. This report is based primarily on the 2013/14 pilot, in which 517 principals and 123 assistant principals were rated by their supervisors; an interim report examined data from the 2012/13 pilot year. The study finds that the FFL is a reliable measure, with good internal consistency and a moderate level of year-to-year stability in scores. The study also finds evidence of the FFL's concurrent validity: principals with higher scores on the FFL, on average, make larger estimated contributions to student achievement growth. Higher total FFL scores and scores in two of the four FFL domains are significantly or marginally significantly associated with both value-added in all subjects combined and value-added in math specifically.
This evidence of the validity of the FFL sets it apart from other principal evaluation tools: No other measures of principals' professional practice have been shown to be related to principals' effects on student achievement. However, in both pilot years, variation in scores was limited, with most school leaders scoring in the upper third of the rating scale. As the FFL is implemented statewide, continued examination of evidence on its statistical properties, especially the variation in scores, is important.
|REL 2016106||Measuring school leaders' effectiveness: Final report from a multiyear pilot of Pennsylvania's Framework for Leadership
This study examines the accuracy of performance ratings from the Framework for Leadership (FFL), Pennsylvania's tool for evaluating the leadership practices of principals and assistant principals. The study analyzed four key properties of the FFL: score variation, internal consistency, year-to-year stability, and concurrent validity. Score variation was characterized by the percentages of school leaders earning scores in different portions of the rating scale. To measure the internal consistency of the FFL, Cronbach's alpha was calculated for the full FFL and for each of its four categories of leadership practices. Analyses of score stability used data on school leaders' FFL scores across two years to calculate Pearson's correlation coefficient. Concurrent validity was assessed through a regression model for the relationship between school leaders' estimated contributions to student achievement growth and their FFL scores. This report is based primarily on the 2013/14 pilot, in which 517 principals and 123 assistant principals were rated by their supervisors; an interim report examined data from the 2012/13 pilot year. The study finds that the FFL is a reliable measure, with good internal consistency and a moderate level of year-to-year stability in scores. The study also finds evidence of the FFL's concurrent validity: principals with higher scores on the FFL, on average, make larger estimated contributions to student achievement growth. Higher total FFL scores and scores in two of the four FFL domains are significantly or marginally significantly associated with both value-added in all subjects combined and value-added in math specifically. This evidence of the validity of the FFL sets it apart from other principal evaluation tools: No other measures of principals' professional practice have been shown to be related to principals' effects on student achievement.
However, in both pilot years, variation in scores was limited, with most school leaders scoring in the upper third of the rating scale. As the FFL is implemented statewide, continued examination of evidence on its statistical properties, especially the variation in scores, is important.
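The internal-consistency analysis mentioned above can be illustrated with a small worked example. This is a hypothetical sketch, not the study's code: the rating matrix is invented, and the computation below is the standard Cronbach's alpha formula applied to item-level scores.

```python
# Minimal sketch of a Cronbach's alpha computation, using hypothetical
# rubric ratings (rows = school leaders, columns = rubric items).
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)

def cronbach_alpha(scores):
    k = len(scores[0])                       # number of items
    def var(xs):                             # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings for four school leaders on four rubric items.
ratings = [
    [3, 3, 2, 3],
    [2, 2, 2, 3],
    [3, 2, 3, 3],
    [1, 1, 1, 2],
]
print(round(cronbach_alpha(ratings), 2))  # -> 0.92
```

Higher alpha values indicate that the rubric items move together across leaders, which is what the study means by the FFL having "good internal consistency."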
|REL 2016104||Analysis of the stability of teacher-level growth scores from the student growth percentile model
This study, undertaken at the request of the Nevada Department of Education, examined the stability over years of teacher-level growth scores from the Student Growth Percentile (SGP) model, which many states and districts have selected as a measure of effectiveness in their teacher evaluation systems. The authors conducted a generalizability study using three years of data in mathematics and reading for nearly 370 elementary and middle school teachers from Washoe County School District in Reno, Nevada's second-largest district. The study found that in mathematics, half of the variation among teachers' annual growth scores (median SGPs) was attributable to differences among teachers, while half was random or unstable. In reading, 41 percent of the variance in annual scores was attributable to differences among teachers, while 59 percent was due to random or unstable sources. More stable measures of effectiveness can be constructed by averaging multiple years of growth scores for a teacher, and the report provides stability estimates for averages of two, three, and four years of annual scores. The results from this study can also be used to examine the accuracy of judgments of teachers' effectiveness that are based on these scores. Study results suggest that as states examine the properties of their estimates of teacher effectiveness and consider their use in teacher accountability, they may want to be cautious in using such scores for teacher evaluation.
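The gain in stability from averaging multiple years of scores can be illustrated with the standard Spearman-Brown prophecy formula. This is an illustrative sketch, not the report's exact generalizability estimator: it assumes each year's score has the same reliability (the share of variance attributable to true teacher differences reported in the study) and that year-to-year error is independent.

```python
# Illustrative sketch (not the report's exact estimator): the Spearman-Brown
# prophecy formula gives the reliability of an average of k annual scores
# when a single year's score has reliability rho:
#   rho_k = k * rho / (1 + (k - 1) * rho)

def multi_year_reliability(rho, k):
    return k * rho / (1 + (k - 1) * rho)

# Single-year shares of variance attributable to true teacher differences
# reported in the study: 0.50 in mathematics, 0.41 in reading.
for subject, rho in [("mathematics", 0.50), ("reading", 0.41)]:
    for k in (2, 3, 4):
        print(subject, k, round(multi_year_reliability(rho, k), 2))
```

Under these assumptions, a four-year average of mathematics scores would reach a reliability of about 0.80, versus 0.50 for a single year, which is why the report recommends caution when single-year scores are used for teacher evaluation.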