Search Results: (16-30 of 93 records)
|REL 2017228||Summary of research on online and blended learning programs that offer differentiated learning options
This report presents a summary of empirical studies of K-12 online and blended instructional approaches that offer differentiated learning options. In these approaches, instruction is provided in whole or in part online. The report includes studies that examine student achievement outcomes and summarizes the methodology, measures, and findings of the studies of these instructional approaches. Of the 162 studies reviewed, 17 met all inclusion criteria and are summarized in this report. The majority of the studies examined blended instructional approaches, and all of the approaches provided some means of differentiating the content, difficulty level, and/or pacing of the online content. Among the blended instructional approaches, 45 percent were designed to support differentiation of the in-class component of instruction. The majority of studies of these approaches compared student performance on common standardized achievement measures between students receiving the instructional approach and those in comparison classrooms or schools. Among the most rigorous studies, statistically significant positive effects were found for four blended instructional approaches.
|NCES 2016332||NCES-Barron's Admissions Competitiveness Index Data Files: 1972, 1982, 1992, 2004, 2008, 2014
The NCES-Barron's Admissions Competitiveness Index Data Files: 1972, 1982, 1992, 2004, 2008, 2014 (NCES 2015-332) contain the Barron's college admissions competitiveness selectivity ratings for 1972, 1982, 1992, 2004, 2008, and 2014, along with the NCES Higher Education General Information System (HEGIS) FICE ID codes, Integrated Postsecondary Education Data System (IPEDS) UNITID codes, and Office of Postsecondary Education OPEID codes of each postsecondary institution included. Also included are the city and state of each institution in the Barron's lists. The years selected correspond to the years that students in the longitudinal studies (NLS-72, HS&B, NELS:88, ELS-2000, HSLS:09, and BPS) initially attended the 4-year postsecondary institutions. Each of the six NCES-Barron's index files is available in a separate worksheet in an Excel workbook file in Excel 1997–2003 compatible format.
|REL 2017209||Stated Briefly: Benchmarking education management information systems across the Federated States of Micronesia
The chief state school officers of the Federated States of Micronesia (FSM) have called for the improvement of the education management information system (EMIS) in each of the four FSM states (Chuuk, Kosrae, Pohnpei, and Yap). To assist the FSM, Regional Educational Laboratory Pacific conducted separate assessments of the quality of the current EMIS in Chuuk, Kosrae, Pohnpei, and Yap. This report integrates the findings from all four states, thereby providing an opportunity for comparison on each aspect of quality. As part of a focus group interview, knowledgeable data specialists in each of the four states responded to 46 questions covering significant areas of their EMIS. The interview protocol, adapted by Regional Educational Laboratory Pacific from the World Bank's System Assessment and Benchmarking for Education Results assessment tool, provides a means for rating aspects of an EMIS using four benchmarking levels: latent (the process or action required to improve the aspect of quality is not in place), emerging (the process or action is in the process of being implemented), established (the process or action is in place and meets standards), and mature (the process or action is an example of best practice). Overall, data specialists in all four states rated their systems as either emerging or established.
|REL 2017265||What does it mean when a study finds no effects?
This short brief for education decisionmakers discusses three main factors that may contribute to a finding of no effects: failure of theory, failure of implementation, and failure of research design. It provides readers with questions to ask themselves to better understand 'no effects' findings, and describes other contextual factors to consider when deciding what to do next.
|REL 2016178||Summary of 20 years of research on the effectiveness of adolescent literacy programs and practices
This literature review searched for peer-reviewed studies of reading comprehension instructional practices conducted and published between 1994 and 2014 and summarizes the instructional practices that demonstrated positive or potentially positive effects in scientifically rigorous studies employing experimental designs. Each study was rated by the review team using the What Works Clearinghouse standards. The search of the literature identified 7,144 studies. Of these, only 111 were eligible for review, and 33 were determined by the review team to have met What Works Clearinghouse standards. The 33 studies represented 29 different interventions or classroom practices. Twelve of these studies demonstrated positive or potentially positive effects; these 12 studies are described, and the commonalities among them are summarized.
|NCER 20162003||Synthesis of IES-Funded Research on Mathematics: 2002–2013
This synthesis reviews published papers on IES-supported research from projects awarded between 2002 and 2013. The authors identified 28 specific contributions that IES-funded research made to support mathematics learning and teaching from kindergarten through secondary school. The publication organizes the contributions by topic and grade level, and each section describes the contributions IES-funded researchers are making in these areas and discusses the projects behind the contributions.
|REL 2016138||Summary of research on the association between state interventions in chronically low-performing schools and student achievement
This report presents a summary of research on the associations between state interventions in chronically low-performing schools and student achievement. The majority of the research focused on one type of state intervention: working with a turnaround partner. In this type of intervention, states assign an individual or team to work with a school to identify strengths and weaknesses, develop a school improvement plan, and provide technical assistance as the school implements the plan. In some cases, additional funding is also provided to support implementation of the school improvement efforts. Most of the studies were descriptive, which limits conclusions about the effectiveness of the interventions. Results of studies of turnaround partner interventions were mixed and suggested that student achievement was more likely to improve when particular factors were in place in schools, such as strong leadership, use of data to guide instruction, and a positive school culture characterized by trust and increased expectations for students. Although researchers sought to include research on a variety of state intervention types, few studies were identified that examined other types of interventions such as school closure, charter conversion, and school redesign.
|NCSER 2015002||The Role of Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research
The field of education is increasingly committed to adopting evidence-based practices. Although randomized experimental designs provide strong evidence of the causal effects of interventions, they are not always feasible. For example, depending upon the research question, it may be difficult for researchers to find the number of children necessary for such research designs (e.g., to answer questions about impacts for children with low-incidence disabilities). A type of experimental design that is well suited for such low-incidence populations is the single-case design (SCD). These designs involve observations of a single case (e.g., a child or a classroom) over time in the absence and presence of an experimenter-controlled treatment manipulation to determine whether the outcome is systematically related to the treatment.
Research using SCD is often omitted from reviews of whether evidence-based practices work because there has not been a common metric to gauge effects as there is in group design research. To address this issue, the National Center for Education Research (NCER) and National Center for Special Education Research (NCSER) commissioned a paper by leading experts in methodology and SCD. Authors William Shadish, Larry Hedges, Robert Horner, and Samuel Odom contend that the best way to ensure that SCD research is accessible and informs policy decisions is to use good standardized effect size measures—indices that put results on a scale with the same meaning across studies—for statistical analyses. Included in this paper are the authors' recommendations for how SCD researchers can calculate and report on standardized between-case effect sizes, how these effect sizes can be used by various audiences (including policymakers) to interpret findings, and how they can be used across studies to summarize the evidence base for education practices.
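To give a concrete sense of what "putting results on a scale with the same meaning across studies" involves, the sketch below computes a basic standardized mean difference between the baseline and treatment phases of a hypothetical single case. This is only a simplified illustration of the general idea of a standardized effect size; it is not the between-case estimator the commissioned paper proposes, and the function name and data are invented for the example.

```python
# Simplified illustration of a standardized effect size (Cohen's d style).
# NOTE: hypothetical data and helper; NOT the between-case estimator
# developed by Shadish, Hedges, Horner, and Odom.
from statistics import mean, stdev

def standardized_mean_difference(baseline, treatment):
    """Difference in phase means divided by the pooled standard deviation."""
    n1, n2 = len(baseline), len(treatment)
    s1, s2 = stdev(baseline), stdev(treatment)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(baseline)) / pooled_sd

# Hypothetical observations of one case across an A (baseline) and
# B (treatment) phase of an AB design.
baseline = [4, 5, 3, 4, 5]
treatment = [8, 9, 7, 9, 8]
effect = standardized_mean_difference(baseline, treatment)
print(round(effect, 2))  # → 4.78
```

Because the result is expressed in standard deviation units rather than the raw outcome scale, effects measured on different instruments can, in principle, be compared or aggregated across studies, which is the property the paper's recommended indices are designed to provide for SCD research.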
|NCER 20162000||A Compendium of Math and Science Research Funded by NCER and NCSER: 2002–2013
Between 2002 and 2013, the Institute of Education Sciences (Institute) funded over 300 projects focused on math and science through the National Center for Education Research (NCER) and the National Center for Special Education Research (NCSER). Together, researchers funded by NCER and NCSER have developed or tested more than 215 instructional interventions (e.g., packaged curricula, intervention frameworks, and instructional approaches), 75 professional development programs, 165 educational technologies, and 65 assessments in math and science. NCER commissioned the development of this compendium with the intent to present information in a structured, accessible, and usable manner. This compendium organizes information on the NCER and NCSER projects into two main sections: Mathematics and Science. Within each section, projects are sorted into chapters based on content area, grade level, and intended outcome. The compendium also includes multiple appendices and an index to help readers locate specific types of information (e.g., projects that focus on English language learners, specific interventions).
|NCES 2015118||Documentation for the School Attendance Boundary Survey (SABS): School Year 2013-2014
The School Attendance Boundary Survey (SABS) data file contains school attendance boundaries for regular schools serving grades kindergarten through 12 in the 50 states and the District of Columbia for the 2013-2014 school year. Prior to this survey, a national fabric of attendance boundaries was not freely available to the public. The geography of school attendance boundaries provides new context for researchers who were previously limited to state and district level geography.
|NCEE 20154013||A Guide to Using State Longitudinal Data for Applied Research
State longitudinal data systems (SLDSs) promise a rich source of data for education research. SLDSs contain statewide student data that can be linked over time and to additional data sources for education management, reporting, improvement, and research, and ultimately for informing education policy and practice.
Authored by Karen Levesque, Robert Fitzgerald, and Joy Pfeiffer of RTI International, this guide is intended for researchers who are familiar with research methods but who are new to using SLDS data, are considering conducting SLDS research in a new state environment, or are expanding into new topic areas that can be explored using SLDS data. The guide also may be useful for state staff as background for interacting with researchers and may help state staff and researchers communicate across their two cultures. It highlights the opportunities and constraints that researchers may encounter in using state longitudinal data systems and offers approaches to addressing some common problems.
|REL 2015061||Stated Briefly: What Does the Research Say About Increased Learning Time and Student Outcomes?
REL Appalachia conducted a systematic review of the research evidence on the effects of increased learning time. After screening more than 7,000 studies, REL Appalachia identified 30 that met the most rigorous standards for research. A review of those 30 studies found that increased learning time does not always produce positive results. However, some forms of instruction tailored to the needs of specific types of students were found to improve outcomes for those students. Findings suggest that the impacts of these programs depend on the settings, implementation features, and types of students targeted. This “Stated Briefly” report is a companion piece that summarizes the results of another report entitled The effects of increased learning time on students’ academic and nonacademic outcomes, released on July 9, 2014.
|REL 2014064||Reporting What Readers Need to Know about Education Research Measures: A Guide
This brief provides five checklists to help researchers provide complete information describing (1) their study's measures; (2) data collection training and quality; (3) the study's reference population, study sample, and measurement timing; (4) evidence of the reliability and construct validity of the measures; and (5) missing data and descriptive statistics. The brief includes an example of parts of a report's methods and results section illustrating how the checklists can be used to check the completeness of reporting.
|REL 2014014||Developing a Coherent Research Agenda: Lessons from the REL Northeast & Islands Research Agenda Workshops
This report describes the approach that REL Northeast and Islands (REL-NEI) used to guide its eight research alliances toward collaboratively identifying a shared research agenda. A key feature of their approach was a two-workshop series, during which alliance members created a set of research questions on a shared topic of education policy and/or practice. This report explains how REL-NEI conceptualized and organized the workshops, planned the logistics, overcame geographic distance among alliance members, developed and used materials (including modifications for different audiences and for a virtual platform), and created a formal research agenda after the workshops. The report includes links to access the materials used for the workshops, including facilitator and participant guides and slide decks.
|REL 2014051||Going public: Writing About Research in Everyday Language
This brief describes approaches that writers can use to make impact research more accessible to policy audiences. It emphasizes three techniques: making concepts as simple as possible, focusing on what readers need to know, and reducing possible misinterpretations. A glossary of common concepts is included showing the approaches applied to a range of concepts common to impact research, such as ‘regression models’ and ‘effect sizes.’