Search Results: (1-15 of 85 records)
|NFES 2017007||The Forum Guide to Collecting and Using Attendance Data
The Forum Guide to Collecting and Using Attendance Data is designed to help state and local education agency staff improve their attendance data practices – the collection, reporting, and use of attendance data to improve student and school outcomes. The guide offers best practice suggestions and features real-life examples of how attendance data have been used by education agencies. The guide includes a set of voluntary attendance codes that can be used to compare attendance data across schools, districts, and states. The guide also features tip sheets for a wide range of education agency staff who work with attendance data.
|NCES 2017121||Program for International Student Assessment (PISA) 2015 United States Restricted-use Data File
This CD-ROM contains PISA 2015 restricted-use data for the United States. The CD-ROM includes the data file, a codebook, instructions on how to merge with the U.S. PISA 2015 public-use dataset (NCES 2017-120), and a crosswalk to assist in merging with other public datasets, such as the Common Core of Data (CCD) and the Private School Universe Survey (PSS). Because these data files can be used to identify respondent schools, a restricted-use license must be obtained before access to the data is granted.
|NCEE 20184002||Asymdystopia: The threat of small biases in evaluations of education interventions that need to be powered to detect small impacts
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may create a new challenge for researchers: the need to guard against smaller inaccuracies (or "biases"). The purpose of this report is twofold. First, the report examines the potential for small biases to increase the risk of making false inferences as studies are powered to detect smaller impacts, a phenomenon the report calls asymdystopia. The report examines this potential for both randomized controlled trials (RCTs) and studies using regression discontinuity designs (RDDs). Second, the report recommends strategies researchers can use to avoid or mitigate these biases. For RCTs, the report recommends that evaluators either substantially limit attrition rates or offer a strong justification for why attrition is unlikely to be related to study outcomes. For RDDs, new statistical methods can protect against bias from incorrect regression models, but these methods often require larger sample sizes in order to detect small effects.
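The sample-size pressure the report describes can be made concrete with a quick calculation. The sketch below (not from the report; it uses the standard normal-approximation formula for a two-arm trial with equal allocation, 5 percent significance, and 80 percent power) shows how the required per-arm sample size grows as the minimum detectable effect (MDE) shrinks:

```python
# Sketch, not from the report: per-arm sample size for a two-arm RCT
# under the usual normal approximation. n grows as 1/delta^2, so
# powering a study for smaller impacts inflates sample sizes quickly.
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, alpha=0.05, power=0.80):
    """Per-arm n to detect a standardized mean difference of `delta`."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * (z / delta) ** 2)

for mde in (0.20, 0.10, 0.05):
    print(f"MDE = {mde:.2f} SD -> n per arm = {n_per_arm(mde)}")
```

Because n scales with 1/delta^2, a study powered to detect 0.05 standard deviations needs roughly 16 times the sample of one powered for 0.20, which is why biases that are negligible at conventional scales become consequential in such studies.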
|REL 2017214||Workshop on Survey Methods in Education Research: Facilitator's guide and resources
Regional Educational Laboratory Midwest has developed a tool for state and local education agencies to use to organize and conduct training for their staff members who design and conduct surveys. Surveys are often used by education agencies to collect data to assess needs, inform policy decisions, evaluate programs, or respond to legislative mandates. The workshop presentation materials draw from evidence-based research from the field of survey research methodology and offer guidance on designing and administering high-quality surveys. The materials provide practical advice and examples drawn from experiences in developing surveys for local, state, and national education applications. The workshop includes eight modules that describe the steps of survey design and administration—from planning to data collection—and covers the following topics: planning for a survey, exploring existing item sources, writing items, pretesting survey items, sampling, data collection methods, response rates, and focus groups. The facilitator's guidebook includes the goals for each module, considerations for adapting the materials for various purposes, an annotated agenda, and participant handouts (slide decks and accompanying notes, activities, and handouts). Individuals and groups who are developing surveys can use these materials to facilitate workshops, guide a survey project, or ensure that they are adhering to best practices for designing and conducting surveys. Although this guide is intended to help survey researchers in state and local education agencies organize and conduct training for their staff, the materials also can be used as a stand-alone resource for anyone wishing to learn the basics of survey design and administration in education settings.
|REL 2017266||Puerto Rico school characteristics and student graduation: Implications for research and policy
The purpose of the study is to examine the relationship between Puerto Rico’s high school characteristics and student graduation rates. The study examines graduation rates for all public high schools for students who started grade 10 in 2010/11 (in Puerto Rico high school begins in grade 10) and were expected to graduate at the end of the 2012/13 school year, which were the most recent graduation data available. Using data provided by the Puerto Rico Department of Education as well as publicly available data, this study first examined the correlational relationships between graduation rates and two types of variables: student composition characteristics, which are not amenable to change or intervention but help to improve the description of graduation trends in Puerto Rico (for example, the percentage of students who are living in poverty); and school characteristics, which are amenable to change or intervention by officials (for example, the ratio of students per teacher). Regression analyses were used to estimate the conditional association between various characteristics and on-time graduation in Puerto Rico high schools after controlling for other factors. The percentage of students proficient in Spanish language arts was associated with higher graduation rates, after controlling for other school characteristics both overall and by subgroup (males, females, students below poverty, and special education students). After controlling for other characteristics, the percentage of students proficient in mathematics was not associated with graduation rates. Lower student-to-teacher ratios were associated with higher graduation rates for males, students living in poverty, and special education students, after controlling for other school characteristics. The percentage of highly qualified teachers was associated with lower graduation rates overall and for all subgroups except females, after controlling for other school characteristics. 
Correlations between each school characteristic and graduation rates are also presented in the report. The findings from this study provide a starting point for stakeholders in Puerto Rico who are interested in addressing the low rates of graduation in their high schools and communities through the use of data-driven decision-making.
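The regression approach described above can be illustrated with a short sketch. The data below are synthetic and the coefficients are invented for the example (this is not the Puerto Rico dataset); the point is only to show how ordinary least squares estimates the conditional association of one school characteristic with graduation rates while holding the others fixed:

```python
# Illustrative sketch only: synthetic school-level data, not the Puerto
# Rico dataset. OLS recovers the conditional association of each
# characteristic with graduation rates, controlling for the others.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 150
spanish_prof = rng.uniform(20, 90, n_schools)     # % proficient in Spanish language arts
student_teacher = rng.uniform(10, 25, n_schools)  # students per teacher
poverty = rng.uniform(40, 95, n_schools)          # % of students below poverty

# Assumed data-generating process, chosen for the illustration
grad_rate = (40 + 0.4 * spanish_prof - 0.8 * student_teacher
             - 0.1 * poverty + rng.normal(0, 5, n_schools))

# Fit OLS via least squares (intercept plus three controls)
X = np.column_stack([np.ones(n_schools), spanish_prof, student_teacher, poverty])
coef, *_ = np.linalg.lstsq(X, grad_rate, rcond=None)
print(dict(zip(["intercept", "spanish_prof", "student_teacher", "poverty"], coef)))
```

Each estimated coefficient is interpreted as in the report: the change in graduation rate associated with a one-unit change in that characteristic, after controlling for the other characteristics in the model.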
|REL 2017264||Establishing and sustaining networked improvement communities: Lessons from Michigan and Minnesota
The purpose of this report is to share lessons learned by Regional Educational Laboratory (REL) Midwest researchers as they worked with educators in Michigan and Minnesota to establish and sustain two networked improvement communities (NICs). A NIC is a type of collaborative research partnership that uses principles of improvement science within networks to learn from variation across contexts. At the request of the Michigan Department of Education, REL Midwest worked with educators at the school, district, intermediate school district, and state levels to establish the Michigan Focus NIC, with the goal of reducing disparities in student achievement within schools. At the request of the Minnesota Department of Education, REL Midwest worked with educators at the state and regional levels to establish the Minnesota Statewide System of Support NIC. This NIC aimed to improve the supports that the Minnesota Department of Education provides to its six Regional Centers of Excellence, which implement school improvement strategies in the schools in the state with the lowest performance and largest achievement gaps. Although there is practical guidance for how NICs should structure their work, few published accounts describe the process of forming a NIC. Through its experience working with educators to form two NICs, REL Midwest learned that it is important to: build a cohesive team with members representing different types of expertise; reduce uncertainty by clarifying what participation would entail; build engagement by aligning work with ongoing efforts; use activities that are grounded in daily practice to narrow the problem of practice to one that is high leverage and actionable; and embed capacity building into NICs to build additional expertise for using continuous improvement research to address problems of practice. This report offers guidance to researchers and educators as they work to establish and sustain NICs. 
The lessons learned come from efforts to establish NICs in two specific contexts and therefore may not be generalizable to other contexts.
|NCEE 20174023||Descriptive analysis in education: A guide for researchers
Whether the goal is to identify and describe trends and variation in populations, create new measures of key phenomena, or describe samples in studies aimed at identifying causal effects, description plays a critical role in the scientific process in general and education research in particular. Descriptive analysis identifies patterns in data to answer questions about who, what, where, when, and to what extent. This guide describes how to more effectively approach, conduct, and communicate quantitative descriptive analysis. The primary audience for this guide includes members of the research community who conduct and publish both descriptive and causal studies, although it could also be useful for policymakers and practitioners who are consumers of research findings. The guide contains chapters that discuss the important role descriptive analysis plays; how to approach descriptive analysis; how to conduct descriptive analysis; and how to communicate descriptive analysis findings.
|REL 2017228||Summary of research on online and blended learning programs that offer differentiated learning options
This report presents a summary of empirical studies of K-12 online and blended instructional approaches that offer differentiated learning options. In these approaches, instruction is provided in whole or in part online. This report includes studies that examine student achievement outcomes and summarizes the methodology, measures, and findings used in the studies of these instructional approaches. Of the 162 studies that were reviewed, 17 met all inclusion criteria and are summarized in this report. The majority of the studies examined blended instructional approaches, while all approaches provided some means to differentiate the content, difficulty level, and/or pacing of the online content. Among the blended instructional approaches, 45 percent were designed to support differentiation of the in-class component of instruction. The majority of studies examining these approaches compared student performance on common standardized achievement measures between students receiving the instructional approach and those in comparison classrooms or schools. Among the most rigorous studies, statistically significant positive effects were found for four blended instructional approaches.
|NCES 2016332||NCES-Barron's Admissions Competitiveness Index Data Files: 1972, 1982, 1992, 2004, 2008, 2014
The NCES–Barron’s Admissions Competitiveness Index Data Files: 1972, 1982, 1992, 2004, 2008, 2014 (NCES 2015-332) contain the Barron’s college admissions competitiveness selectivity ratings for 1972, 1982, 1992, 2004, 2008, and 2014, along with the NCES Higher Education General Information Survey (HEGIS) FICE ID codes, the Integrated Postsecondary Education Data System (IPEDS) UNITID codes, and the Office of Postsecondary Education OPEID codes of each postsecondary institution included. Also included are the city and state of each institution included in the Barron’s lists. The years selected correspond to the years that students in the longitudinal studies (NLS-72, HS&B, NELS:88, ELS:2002, HSLS:09, and BPS) initially attended the 4-year postsecondary institutions. Each of the six NCES–Barron’s index files is available as a separate worksheet in an Excel workbook file in Excel 1997–2003 compatible format.
|REL 2017209||Stated Briefly: Benchmarking education management information systems across the Federated States of Micronesia
The chief state school officers of the Federated States of Micronesia (FSM) have called for the improvement of the education management information system (EMIS) in each of the four FSM states (Chuuk, Kosrae, Pohnpei, and Yap). To assist the FSM, Regional Educational Laboratory Pacific conducted separate assessments of the quality of the current EMIS in Chuuk, Kosrae, Pohnpei, and Yap. This report integrates the findings from all four states, thereby providing an opportunity for comparison on each aspect of quality. As part of a focus group interview, knowledgeable data specialists in each of the four states of the FSM responded to 46 questions covering significant areas of their EMIS. The interview protocol, adapted by Regional Educational Laboratory Pacific from the World Bank's System Assessment and Benchmarking for Education Results assessment tool, provides a means for rating aspects of an EMIS using four benchmarking levels: latent (the process or action required to improve the aspect of quality is not in place), emerging (the process or action is in the process of being implemented), established (the process or action is in place and meets standards), and mature (the process or action is an example of best practice). Overall, data specialists in all four states rated their systems as either emerging or established.
|REL 2017265||What does it mean when a study finds no effects?
This short brief for education decisionmakers discusses three main factors that may contribute to a finding of no effects: failure of theory, failure of implementation, and failure of research design. It provides readers with questions to ask themselves to better understand 'no effects' findings, and describes other contextual factors to consider when deciding what to do next.
|REL 2016178||Summary of 20 years of research on the effectiveness of adolescent literacy programs and practices
This literature review searched the peer-reviewed literature for studies of reading comprehension instructional practices conducted and published between 1994 and 2014 and summarizes the instructional practices that demonstrated positive or potentially positive effects in scientifically rigorous studies employing experimental designs. Each study was rated by the review team using the What Works Clearinghouse standards. The literature search identified 7,144 studies. Of these, only 111 were eligible for review. Thirty-three of the eligible studies were determined by the study team to have met What Works Clearinghouse standards, and these 33 studies represented 29 different interventions or classroom practices. Twelve of the studies demonstrated positive or potentially positive effects; these 12 studies are described, and the commonalities among them are summarized.
|NCER 20162003||Synthesis of IES-Funded Research on Mathematics: 2002–2013
This synthesis reviews published papers on IES-supported research from projects awarded between 2002 and 2013. The authors identified 28 specific contributions that IES-funded research made to support mathematics learning and teaching from kindergarten through secondary school. The publication organizes the contributions by topic and grade level and each section describes the contributions IES-funded researchers are making in these areas and discusses the projects behind the contributions.
|REL 2016138||Summary of research on the association between state interventions in chronically low-performing schools and student achievement
This report presents a summary of research on the associations between state interventions in chronically low-performing schools and student achievement. The majority of the research focused on one type of state intervention: working with a turnaround partner. In this type of intervention, states assign an individual or team to work with a school to identify strengths and weaknesses, develop a school improvement plan, and provide technical assistance as the school implements the plan. In some cases, additional funding is also provided to support implementation of the school improvement efforts. Most of the studies were descriptive, which limits conclusions about the effectiveness of the interventions. Results of studies of turnaround partner interventions were mixed, and suggested that student achievement was more likely to improve when particular factors were in place in schools such as strong leadership, use of data to guide instruction, and a positive school culture characterized by trust and increased expectations for students. Although researchers sought to include research on a variety of state intervention types, few studies were identified that examined other types of interventions such as school closure, charter conversion, and school redesign.
|NCSER 2015002||The Role of Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research
The field of education is increasingly committed to adopting evidence-based practices. Although randomized experimental designs provide strong evidence of the causal effects of interventions, they are not always feasible. For example, depending upon the research question, it may be difficult for researchers to find the number of children necessary for such research designs (e.g., to answer questions about impacts for children with low-incidence disabilities). A type of experimental design that is well suited for such low-incidence populations is the single-case design (SCD). These designs involve observations of a single case (e.g., a child or a classroom) over time in the absence and presence of an experimenter-controlled treatment manipulation to determine whether the outcome is systematically related to the treatment.
Research using SCD is often omitted from reviews of whether evidence-based practices work because there has not been a common metric to gauge effects as there is in group design research. To address this issue, the National Center for Education Research (NCER) and National Center for Special Education Research (NCSER) commissioned a paper by leading experts in methodology and SCD. Authors William Shadish, Larry Hedges, Robert Horner, and Samuel Odom contend that the best way to ensure that SCD research is accessible and informs policy decisions is to use good standardized effect size measures—indices that put results on a scale with the same meaning across studies—for statistical analyses. The paper includes the authors' recommendations for how SCD researchers can calculate and report standardized between-case effect sizes, the ways in which various audiences (including policymakers) can use these effect sizes to interpret findings, and how the effect sizes can be used across studies to summarize the evidence base for education practices.
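To convey the core idea of a standardized between-case effect size, here is a deliberately simplified sketch. It is not the estimator the paper develops (that statistic involves multilevel modeling and small-sample corrections omitted here); it shows only the underlying principle of scaling each case's baseline-to-treatment contrast by a pooled within-phase standard deviation so results are comparable across cases and studies:

```python
# Deliberately simplified sketch -- NOT the between-case d statistic the
# commissioned paper develops. Shown only to convey the core idea: put
# each case's baseline (A) vs. treatment (B) contrast on a common
# standardized scale by dividing by a pooled within-phase SD.
from statistics import mean, stdev

def naive_between_case_d(cases):
    """cases: list of (phase_a_observations, phase_b_observations), one per case."""
    diffs = [mean(b) - mean(a) for a, b in cases]
    variances = [stdev(phase) ** 2 for a, b in cases for phase in (a, b)]
    pooled_sd = mean(variances) ** 0.5  # rough pooled within-phase SD
    return mean(diffs) / pooled_sd

# Two hypothetical cases, each observed repeatedly in baseline (A) and treatment (B)
cases = [
    ([3, 4, 3, 5], [8, 9, 7, 9]),
    ([2, 3, 2, 4], [6, 7, 6, 8]),
]
print(round(naive_between_case_d(cases), 2))
```

Standardizing in this way is what lets reviewers aggregate SCD results across studies the way group-design meta-analyses aggregate Cohen's d.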