Search Results: (1-15 of 22 records)
|REL 2021075||Evaluating the Implementation of Networked Improvement Communities in Education: An Applied Research Methods Report
The purpose of this study was to develop a framework that can be used to evaluate the implementation of networked improvement communities (NICs) in public prekindergarten (PK)–12 education and to apply this framework to the formative evaluation of the Minnesota Alternative Learning Center Networked Improvement Community (Minnesota ALC NIC), a partnership between Regional Educational Laboratory Midwest, the Minnesota Department of Education, and five alternative learning centers (ALCs) in Minnesota. The partnership formed with the goal of improving high school graduation rates among students in ALCs. The evaluation team developed and used research tools aligned with the evaluation framework to gather data from 37 school staff in the five ALCs participating in the Minnesota ALC NIC. Data sources included attendance logs, postmeeting surveys (administered following three NIC sessions), a post–Plan-Do-Study-Act survey, continuous improvement artifacts, and event summaries. The evaluation team used descriptive analyses for quantitative and qualitative data, including frequency tables to summarize survey data and coding of artifacts to indicate completion of continuous improvement milestones. Engagement in the Minnesota ALC NIC was strong, as measured by attendance data and post–Plan-Do-Study-Act surveys, but the level of engagement varied by continuous improvement milestone. Based on postmeeting surveys, NIC members typically viewed the NIC as relevant and useful, particularly because of the opportunities to work within teams and develop relationships with staff from other schools. The percentage of meeting attendees agreeing that the NIC increased their knowledge and skills increased over time. Using artifacts from the NIC, the evaluation team determined that most of the teams completed most continuous improvement milestones.
Whereas the post–Plan-Do-Study-Act survey completed by NIC members indicated that sharing among different NIC teams was relatively infrequent, contemporaneous meeting notes recorded specific instances of networking among teams. This report illustrates how the evaluation framework and its aligned set of research tools were applied to evaluate the Minnesota ALC NIC. With slight adaptations, these tools can be used to evaluate the implementation of a range of NICs in public PK–12 education settings. The study has several limitations, including low response rates to postmeeting surveys, reliance on retrospective measures of participation in continuous improvement activities, and the availability of extant data for only a single Plan-Do-Study-Act cycle. The report includes suggestions for overcoming these limitations when applying the NIC evaluation framework to other NICs in public PK–12 education settings.
|REL 2021014||Continuous Improvement in Education: A Toolkit for Schools and Districts
Continuous improvement processes engage key players within a system to focus on a specific problem of practice and, through a series of iterative cycles, test changes, gather data about the changes, and study the potential influence of these changes on outcomes of interest (Bryk et al., 2015). This practitioner-friendly toolkit is designed to provide an overview of continuous improvement processes in education, with a focus on the use of Plan-Do-Study-Act (PDSA) cycles (Langley, Moen, Nolan, Nolan, & Norman, 2009). It also offers related tools and resources that educational practitioners can use to implement continuous improvement processes in their own schools, districts, or agencies.
The toolkit includes a customizable workbook, reproducible templates, and short informational videos. The toolkit begins with an introduction to continuous improvement, followed by customizable content for a series of meetings that guide a team of educators through the process of identifying a common problem, generating a series of evidence-based change practices to test and study, testing those change practices, collecting and analyzing data, and reflecting on and using evidence to identify next steps.
The toolkit leads educational practitioners through a series of PDSA cycles, designed explicitly for an educational setting. Real-world case examples illustrate the process in an educational context.
|NCEE 2020004||How States and Districts Support Evidence Use in School Improvement
The Every Student Succeeds Act encourages educators to use school improvement strategies backed by rigorous research. This snapshot, based on national surveys administered in 2018, describes what guidance states provided on improvement strategies and how districts selected such strategies in lowest-performing schools. Most states pointed districts and schools to evidence on improvement strategies, but few required schools to choose from a list of approved strategies. In turn, most districts reported that evidence of effectiveness was "very important" when choosing improvement strategies, but the evidence districts relied on probably varies in quality.
|NCES 2020047||U.S. PIAAC Skills Map: State and County Indicators of Adult Literacy and Numeracy
The U.S. PIAAC Skills Map allows users to access estimates of adult literacy and numeracy proficiency in all U.S. states and counties through heat maps and summary card displays. It also provides estimates of the precision of its indicators and facilitates statistical comparisons among states and counties.
|NCEE 20194001||Are Ratings from Tiered Quality Rating and Improvement Systems Valid Measures of Program Quality? A Synthesis of Validation Studies from Race to the Top-Early Learning Challenge States
The Race to the Top-Early Learning Challenge grant program (RTT-ELC) promoted the use of rating systems to document and improve the quality of early learning programs. These publications assess the progress made by RTT-ELC states in implementing Tiered Quality Rating and Improvement Systems (TQRIS). The publications are based on interviews with state administrators, administrative TQRIS data on early learning programs and ratings, and validation studies from a subset of RTT-ELC grantee states. The publications find that states made progress in promoting program participation in TQRIS but that most programs did not move from lower to higher rating levels during the study period and that higher TQRIS ratings were generally not related to better developmental outcomes for children.
|NCES 2019084||Technology and K-12 Education: The NCES Ed Tech Equity Initiative
This interactive brochure provides an overview of the Initiative—including its purpose, goal, and target outcomes.
|NCES 2019085||Technology and K-12 Education: Advancing the NCES Ed Tech Equity Initiative
This infographic outlines the key steps NCES is taking to advance the NCES Ed Tech Equity Initiative.
|NCES 2019086||Technology and K-12 Education: The NCES Ed Tech Equity Initiative: Framework
This factsheet describes the factors most critical to informing ed tech equity in the context of K-12 education.
|NCES 2019087||Technology and K-12 Education: The NCES Ed Tech Equity Initiative: Data Collection Priorities
This factsheet outlines the key subtopics NCES will prioritize in its ed tech equity data collections.
|NFES 2017007||The Forum Guide to Collecting and Using Attendance Data
The Forum Guide to Collecting and Using Attendance Data is designed to help state and local education agency staff improve their attendance data practices – the collection, reporting, and use of attendance data to improve student and school outcomes. The guide offers best practice suggestions and features real-life examples of how attendance data have been used by education agencies. The guide includes a set of voluntary attendance codes that can be used to compare attendance data across schools, districts, and states. The guide also features tip sheets for a wide range of education agency staff who work with attendance data.
|REL 2017244||Quality improvement efforts among early childhood education programs participating in Iowa’s Quality Rating System
The purpose of this study was to examine the use and outcomes of quality improvement activities among early childhood education programs participating in the Iowa Quality Rating System (Iowa QRS). The study summarized survey responses from 388 program administrators, describing how staff of programs in Iowa QRS participate in quality improvement activities such as training, coaching, and continuing education. The study also used logistic regression analysis to examine the relationship between quality improvement activities and increases in Iowa QRS ratings, in a subset of 146 programs that received Iowa QRS ratings at two different points in time. Survey responses indicated that almost all programs had staff participate in trainings and a majority of programs offered coaching, but participation in continuing education was less common. The most common topic of professional development was health and safety practices, followed by child development and classroom practices. Analysis results found that Iowa QRS ratings tend to increase across time, and programs that provide key staff with 15 or more training hours per year are more likely to increase ratings over time than programs that do not. The results also suggest that topics covered in professional development are important, with both positive and negative relationships observed between different professional development topics and rating outcomes. The study findings can help Iowa QRS administrators plan and allocate resources to support programs' quality improvement efforts. The findings also can help administrators in other states better understand the types of quality improvement activities to which programs are drawn naturally, as well as factors that may facilitate or impede programs' pursuit of quality.
|REL 2017221||The "I" in QRIS Survey: Collecting data on quality improvement activities for early childhood education programs
Working closely with the Early Childhood Education Research Alliance and Iowa’s Quality Rating System Oversight Committee, Regional Educational Laboratory Midwest developed a new tool—the "I" in QRIS Survey—to help states collect data on the improvement activities and strategies used by early childhood education (ECE) providers participating in a Quality Rating and Improvement System (QRIS). As national attention increasingly has focused on the potential for high-quality early childhood education and care to reduce school-readiness gaps, states developed QRIS to document the quality of ECE programs, support systematic quality improvement efforts, and provide clear information to families about their child care choices. An essential element of a QRIS is the support offered to ECE providers to assist them in improving their quality. Although all the Midwestern states offer support to ECE providers to improve quality as part of their QRIS, states do not collect information systematically about how programs use these quality improvement resources. This survey measures program-level participation in workshops and trainings, coaching, mentoring, activities aimed at increasing the educational attainment of ECE staff, and financial incentives to encourage providers to improve quality. States can use this tool to document the current landscape of improvement activities, to identify gaps or strengths in quality improvement services offered across the state, and to identify promising improvement strategies. The survey is intended for use by state education agencies and researchers interested in the "I" in QRIS and can be adapted for their specific state context.
|NFES 2017017||Forum Guide to Collecting and Using Disaggregated Data on Racial/Ethnic Subgroups
The Forum Guide to Collecting and Using Disaggregated Data on Racial/Ethnic Subgroups discusses strategies for collecting data on more detailed racial/ethnic subgroups than the seven categories used in federal reporting. This guide is intended to help state and district personnel learn more about data disaggregation in the field of education, decide whether this effort might be appropriate for them, and, if so, determine how to implement or continue a data disaggregation project. Access to and analysis of more detailed—that is, disaggregated—data can be a useful tool for improving educational outcomes for small groups of students who otherwise would not be distinguishable in the aggregated data used for federal reporting. Disaggregating student data can help schools and communities plan appropriate programs, decide which interventions to select, use limited resources where they are needed most, and see important trends in educational outcomes and achievement.
|REL 2016159||Stated Briefly: Examining changes to Michigan's early childhood quality rating and improvement system (QRIS)
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. Documenting and improving early childhood program quality is a national priority, leading to a rapid expansion of Quality Rating and Improvement Systems (QRISs). QRISs document and improve the quality of early childhood education programs and provide clear information to families about their child care choices. This study described how early childhood programs were rated in Michigan's QRIS and examined how alternative approaches to calculating ratings affected the number of programs rated at each quality level. Using extant data from 2,390 early childhood education programs that voluntarily participated in Michigan's QRIS, the study found that programs in Michigan self-rated at low quality (level 1) and high quality (level 5) more often than at moderate quality (levels 2 through 4). The study also found that programs with both a self-rating and an independent observation of quality generally had higher self-ratings than observational ratings. The study used simulated data to compare the distributions of ratings under the original QRIS, the newly revised QRIS with relaxed domain requirements, and an approach that used only programs' overall scores. Findings revealed that under the new relaxed system and the total score approach, programs were rated at higher levels of quality than under the original QRIS. Implications of changes to the calculation systems in QRIS are discussed in terms of program ratings and financial consequences for states.
|REL 2016143||Development and implementation of quality rating and improvement systems in Midwest Region states
Recent federal and state policies that recognize the benefits of high-quality early childhood education and care, such as the Race to the Top–Early Learning Challenge and the Preschool for All initiative, have led to a rapid expansion of quality rating and improvement systems (QRISs). Although 49 states implement a QRIS in some form, each system differs in its approach to defining, rating, supporting, and communicating program quality. This study examined QRISs in use across the Midwest Region to describe approaches that states use in developing and implementing a QRIS. The purpose was to create a resource for QRIS administrators to use as they refine their systems over time. Researchers used qualitative techniques, including a review of existing documents and semistructured interviews with state officials in the Midwest Region to document the unique and common approaches to QRIS implementation. Findings suggest that the process of applying for a Race to the Top–Early Learning Challenge grant helped advance the development of a QRIS system, even in states that were not awarded funding. Also, all seven states in the Midwest Region use a variety of direct observations in classrooms to measure quality within each QRIS, despite the logistical and financial burdens associated with observational assessment. Five of the states in the Midwest Region use alternate pathways to rate certain early childhood education programs in their QRIS, most commonly for accredited or state prekindergarten programs. Finally, linking state subsidies and other early childhood education funding to QRIS participation encouraged early childhood education providers to participate in a QRIS. Developing and refining a QRIS is an ongoing process for all states in the Midwest Region and systems are continually evolving. 
Ongoing changes require policymakers, researchers, providers, and families to periodically relearn the exact requirements of their QRISs, but if changes are based on evidence from the field or the changing needs of children and families, revised QRISs may better measure quality and better serve the public. Findings from this report can help inform the decisions of state QRIS administrators as they expand and refine their systems.