Search Results: (1-15 of 24 records)
|Overview of the Middle Grades Longitudinal Study of 2017–18 (MGLS:2017): Technical Report
This technical report provides general information about the study and the data files and technical documentation that are available. Information was collected from students, their parents or guardians, their teachers, and their school administrators. The data collection included direct and indirect assessments of middle grades students’ mathematics, reading, and executive function, as well as indirect assessments of socioemotional development in 2018 and again in 2020. MGLS:2017 field staff provided additional information about the school environment through an observational checklist.
|Practical Measurement for Continuous Improvement in the Classroom: A Toolkit for Educators
This toolkit is designed to guide educators in developing and improving practical measurement instruments for use in networked improvement communities (NICs) and other education contexts in which principles of continuous improvement are applied. Continuous improvement includes distinct repeating processes: understanding the problem, identifying specific targets for improvement, determining the change to introduce, implementing the change, and evaluating whether and how the change led to improvements. This toolkit is intended for a team of educators who have already identified specific student learning needs and strategies to improve instruction to address those needs and are ready to test these strategies using continuous improvement processes. The toolkit aims to help the team with the final step in the cycle, which includes collecting data to measure implementation of changes and intended outcomes and using those data to inform future action. Measures for continuous improvement should be closely aligned with student learning goals and with the implementation of the instructional strategies driving the continuous improvement effort, and they should be practical to use in a classroom setting. A team of educators can use this toolkit to proceed through a series of steps to identify what to measure, consider existing instruments, draft instruments, evaluate and refine instruments, plan data collection routines, and plan for data discussions to interpret the data and inform action. Regional Educational Laboratory (REL) Southwest developed the resources in the toolkit in partnership with the Oklahoma State Department of Education team working with the Oklahoma Excel NICs.
|Evaluating the Implementation of Networked Improvement Communities in Education: An Applied Research Methods Report
The purpose of this study was to develop a framework that can be used to evaluate the implementation of networked improvement communities (NICs) in public prekindergarten (PK)–12 education and to apply this framework to the formative evaluation of the Minnesota Alternative Learning Center Networked Improvement Community (Minnesota ALC NIC), a partnership between Regional Educational Laboratory Midwest, the Minnesota Department of Education, and five alternative learning centers (ALCs) in Minnesota. The partnership formed with the goal of improving high school graduation rates among students in ALCs. The evaluation team developed and used research tools aligned with the evaluation framework to gather data from 37 school staff in the five ALCs participating in the Minnesota ALC NIC. Data sources included attendance logs, postmeeting surveys (administered following three NIC sessions), a post–Plan-Do-Study-Act survey, continuous improvement artifacts, and event summaries. The evaluation team used descriptive analyses for quantitative and qualitative data, including frequency tables to summarize survey data and coding artifacts to indicate completion of continuous improvement milestones. Engagement in the Minnesota ALC NIC was strong, as measured by attendance data and post–Plan-Do-Study-Act surveys, but the level of engagement varied by continuous improvement milestones. Based on postmeeting surveys, NIC members typically viewed the NIC as relevant and useful, particularly because of the opportunities to work within teams and develop relationships with staff from other schools. The percentage of meeting attendees agreeing that the NIC increased their knowledge and skills increased over time. Using artifacts from the NIC, the evaluation team determined that most of the teams completed most continuous improvement milestones. 
Whereas the post–Plan-Do-Study-Act survey completed by NIC members indicated that sharing among different NIC teams was relatively infrequent, contemporaneous meeting notes recorded specific instances of networking among teams. This report illustrates how the evaluation framework and its aligned set of research tools were applied to evaluate the Minnesota ALC NIC. With slight adaptations, these tools can be used to evaluate the implementation of a range of NICs in public PK–12 education settings. The study has several limitations, including low response rates to postmeeting surveys, reliance on retrospective measures of participation in continuous improvement activities, and the availability of extant data for only a single Plan-Do-Study-Act cycle. The report includes suggestions for overcoming these limitations when applying the NIC evaluation framework to other NICs in public PK–12 education settings.
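The descriptive analyses described above included frequency tables to summarize survey data. A minimal sketch of that step is shown below; the Likert responses and counts are hypothetical examples, not the study's actual data:

```python
from collections import Counter

# Hypothetical postmeeting survey responses (not actual study data)
responses = [
    "Agree", "Strongly agree", "Agree", "Neutral",
    "Agree", "Strongly agree", "Disagree", "Agree",
]

scale = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
counts = Counter(responses)
total = len(responses)

# Frequency table: count and percentage for each scale point
for level in scale:
    n = counts.get(level, 0)
    print(f"{level:<18} {n:>3}  {100 * n / total:5.1f}%")
```

In practice each survey item would get its own table, and percentages would be reported against the number of respondents to that item rather than to the full roster.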
|Continuous Improvement in Education: A Toolkit for Schools and Districts
Continuous improvement processes engage key players within a system to focus on a specific problem of practice and, through a series of iterative cycles, test changes, gather data about the changes, and study the potential influence of these changes on outcomes of interest (Bryk et al., 2015). This practitioner-friendly toolkit is designed to provide an overview of continuous improvement processes in education, with a focus on the use of Plan-Do-Study-Act (PDSA) cycles (Langley et al., 2009). It also offers related tools and resources that educational practitioners can use to implement continuous improvement processes in their own schools, districts, or agencies.
The toolkit includes a customizable workbook, reproducible templates, and short informational videos. The toolkit begins with an introduction to continuous improvement, followed by customizable content for a series of meetings that guide a team of educators through the process of identifying a common problem, generating a series of evidence-based change practices to test and study, testing those change practices, collecting and analyzing data, and reflecting on and using evidence to identify next steps.
The toolkit leads educational practitioners through a series of PDSA cycles, designed explicitly for an educational setting. Real-world case examples illustrate the process in an educational context.
|How States and Districts Support Evidence Use in School Improvement
The Every Student Succeeds Act encourages educators to use school improvement strategies backed by rigorous research. This snapshot, based on national surveys administered in 2018, describes what guidance states provided on improvement strategies and how districts selected such strategies in the lowest-performing schools. Most states pointed districts and schools to evidence on improvement strategies, but few required schools to choose from a list of approved strategies. In turn, most districts reported that evidence of effectiveness was "very important" when choosing improvement strategies, but the evidence districts relied on likely varied in quality.
|U.S. PIAAC Skills Map: State and County Indicators of Adult Literacy and Numeracy
The U.S. PIAAC Skills Map allows users to access estimates of adult literacy and numeracy proficiency in all U.S. states and counties through heat maps and summary card displays. The Skills Map also includes state- and county-level estimates for six age groups and four education groups, along with estimates of the precision of its indicators, and it facilitates statistical comparisons among states and counties. The user's guide explains the reporting practices and statistical methods needed to use these state and county estimates accurately, and it provides examples of common uses.
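The statistical comparisons the map facilitates can be illustrated with a simple significance test between two independent estimates; the county scores and standard errors below are invented for illustration, not actual PIAAC values, and the map's own comparison method may differ:

```python
import math

# Hypothetical county estimates (not actual PIAAC values):
# average numeracy score and its standard error for two counties
est_a, se_a = 265.0, 3.2
est_b, se_b = 258.0, 2.9

# For two independent estimates, the standard error of the difference
# combines the individual standard errors in quadrature
diff = est_a - est_b
se_diff = math.sqrt(se_a**2 + se_b**2)
z = diff / se_diff
significant = abs(z) > 1.96  # 95% confidence level
print(f"difference={diff:.1f}, z={z:.2f}, significant={significant}")
```

Note that a seven-point gap between the hypothetical counties is not statistically distinguishable here because each estimate carries substantial sampling error, which is exactly why the precision indicators matter when comparing small-area estimates.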
|Are Ratings from Tiered Quality Rating and Improvement Systems Valid Measures of Program Quality? A Synthesis of Validation Studies from Race to the Top-Early Learning Challenge States
The Race to the Top-Early Learning Challenge (RTT-ELC) grant program promoted the use of rating systems to document and improve the quality of early learning programs. These publications assess the progress made by RTT-ELC states in implementing Tiered Quality Rating and Improvement Systems (TQRIS). The publications are based on interviews with state administrators, administrative TQRIS data on early learning programs and ratings, and validation studies from a subset of RTT-ELC grantee states. The publications find that states made progress in promoting program participation in TQRIS, but that most programs did not move from lower to higher rating levels during the study period and that higher TQRIS ratings were generally not related to better developmental outcomes for children.
|Technology and K-12 Education: The NCES Ed Tech Equity Initiative
This interactive brochure provides an overview of the Initiative, including its purpose, goal, and target outcomes.
|Technology and K-12 Education: Advancing the NCES Ed Tech Equity Initiative
This infographic outlines the key steps NCES is taking to advance the NCES Ed Tech Equity Initiative.
|Technology and K-12 Education: The NCES Ed Tech Equity Initiative: Framework
This factsheet describes the factors most critical to informing ed tech equity in the context of K-12 education.
|Technology and K-12 Education: The NCES Ed Tech Equity Initiative: Data Collection Priorities
This factsheet outlines the key subtopics NCES will prioritize in its ed tech equity data collections.
|The Forum Guide to Collecting and Using Attendance Data
The Forum Guide to Collecting and Using Attendance Data is designed to help state and local education agency staff improve their attendance data practices – the collection, reporting, and use of attendance data to improve student and school outcomes. The guide offers best practice suggestions and features real-life examples of how attendance data have been used by education agencies. The guide includes a set of voluntary attendance codes that can be used to compare attendance data across schools, districts, and states. The guide also features tip sheets for a wide range of education agency staff who work with attendance data.
|Quality improvement efforts among early childhood education programs participating in Iowa’s Quality Rating System
The purpose of this study was to examine the use and outcomes of quality improvement activities among early childhood education programs participating in the Iowa Quality Rating System (Iowa QRS). The study summarized survey responses from 388 program administrators, describing how staff of programs in Iowa QRS participate in quality improvement activities such as training, coaching, and continuing education. The study also used logistic regression analysis to examine the relationship between quality improvement activities and increases in Iowa QRS ratings in a subset of 146 programs that received Iowa QRS ratings at two different points in time. Survey responses indicated that almost all programs had staff participate in trainings and a majority of programs offered coaching, but participation in continuing education was less common. The most common topic of professional development was health and safety practices, followed by child development and classroom practices. The analysis found that Iowa QRS ratings tend to increase over time and that programs providing key staff with 15 or more training hours per year are more likely to increase their ratings than programs that do not. The results also suggest that the topics covered in professional development matter, with both positive and negative relationships observed between different professional development topics and rating outcomes. The study findings can help Iowa QRS administrators plan and allocate resources to support programs' quality improvement efforts. The findings also can help administrators in other states better understand the types of quality improvement activities to which programs are naturally drawn, as well as factors that may facilitate or impede programs' pursuit of quality.
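The logistic regression step described above can be sketched in miniature: regress a binary rating-increase outcome on a binary training-hours indicator. The program data below are hypothetical, the fitting routine is a bare-bones gradient descent rather than the study's actual software, and the real model likely included additional covariates:

```python
import math

# Hypothetical program-level data (not the study's actual data):
# x = 1 if key staff received 15+ training hours per year, else 0
# y = 1 if the program's Iowa QRS rating increased between ratings, else 0
data = [(1, 1), (1, 1), (1, 0), (1, 1), (0, 0), (0, 1), (0, 0), (0, 0)]

# Fit a one-predictor logistic regression by gradient descent
b0, b1 = 0.0, 0.0
lr = 0.1
for _ in range(5000):
    g0 = g1 = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))  # predicted probability
        g0 += p - y
        g1 += (p - y) * x
    b0 -= lr * g0
    b1 -= lr * g1

# exp(slope) is the odds ratio for the 15+ training-hours indicator:
# how much the odds of a rating increase are multiplied for trained programs
odds_ratio = math.exp(b1)
print(f"intercept={b0:.2f}, slope={b1:.2f}, odds ratio={odds_ratio:.2f}")
```

With these toy data (3 of 4 trained programs improved versus 1 of 4 untrained), the fitted odds ratio recovers the ratio of sample odds, which is the interpretation a study like this would report.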
|The "I" in QRIS Survey: Collecting data on quality improvement activities for early childhood education programs
Working closely with the Early Childhood Education Research Alliance and Iowa’s Quality Rating System Oversight Committee, Regional Educational Laboratory Midwest developed a new tool—the "I" in QRIS Survey—to help states collect data on the improvement activities and strategies used by early childhood education (ECE) providers participating in a Quality Rating and Improvement System (QRIS). As national attention has increasingly focused on the potential for high-quality early childhood education and care to reduce school-readiness gaps, states developed QRIS to document the quality of ECE programs, support systematic quality improvement efforts, and provide clear information to families about their child care choices. An essential element of a QRIS is the support offered to ECE providers to assist them in improving their quality. Although all the Midwestern states offer support to ECE providers to improve quality as part of their QRIS, states do not collect information systematically about how programs use these quality improvement resources. This survey measures program-level participation in workshops and trainings, coaching, mentoring, activities aimed at increasing the educational attainment of ECE staff, and financial incentives to encourage providers to improve quality. States can use this tool to document the current landscape of improvement activities, to identify gaps or strengths in quality improvement services offered across the state, and to identify promising improvement strategies. The survey is intended for use by state education agencies and researchers interested in the "I" in QRIS and can be adapted for their specific state context.
|Forum Guide to Collecting and Using Disaggregated Data on Racial/Ethnic Subgroups
The Forum Guide to Collecting and Using Disaggregated Data on Racial/Ethnic Subgroups discusses strategies for collecting data on more detailed racial/ethnic subgroups than the seven categories used in federal reporting. This guide is intended to help state and district personnel learn more about data disaggregation in the field of education, decide whether this effort might be appropriate for them, and, if so, how to implement or continue a data disaggregation project. Access to and analysis of more detailed—that is, disaggregated—data can be a useful tool for improving educational outcomes for small groups of students who otherwise would not be distinguishable in the aggregated data used for federal reporting. Disaggregating student data can help schools and communities plan appropriate programs, decide which interventions to select, use limited resources where they are needed most, and see important trends in educational outcomes and achievement.