Search Results: (1-15 of 23 records)
Pub Number | Title | Date |
---|---|---|
REL 2018281 | Scientific evidence for the validity of the New Mexico Kindergarten Observation Tool
The purpose of this study was to determine whether there was scientific support for using the New Mexico Kindergarten Observation Tool (KOT) to measure distinct domains of children's knowledge and skills at kindergarten entry. The research team conducted exploratory and confirmatory factor analyses to identify the latent constructs (or domains) measured in the 2015 KOT field test. In addition, internal consistency analyses were conducted and Rasch modeling was applied to examine item functioning and differential item functioning among student subgroups. Correlational analyses were conducted to examine patterns of associations between validated KOT domains and an independent kindergarten assessment—the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). Finally, the research team examined the proportion of classroom-level variance in children's KOT scores by calculating the variance partition coefficient after fitting four-level unconditional models. Factor analyses provided support for a two-domain structure measuring children's knowledge and skills in two distinct areas: (1) cognitive school readiness (or academic knowledge and skills) and (2) noncognitive school readiness (or learning and social skills) as well as support for a one-domain structure measuring children's general school readiness. In addition, these KOT domains were moderately correlated with the DIBELS; the KOT cognitive domain was more strongly correlated with DIBELS than the KOT noncognitive domain. For each of the 26 KOT items, rating scale categories functioned appropriately. Three KOT items demonstrated differential item functioning for student subgroups, which signals potential bias for these items. Additional work is required to determine whether those items are truly unfair to certain student subgroups. Finally, classroom-level variation in children's KOT ratings was found. Although there was not scientific support for generating KOT scores based on the state's six intended domains (Physical Development, Health, and Well-Being; Literacy; Numeracy; Scientific Conceptual Understanding; Self, Family, and Community; Approaches to Learning), the 2015 KOT field test produced valid and reliable measures of children's knowledge and skills across two distinct domains and for one overall score that kindergarten teachers can use to better understand and plan for individual children's knowledge and skills at the beginning of kindergarten. Recommended next steps for New Mexico include replication of construct validity analyses with the most recent version of the KOT, consultation with a content expert review panel to investigate further the three items flagged for potential item bias, and further investigation of the sources of classroom-level variance. |
12/5/2017 |
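As an illustration of the variance-partitioning step described in the REL 2018281 abstract above, here is a minimal sketch that assumes a simplified two-level model (children nested in classrooms) rather than the study's four-level models; the file name and the `score` and `classroom_id` columns are hypothetical.

```python
# Sketch: variance partition coefficient (VPC) from a two-level unconditional model.
# Assumes a DataFrame with hypothetical columns: score (child's KOT score) and
# classroom_id. The study fit four-level models; this simplified version only
# partitions child- and classroom-level variance.
import pandas as pd
import statsmodels.formula.api as smf

kot = pd.read_csv("kot_field_test.csv")  # hypothetical file name

model = smf.mixedlm("score ~ 1", data=kot, groups=kot["classroom_id"])
result = model.fit()

between_classroom = float(result.cov_re.iloc[0, 0])  # classroom-level variance
within_classroom = result.scale                      # child-level (residual) variance
vpc = between_classroom / (between_classroom + within_classroom)
print(f"Share of variance between classrooms: {vpc:.2%}")
```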
REL 2017240 | School discipline data indicators: A guide for districts and schools
Disproportionate rates of suspension for students of color are a local, state, and national concern. In particular, African American, Hispanic/Latino(a), and American Indian students experience suspensions more frequently than their White peers. Disciplinary actions that remove students from classroom instruction undermine their academic achievement and weaken their connection with school. This REL Northwest guide is designed to help educators use data to reduce disproportionate rates of suspension and expulsion based on race or ethnicity. It provides examples of selecting and analyzing data to determine whether racial disproportionality exists in a school or district's discipline practices. The guide also describes how to apply the Plan-Do-Study-Act continuous improvement cycle to inform intervention decisions and monitor progress toward desired outcomes. |
4/14/2017 |
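The REL 2017240 guide above covers selecting and analyzing data to check for racial disproportionality in discipline. One commonly used indicator is a risk ratio; the sketch below assumes a hypothetical student-level file with `race_ethnicity` and `suspended` columns and is not the guide's prescribed method.

```python
# Sketch: suspension risk ratio for one student group relative to all other students.
# Assumes a DataFrame with hypothetical columns: race_ethnicity and
# suspended (1 = received at least one suspension, 0 = not suspended).
import pandas as pd

students = pd.read_csv("student_discipline.csv")  # hypothetical file name

def suspension_risk_ratio(df: pd.DataFrame, group: str) -> float:
    in_group = df["race_ethnicity"] == group
    group_rate = df.loc[in_group, "suspended"].mean()         # risk for the group
    comparison_rate = df.loc[~in_group, "suspended"].mean()   # risk for everyone else
    return group_rate / comparison_rate

for group in students["race_ethnicity"].unique():
    print(f"{group}: risk ratio = {suspension_risk_ratio(students, group):.2f}")
```

A ratio above 1 indicates that the group experiences suspension more often than all other students combined.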
REL 2017263 | Analyzing student-level disciplinary data: A guide for districts
The purpose of this report is to help guide districts in analyzing their own student-level disciplinary data to answer important questions about the use of disciplinary actions. This report, developed in collaboration with the Regional Educational Laboratory Northeast and Islands Urban School Improvement Alliance, provides information to district personnel about how to analyze their student-level data and answer questions about the use of disciplinary actions, such as whether these actions are disproportionately applied to some student subgroups, and whether there are differences in student academic outcomes across the types of disciplinary actions that students receive. This report identifies several considerations that should be accounted for prior to conducting any analysis of student-level disciplinary data. These include defining all data elements to be used in the analysis, establishing rules for transparency (including handling missing data), and defining the unit of analysis. The report also covers examples of descriptive analyses that districts can conduct to answer questions about their use of disciplinary actions. SPSS syntax is provided to assist districts in conducting all of the analyses described in the report. The report will help guide districts to design and carry out their own analyses, or to engage in conversations with external researchers who are studying disciplinary data in their districts. |
3/29/2017 |
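The REL 2017263 report above provides SPSS syntax for its analyses; as a rough illustration of the kind of descriptive analysis it describes, here is a pandas sketch with hypothetical column names (`subgroup`, `action_type`) and a simple way of keeping missing values visible.

```python
# Sketch: descriptive cross-tab of disciplinary action type by student subgroup,
# with missing values made explicit rather than silently dropped.
# Hypothetical columns: student_id, subgroup, action_type (one row per incident).
import pandas as pd

discipline = pd.read_csv("district_discipline.csv")  # hypothetical file name

# Make missing values visible so explicit rules for handling them can be applied.
discipline["action_type"] = discipline["action_type"].fillna("Missing")
discipline["subgroup"] = discipline["subgroup"].fillna("Missing")

# Unit of analysis here is the disciplinary incident, not the student.
counts = pd.crosstab(discipline["subgroup"], discipline["action_type"], margins=True)
percents = pd.crosstab(discipline["subgroup"], discipline["action_type"], normalize="index")
print(counts)
print(percents.round(3))
```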
REL 2017267 | Exploring district-level expenditure-to-performance ratios
Districts across the nation are seeking ways to increase efficiency by maintaining, if not improving, educational outcomes using fewer resources. One proxy for school district efficiency is an expenditure-to-performance ratio, for example a ratio of per pupil expenditures to student academic performance. Using state education department data from an example state in the Regional Educational Laboratory Northeast & Islands Region, researchers created six different expenditure-to-performance ratios and investigated how districts' inclusion in the highest quartile of district rankings varied according to the expenditure and performance measures used to calculate each ratio. By demonstrating the variability in district rankings depending on the ratio being examined, this guide provides states and districts with evidence suggesting that state policymakers should carefully consider which expenditure and performance measures are most relevant to their questions of interest when investigating district efficiency. |
3/22/2017 |
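To make the idea of an expenditure-to-performance ratio concrete, the sketch below builds two illustrative ratios (not the study's six) and checks how top-quartile membership shifts between them; all file and column names are hypothetical.

```python
# Sketch: two illustrative expenditure-to-performance ratios and a check of how
# top-quartile membership changes with the measures used. Hypothetical columns:
# district, per_pupil_total_exp, per_pupil_instructional_exp, pct_proficient_math,
# pct_proficient_reading.
import pandas as pd

districts = pd.read_csv("district_finance_performance.csv")  # hypothetical file name

districts["ratio_total_math"] = (
    districts["per_pupil_total_exp"] / districts["pct_proficient_math"]
)
districts["ratio_instr_reading"] = (
    districts["per_pupil_instructional_exp"] / districts["pct_proficient_reading"]
)

# A lower ratio means fewer dollars per point of proficiency; flag that quartile.
for ratio in ["ratio_total_math", "ratio_instr_reading"]:
    districts[f"top_quartile_{ratio}"] = districts[ratio] <= districts[ratio].quantile(0.25)

overlap = (
    districts["top_quartile_ratio_total_math"]
    & districts["top_quartile_ratio_instr_reading"]
).sum()
print(f"Districts in the top quartile on both ratios: {overlap}")
```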
REL 2017179 | A Guide to Calculating District Expenditure-to-Performance Ratios Using Publicly Available Data
Districts across the nation are seeking ways to increase efficiency by maintaining, if not improving, educational outcomes using fewer resources. One measure that is sometimes used as a proxy for school district efficiency is an expenditure-to-performance ratio, for example a ratio of per pupil expenditures to student academic performance. This guide shows states and districts how to use publicly available data about district-level expenditures and student academic performance to create six expenditure-to-performance ratios. By illustrating the steps needed to calculate different expenditure-to-performance ratios, the guide also provides states and districts with a straightforward strategy for exploring how conclusions about district efficiency may vary, sometimes substantially, depending on which types of expenditures and which measures of performance are considered. The guide is based on a recent Regional Educational Laboratory (REL) Northeast and Islands project conducted for the Northeast Rural Districts Research Alliance and uses state education department data from one state in the REL Northeast and Islands region. Through the illustration of the steps necessary for calculating expenditure-to-performance ratios, the guide provides states and districts with a set of steps they can use to explore districts' resource use. Particularly given the descriptive nature of the expenditure-to-performance ratios, the guide also summarizes both the implications for and the limitations of their use. |
2/14/2017 |
REL 2017167 | A comparison of two approaches to identifying beating-the-odds high schools in Puerto Rico
The Regional Educational Laboratory Northeast and Islands conducted this study using data on public high schools in Puerto Rico from national and territory databases to compare methods for identifying beating-the-odds schools. Schools were identified by two methods, a status method that ranked high-poverty schools based on their current observed performance and an exceeding-achievement-expectations method that ranked high-poverty schools based on the extent to which their actual performance exceeded (or fell short of) their expected performance. Graduation rates, reading proficiency rates, and mathematics proficiency rates were analyzed to identify schools for each method. The identified schools were then compared by method to determine agreement rates—that is, the amount of overlap in schools identified by each method. The report presents comparisons of the groups of schools—those identified by each method and all public high-poverty high schools in Puerto Rico—on descriptive information. Using the two methods—ranking by status and ranking by exceeding-achievement-expectations—two different lists of beating-the-odds schools were identified. The status method identified 17 schools, and the exceeding-achievement-expectations method identified 15 schools. Six schools were identified by both methods. The agreement rate between the two lists of beating-the-odds schools was 38 percent. The analyses suggest that using both methods to identify beating-the-odds schools is the best strategy because high schools identified by both methods demonstrate high levels of absolute performance and appear to be achieving higher levels of graduation rates and percent proficiency than might be expected given their demographics and prior performance. |
12/6/2016 |
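Below is a minimal sketch of the two identification methods compared in the REL 2017167 study above, assuming hypothetical school-level columns (`pct_poverty`, `pct_prior_proficient`, `grad_rate`); the model specification, list sizes, and agreement-rate definition are illustrative, not the study's.

```python
# Sketch: two ways to flag "beating the odds" (BTO) high-poverty schools and the
# agreement rate between the resulting lists. All columns are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

schools = pd.read_csv("pr_high_schools.csv")  # hypothetical file name
high_poverty = schools[schools["pct_poverty"] >= schools["pct_poverty"].median()].copy()

# Status method: rank high-poverty schools on observed graduation rate.
status_list = set(high_poverty.nlargest(15, "grad_rate")["school_id"])

# Exceeding-expectations method: rank on residuals from a model predicting the
# graduation rate from poverty and prior proficiency.
fit = smf.ols("grad_rate ~ pct_poverty + pct_prior_proficient", data=high_poverty).fit()
high_poverty["residual"] = fit.resid
expectations_list = set(high_poverty.nlargest(15, "residual")["school_id"])

both = status_list & expectations_list
agreement = 2 * len(both) / (len(status_list) + len(expectations_list))
print(f"Schools on both lists: {len(both)}; agreement rate: {agreement:.0%}")
```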
REL 2017197 | Strategies for estimating teacher supply and demand using student and teacher data
The Minnesota Department of Education partnered with Regional Educational Laboratory Midwest to redesign the state's teacher supply and demand study in order to increase its utility for stakeholders. This report summarizes the four-step process that was followed in redesigning the study, focusing on the state data sources and analytic methods that can address stakeholders' research questions. Because many data elements used in the study are common across states, the process described may help stakeholders in other states improve their studies of teacher supply and demand. |
12/1/2016 |
REL 2016180 | Predicting math outcomes from a reading screening assessment in grades 3–8
District and state education leaders and teachers frequently use assessments to identify students who are at risk of performing poorly on end-of-year reading achievement tests. This study explores the use of a universal screening assessment of reading skills for the identification of students who are at risk for low achievement in mathematics and provides support for the interpretation of screening scores to inform instruction. The study results demonstrate that a reading screening assessment predicted poor performance on a mathematics outcome (the Stanford Achievement Test) with similar levels of accuracy as screening assessments that specifically measure mathematics skills. These findings indicate that a school district could use an assessment of reading skills to screen for risk in both reading and mathematics, potentially reducing costs and testing time. In addition, this document provides a decision tree framework to support implementation of screening practices and interpretation by teachers. |
9/21/2016 |
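To illustrate how a reading screener's prediction of a mathematics outcome might be checked, here is a sketch with hypothetical column names and cutoffs; the study's actual screener, outcome measure, and cut scores differ.

```python
# Sketch: checking how well a reading screener cutoff identifies students at risk
# on a mathematics outcome. Hypothetical columns: reading_screen_score and
# math_outcome_score; the cutoffs below are illustrative, not the study's values.
import pandas as pd

students = pd.read_csv("screening_scores.csv")  # hypothetical file name

at_risk_math = students["math_outcome_score"] < 400         # illustrative outcome cutoff
flagged_by_reading = students["reading_screen_score"] < 25  # illustrative screen cutoff

true_pos = (flagged_by_reading & at_risk_math).sum()
false_neg = (~flagged_by_reading & at_risk_math).sum()
true_neg = (~flagged_by_reading & ~at_risk_math).sum()
false_pos = (flagged_by_reading & ~at_risk_math).sum()

sensitivity = true_pos / (true_pos + false_neg)  # share of at-risk students flagged
specificity = true_neg / (true_neg + false_pos)  # share of not-at-risk students not flagged
print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```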
REL 2016164 | Survey methods for educators: Analysis and reporting of survey data (part 3 of 3)
Educators at the state and local levels are increasingly using data to inform policy decisions. While student achievement data is often used to inform instructional or programmatic decisions, educators may also need additional sources of data, some of which may not be housed in their existing data systems. Creating and administering surveys is one way to collect such data. However, documentation available to educators about administering surveys may provide insufficient guidance about sampling or analysis approaches. Furthermore, some educators may not have training or experience in survey methods. In response to this need, REL Northeast & Islands created a series of three complementary guides that provide an overview of the survey research process designed for educators. The guides describe (1) survey development, (2) sampling respondents and survey administration, and (3) analysis and reporting of survey data. Part three of this series, "Analysis and Reporting of Survey Data," outlines the following steps, drawn from the research literature: (1) review the analysis plan; (2) prepare and check data files; (3) calculate response rates; (4) calculate summary statistics; and (5) present the results in tables or figures. The guide provides detailed, real-world examples of how these steps have been used in a REL research alliance project. With this guide, educators will be able to analyze and report their own survey data. |
8/2/2016 |
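Here is a brief sketch of steps 3 and 4 from the part 3 guide above (calculating response rates and summary statistics), assuming hypothetical survey files and a single Likert-type item `q1`.

```python
# Sketch: response rate plus summary statistics and a frequency table for one item.
# Hypothetical files and columns: respondent_id and q1 (a Likert item scored 1-5).
import pandas as pd

sample = pd.read_csv("survey_sample.csv")        # everyone the survey was sent to (hypothetical)
responses = pd.read_csv("survey_responses.csv")  # completed surveys (hypothetical)

response_rate = len(responses) / len(sample)
print(f"Response rate: {response_rate:.1%}")

# Summary statistics for one item, then a proportion table suitable for reporting.
print(responses["q1"].describe())
print(responses["q1"].value_counts(normalize=True).sort_index().round(3))
```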
REL 2016160 | Survey methods for educators: Selecting samples and administering surveys (part 2 of 3)
Educators at the state and local levels are increasingly using data to inform policy decisions. While student achievement data is often used to inform instructional or programmatic decisions, educators may also need additional sources of data, some of which may not be housed in their existing data systems. Creating and administering surveys is one way to collect such data. However, documentation available to educators about administering surveys may provide insufficient guidance about sampling or analysis approaches. Furthermore, some educators may not have training or experience in survey methods. In response to this need, REL Northeast & Islands created a series of three complementary guides that provide an overview of the survey research process designed for educators. The guides describe (1) survey development, (2) sampling respondents and survey administration, and (3) analysis and reporting of survey data. Part two of this series, "Sampling Respondents and Survey Administration," outlines the following steps, drawn from the research literature: (1) define the population; (2) specify the sampling procedure; (3) determine the sample size; (4) select the sample; and (5) administer the survey. The guide provides detailed, real-world examples of how these steps have been used in a REL research alliance project. With this guide, educators will be able to develop their own sampling and survey administration plans. |
8/2/2016 |
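As an illustration of step 3 from the part 2 guide above (determining the sample size), the sketch below uses the common formula for estimating a proportion at a 95 percent confidence level with a finite population correction; the population size and margin of error are illustrative, and the guide may recommend a different approach.

```python
# Sketch: sample size for estimating a proportion, with a finite population correction.
# The population size and margin of error below are illustrative assumptions.
import math

population_size = 1200   # e.g., teachers in a district (hypothetical)
margin_of_error = 0.05   # +/- 5 percentage points
z = 1.96                 # 95 percent confidence level
p = 0.5                  # most conservative assumed proportion

n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population sample size
n = n0 / (1 + (n0 - 1) / population_size)            # finite population correction
print(f"Sample at least {math.ceil(n)} of {population_size} population members")
```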
REL 2016163 | Survey methods for educators: Collaborative survey development (part 1 of 3)
Educators at the state and local levels are increasingly using data to inform policy decisions. While student achievement data is often used to inform instructional or programmatic decisions, educators may also need additional sources of data, some of which may not be housed in their existing data systems. Creating and administering surveys is one way to collect such data. However, documentation available to educators about administering surveys may provide insufficient guidance about the survey development process. Furthermore, some educators may not have training or experience in survey methods. In response to this need, REL Northeast & Islands created a series of three complementary guides that provide an overview of the survey research process designed for educators. The guides describe (1) survey development, (2) sampling respondents and survey administration, and (3) analysis and reporting of survey data. Part one of this series, "Collaborative Survey Development," outlines the following steps, drawn from the research literature: (1) identify topics of interest; (2) identify relevant, existing survey items; (3) draft new survey items and adapt existing survey items; (4) review draft survey items with stakeholders and content experts; and (5) refine the draft survey using cognitive interviewing. The guide provides detailed, real-world examples of how these steps have been used in REL research alliance projects. With this guide, educators will be able to develop their own surveys in collaboration with other practitioners, researchers, and content experts. |
8/2/2016 |
REL 2016130 | Decision points and considerations for identifying rural districts that have closed student achievement gaps
Rural districts have long faced challenges in closing the achievement gap between high-poverty students and their more affluent peers. This research brief outlines key decision points and considerations for state and district decisionmakers who wish to identify rural districts that have closed academic achievement gaps. Examining these districts’ experiences with organizational and instructional policies and practices may suggest activities associated with making achievement gains and narrowing achievement gaps that can be systematically investigated. Key issues in the process are highlighted by examples from recent work with rural stakeholder groups in Colorado and Nebraska. |
4/26/2016 |
REL 2015097 | Development and Examination of an Alternative School Performance Index in South Carolina
The purpose of this study was to examine the extent to which the measures that make up each of the three separate accountability indices of school performance in South Carolina could be used to create an overall, reliable index of school performance. Data from public elementary, middle, and high schools in 2012/13 were used in confirmatory factor analysis models designed to estimate the relations between the measures under different specifications. Four different factor models were compared at each school level, beginning with a one-factor model and ending with a bi-factor model. Results from the study suggest that the measures that are currently combined into three separate indices of school performance can instead be combined into a single index of school performance using a bi-factor model. The reliability of the school performance general factor estimated by the bi-factor model ranged from .89 to .95. Using this alternative school performance rating, the study found that approximately 3 percent of elementary schools, 2 percent of middle schools, and 3 percent of high schools were observed to statistically outperform their predicted performance when accounting for the school’s demographic characteristics. These schools, referred to as schools beating the odds, were found in most of the demographic profiles that represent South Carolina schools. The results of this study can inform decisions related to the development of new accountability indices in South Carolina and other states with similar models. |
8/18/2015 |
REL 2015077 | Comparing Methodologies for Developing an Early Warning System: Classification and Regression Tree Model Versus Logistic Regression
The purpose of this report was to explicate the use of logistic regression and classification and regression tree (CART) analysis in the development of early warning systems. It was motivated by state education leaders' interest in maintaining high classification accuracy while simultaneously improving practitioner understanding of the rules by which students are identified as at-risk or not at-risk readers. Logistic regression and CART were compared using data on a sample of grades 1 and 2 Florida public school students who participated in both interim assessments and an end-of-year summative assessment during the 2012/13 academic year. Grade-level analyses were conducted and comparisons between methods were based on traditional measures of diagnostic accuracy, including sensitivity (i.e., proportion of true positives), specificity (proportion of true negatives), positive and negative predictive power, and overall correct classification. Results indicate that CART is comparable to logistic regression, with the results of both methods yielding negative predictive power greater than the recommended standard of .90. Details of each method are provided to assist analysts interested in developing early warning systems using one of the methods. |
2/25/2015 |
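Below is a compact sketch of the comparison described in the REL 2015077 report above: a logistic regression and a classification tree fit to the same (hypothetical) screening data, with negative predictive power reported for each. scikit-learn's DecisionTreeClassifier stands in for CART, and all file and column names are assumptions.

```python
# Sketch: comparing logistic regression and a CART-style classification tree on the
# same screening data and reporting negative predictive power. Hypothetical columns:
# interim_score_1, interim_score_2, at_risk (1 = below benchmark on the summative test).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

data = pd.read_csv("grade1_screening.csv")  # hypothetical file name
X = data[["interim_score_1", "interim_score_2"]]
y = data["at_risk"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "cart": DecisionTreeClassifier(max_depth=3, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    npp = tn / (tn + fn)  # negative predictive power: not flagged and truly not at risk
    print(f"{name}: negative predictive power = {npp:.3f}")
```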
REL 2015071 | How Methodology Decisions Affect the Variability of Schools Identified as Beating the Odds
Schools that show better academic performance than would be expected given characteristics of the school and student populations are often described as "beating the odds" (BTO). State and local education agencies often attempt to identify such schools as a means of identifying strategies or practices that might be contributing to the schools' relative success. Key decisions on how to identify BTO schools may affect whether schools make the BTO list and thereby the identification of practices used to beat the odds. The purpose of this study was to examine how a list of BTO schools might change depending on the methodological choices and selection of indicators used in the BTO identification process. This study considered whether choices of methodologies and types of indicators affect the schools that are identified as BTO. The three indicators were (1) type of performance measure used to compare schools, (2) the types of school characteristics used as controls in selecting BTO schools, and (3) the school sample configuration used to pool schools across grade levels. The study applied statistical models involving the different methodologies and indicators and documented how the lists of schools identified as BTO changed based on the models. Public school and student data from one Midwest state for the 2007/08 through 2010/11 academic years were used to generate BTO school lists. By performing pairwise comparisons among BTO school lists and computing agreement rates among models, the project team was able to gauge the variation in BTO identification results. Results indicate that even when similar specifications were applied across statistical methods, different sets of BTO schools were identified. In addition, for each statistical method used, the lists of BTO schools identified varied with the choice of indicators. Fewer than half of the schools were identified as BTO in more than one year. The results demonstrate that different technical decisions can lead to different identification results. |
2/24/2015 |
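To illustrate the pairwise agreement-rate comparison described in the REL 2015071 study above, here is a small sketch with placeholder school lists; the agreement definition (overlap relative to average list size) is one reasonable choice and may not match the study's exact formula.

```python
# Sketch: pairwise agreement rates between lists of schools identified as beating the
# odds (BTO) under different model specifications. The lists are hypothetical
# placeholders for the lists produced by each method/indicator combination.
from itertools import combinations

bto_lists = {
    "model_a": {"sch01", "sch02", "sch03", "sch07"},
    "model_b": {"sch02", "sch03", "sch09"},
    "model_c": {"sch03", "sch07", "sch09", "sch11"},
}

for (name1, list1), (name2, list2) in combinations(bto_lists.items(), 2):
    overlap = len(list1 & list2)
    agreement = 2 * overlap / (len(list1) + len(list2))  # overlap vs. average list size
    print(f"{name1} vs {name2}: {overlap} schools in common, agreement = {agreement:.0%}")
```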