Search Results (1-15 of 28 records)
Pub Number | Title | Date
---|---|---
NCEE 2024004 | Appropriate Identification of Children with Disabilities for IDEA Services: A Report from Recent National Estimates | 6/11/2024
Appropriately identifying children with disabilities--in ways that are timely, comprehensive, and accurate--is critical for ensuring that learners receive the supports they need to meet early milestones and succeed in school. To that end, the Individuals with Disabilities Education Act (IDEA) charges states and school districts with (1) finding all children, birth through age 21, suspected of having a disability; (2) evaluating them to determine if they are eligible for IDEA services; and (3) measuring and addressing racial or ethnic disparities in who is identified. Since IDEA's reauthorization in 2004, there has been greater access to data, more sophisticated approaches to screening for and detecting certain disabilities, an increasingly diverse child population, and new regulations on how to measure disparities in identification. This report examines how state and district practices during the 2019-2020 school year aligned with IDEA's goals of appropriate identification.
NCES 2023015 | Middle Grades Longitudinal Study of 2017–18 (MGLS:2017) Assessment Item Level File (ILF), Read Me | 8/16/2023
This ReadMe provides guidance and documentation for users of the Middle Grades Longitudinal Study of 2017-18 (MGLS:2017) Assessment Item Level File (ILF; NCES 2023-014), which is made available to researchers under a restricted-use license. Other supporting documentation includes MGLS_Math_and_Reading_Items_User_Guide.xlsx, MGLS_MS1_Math_Item_Images.pdf, MGLS_MS2_Math_Item_Images.pdf, MGLS_MS1_MS2_Reading_Sample_Item_Type_Images.pdf, MGLS_MS1_MS2_EF_HeartsFlowers_Instructions.pptx, and MGLS_MS2_EF_Spatial_2-back_Instructions.pptx.
NCES 2023014 | MGLS 2017 Assessment Item Level Files (ILF) | 8/16/2023
The Middle Grades Longitudinal Study of 2017-18 (MGLS:2017) measured student achievement in mathematics and reading along with executive function. The Assessment Item Level File (ILF) contains the item-level data from these direct measures, which can be used in psychometric research to replicate or enhance the scoring in the MGLS:2017 restricted-use file (RUF) or to create new scores. The ILF contains two .csv files representing the two rounds of data collection: the MGLS:2017 Main Study (MS) Base Year (MS1) file and the Main Study Follow-up (MS2) file.
NCES 2023013 | User’s Manual for the MGLS:2017 Data File, Restricted-Use Version | 8/16/2023
This manual provides guidance and documentation for users of the Middle Grades Longitudinal Study of 2017–18 (MGLS:2017) restricted-use school and student data files (NCES 2023-131). An overview of MGLS:2017 is followed by chapters on the study data collection instruments and methods; direct and indirect student assessment data; sample design and weights; response rates; data preparation; data file content, including the composite variables; and the structure of the data file. Appendices include a psychometric report, a guide to scales, field test reports, and school and student file variable listings.
WWC 2023005 | Class-Wide Function-Related Intervention Teams (CW-FIT) Intervention Report | 5/16/2023
This What Works Clearinghouse (WWC) intervention report summarizes the research on the effectiveness of Class-Wide Function-Related Intervention Teams (CW-FIT) and provides detailed information about program implementation and cost. CW-FIT is a classroom management strategy that aims to help teachers improve student behavior and create a positive learning environment. Teachers establish classroom rules, provide instruction on target skills, place students into teams, and then reward teams for demonstrating target skills. Based on eight studies that meet WWC standards, the WWC found strong evidence that CW-FIT positively impacted student behavior and promising evidence that CW-FIT positively impacted teacher practice.
REL 2020039 | The Reliability and Consequential Validity of Two Teacher-Administered Student Mathematics Diagnostic Assessments | 9/14/2020
Several school districts in Georgia currently use two teacher-administered diagnostic assessments of student mathematical knowledge as part of their multi-tiered system of support in grades K-8. These assessments are the Global Strategy Stage (GloSS; New Zealand Ministry of Education, 2012) and the Individual Knowledge Assessment of Number (IKAN; New Zealand Ministry of Education, 2011). However, little is known about the inter-assessor reliability and consequential validity of these assessments. Inter-assessor reliability indicates whether two teachers obtain the same score for a student after administering the test on two occasions, and consequential validity explores perceptions of the value of using the assessments. Rather than rely on occasional testimonials from the field, decisions about using diagnostic assessments across the state should be based on psychometric data from an external source. Districts not currently using the GloSS and IKAN have indicated that they would consider using them to assess students’ current level of mathematical understanding and determine appropriate levels of instruction and intervention, if they were proven to be reliable and valid diagnostic assessments. This study found that the inter-assessor reliability for the GloSS measure and the IKAN Counting Interview is adequate. The inter-assessor reliability for the IKAN Written Assessment (one of the two components of the IKAN) is inadequate, and additional attention must be directed toward improving training for this measure so that reliability can be established. Teachers indicated that they found the data from the GloSS and IKAN assessments more useful than screening data currently in use for guiding decisions about how to provide intervention. Although teachers interviewed in the study’s focus groups expressed strong support for using both assessments, they reported in the study survey that the GloSS is more useful than the IKAN because it addresses students' solution strategies, which most other mathematics measures do not assess. Teachers did express some criticisms of both assessments; for example, they felt the IKAN Written Assessment should be untimed and that the GloSS should include familiar vocabulary.
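For readers unfamiliar with the statistic, inter-assessor reliability of the kind examined above is commonly quantified with a correlation or a chance-corrected agreement index. The sketch below is a minimal illustration in Python with invented scores from two assessors; it is not the study's actual procedure, and the variable names are hypothetical.

```python
# Minimal illustration of inter-assessor reliability (hypothetical data).
# Two teachers assign a GloSS-style stage to the same ten students; we
# compute a correlation and a chance-corrected agreement statistic.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

teacher_a = [3, 4, 2, 5, 4, 3, 2, 4, 5, 3]  # stages from assessor A
teacher_b = [3, 4, 2, 4, 4, 3, 2, 4, 5, 2]  # stages from assessor B

r, _ = pearsonr(teacher_a, teacher_b)            # linear association
kappa = cohen_kappa_score(teacher_a, teacher_b)  # agreement beyond chance

print(f"Pearson r = {r:.2f}, Cohen's kappa = {kappa:.2f}")
```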
REL 2020026 | Relationships between Schoolwide Instructional Observation Scores and Student Academic Achievement and Growth in Low-Performing Schools in Massachusetts | 9/8/2020
The Massachusetts Department of Elementary and Secondary Education (DESE), like other state education agencies and districts, recognizes that a key lever to turning around low-performing schools is the quality of instruction (Hill & Harvey, 2004; Hopkins, Harris, Watling, & Beresford, 1999). As part of the annual monitoring of state-designated low-performing schools, DESE's external monitors use Teachstone's Classroom Assessment Scoring System (CLASS) tool to conduct observations, rating low-performing schools on three domains of instruction: Emotional Support, Classroom Organization, and Instructional Support. This paper examines the relationships between these observation scores and academic growth and achievement within a school, after adjusting for the percentage of students with low incomes and the grade levels in these low-performing schools. Results show statistically significant positive relationships between schoolwide average observation scores for each instructional domain and school-level academic growth in both English language arts (ELA) and mathematics. On a 7-point scale, a 1-point increase in a school's overall observation rating was associated with 4.4 percentile points more student growth in ELA and 5.1 percentile points more in mathematics. For schoolwide achievement, which is measured by the percentage of students who met or exceeded expectations on the state assessment, results show a significant positive relationship between the Classroom Organization domain and ELA schoolwide achievement. There was no significant relationship between observation scores and schoolwide achievement in ELA for any other domain or for mathematics schoolwide achievement. The relationship between observation scores and current achievement levels may be weak because achievement levels may be influenced by many other factors, including students' prior achievement and the economic and social challenges their families face.
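As context for the 4.4- and 5.1-point associations reported above, the sketch below shows the general shape of such a school-level regression: growth regressed on an observation rating while adjusting for the share of low-income students. The data are simulated and the specification is a plausible stand-in, not DESE's actual model.

```python
# Hypothetical school-level regression: growth percentile on a 1-7
# observation rating, adjusting for percent low-income (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60                                      # schools (invented)
rating = rng.uniform(1, 7, n)               # schoolwide CLASS-style score
pct_low_income = rng.uniform(0.3, 0.95, n)  # adjustment covariate
growth = 40 + 4.4 * rating - 10 * pct_low_income + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([rating, pct_low_income]))
model = sm.OLS(growth, X).fit()
print(model.params)  # slope on rating ~ growth points per 1-point rating gain
```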
NCEE 20194007 | Teacher Preparation Experiences and Early Teaching Effectiveness | 9/30/2019
This report examines the frequency of particular teacher preparation experiences and explores their relationships to beginning teachers' effectiveness in improving student test scores once they reach the classroom. The report found differences in how teachers prepare for their certification in the field, as well as evidence that certain experiences are related to improved test scores in the classroom. The findings provide a detailed look at current teacher preparation practices and identify potential avenues for improving them.
WWC IRTP664 | ACT/SAT Test Preparation and Coaching Programs (Transition to College) | 10/4/2016
ACT and SAT test preparation and coaching programs are designed to increase students' scores on college entrance exams. These programs familiarize students with the format of the test, introduce test-taking strategies, and provide practice with the types of problems that may be included on the tests. The WWC reviewed the research on ACT and SAT test preparation and coaching programs and found that they have positive effects on general academic achievement for high school students.
NCEE 2016002 | Can student test scores provide useful measures of school principals' performance? | 9/29/2016
This study assessed the extent to which four principal performance measures based on student test scores--average achievement, school value-added, adjusted average achievement, and adjusted school value-added--accurately reflect principals' contributions to student achievement in future years. Average achievement used information on students' end-of-year achievement without taking into account the students' past achievement; school value-added accounted for students' own past achievement by measuring their growth; and adjusted average achievement and adjusted school value-added credited principals if their schools' average achievement and value-added, respectively, exceeded predictions based on the schools' past performance on those same measures. The study conducted two sets of analyses using Pennsylvania's statewide data on students and principals from 2007/08 to 2013/14. First, using data on 2,424 principals, the study assessed the extent to which ratings from each measure are stable by examining the association between principals' ratings from earlier and later years. Second, using data on 123 principals, the study examined the relationship between the stable part of each principal's rating and his or her contributions to student achievement in future years. Based on results from both analyses, the study simulated each measure's accuracy for predicting principals' contributions to student achievement in the following year. The study found that the two performance measures that did not account for students' past achievement--average achievement and adjusted average achievement--provided no information for predicting principals' contributions to student achievement in the following year. The two performance measures that accounted for students' past achievement--school value-added and adjusted school value-added--provided, at most, a small amount of information for predicting principals' contributions in the following year, with less than one-third of each difference in value-added ratings across principals reflecting differences in their future contributions. These findings suggest that principal evaluation systems should emphasize measures that were found to provide at least some information about principals' future contributions: school value-added or adjusted school value-added. However, study findings also indicate that even the value-added measures will often be inaccurate in identifying principals who will contribute effectively or ineffectively to student achievement in future years. Therefore, states and districts should exercise caution when using these measures to make major decisions about principals and seek to identify nontest measures that can accurately predict principals' future contributions.
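The study's first analysis hinges on rating stability, that is, how strongly a principal's rating in earlier years correlates with the same measure in later years. The simulation below, with invented numbers, illustrates why a noisy measure yields a low year-to-year correlation even when underlying performance is persistent; it sketches the concept rather than the study's method.

```python
# Simulated stability check: each rating = persistent performance + noise.
# The noisier the measure, the weaker the earlier-to-later correlation,
# and the less a rating says about future contributions. Invented data.
import numpy as np

rng = np.random.default_rng(1)
n = 500                                      # principals (invented)
persistent = rng.normal(0, 1, n)             # stable part of performance
early = persistent + rng.normal(0, 1.5, n)   # earlier-years rating
later = persistent + rng.normal(0, 1.5, n)   # later-years rating

print(f"year-to-year correlation = {np.corrcoef(early, later)[0, 1]:.2f}")
```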
REL 2016121 | How current teachers in the Republic of Palau performed on a practice teacher certification examination | 9/28/2016
The purpose of this study was to examine teachers' performance on the Praxis I Pre-Professional Skills Tests® (PPST) in reading, writing, and math, and the relationships between test performance and selected teacher demographic and professional characteristics, in order to further the development and implementation of Palau's Professional Personnel and Certification System. The multiple-choice sections of the practice Praxis I PPST in reading, writing, and math were administered, and results were analyzed using descriptive statistics along with cross-tabulations of test performance by teacher characteristics. Overall, the study found that while scores across subject areas were relatively low, teachers in Palau scored higher in reading than in writing and math. The performance of Palau test takers differed by the language spoken in the home, English proficiency, level of education, years of teaching, and grade levels taught. Respondents with a better command of English performed better on the assessment, a higher level of education was significantly associated with a higher percentage of correct responses, and teachers with fewer than seven years of teaching experience answered slightly more questions correctly in reading, writing, and math than teachers with more years of experience. Finally, teachers at the upper elementary and high school levels performed better on the assessments than teachers at the lower elementary level. The results of this study provide the Palau Research Alliance and Ministry of Education with information that may help establish appropriate passing scores on the Praxis I PPST reading, writing, and math subtests; may be used to create a multiyear plan of sustained improvement in the teacher workforce; may alert Palau leadership to the difficulties inherent in using English-based tests to assess the performance of those who do not have a strong command of the English language; and may be used to guide preservice curricular requirements and indicate the supporting professional development needs of Palau teachers at various grade levels.
REL 2016134 | Stated Briefly: Can scores on an interim high school reading assessment accurately predict low performance on college readiness exams? | 5/3/2016
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. The purpose of this study was to examine the extent to which performance on Florida's interim reading assessment could be used to identify students who may not perform well on the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT) and ACT Plan. Data included the 2013/14 PSAT/NMSQT or ACT Plan results for students in grade 10 from two districts, as well as their grade 9 results on the Florida Assessments for Instruction in Reading—Florida Standards (FAIR-FS). PSAT/NMSQT Critical Reading performance is best predicted in the study sample by a student's reading comprehension skills, while PSAT/NMSQT Mathematics and Writing performance is best predicted by a student's syntactic knowledge. Syntactic knowledge is the most important predictor of ACT Plan English, Reading, and Science in the study sample, whereas reading comprehension skills were found to best predict ACT Plan Mathematics results. Sensitivity rates ranged from 81 percent to 89 percent correct across all of the models. These results provide preliminary evidence that FAIR-FS scores could be used to create an early warning system for performance on both the PSAT/NMSQT and ACT Plan. |
5/3/2016 |
REL 2016124 | Can scores on an interim high school reading assessment accurately predict low performance on college readiness exams? | 4/20/2016
The purpose of this study was to examine how measures of reading comprehension, decoding, and language relate to college-ready performance. The research was motivated by leaders in two Florida school districts interested in the extent to which performance on Florida's interim reading assessment could be used to identify students who may not perform well on the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT) and ACT Plan. One of the districts primarily administers the PSAT/NMSQT and the other primarily administers the ACT Plan. Data included the 2013/14 PSAT/NMSQT or ACT Plan results for students in grade 10 from these districts, as well as their grade 9 results on the Florida Assessments for Instruction in Reading–Florida Standards (FAIR-FS). Classification and regression tree (CART) analyses formed the framework for an early warning system of risk for each PSAT/NMSQT and ACT Plan subject-area assessment. PSAT/NMSQT Critical Reading performance was best predicted in the study sample by a student's reading comprehension skills, while PSAT/NMSQT Mathematics and Writing performance was best predicted by a student's syntactic knowledge. Syntactic knowledge was the most important predictor of ACT Plan English, Reading, and Science in the study sample, whereas reading comprehension skills best predicted ACT Plan Mathematics results. Sensitivity rates (the percentage of students correctly identified as at risk) ranged from 81 percent to 89 percent across all of the CART models. These results provide preliminary evidence that FAIR-FS scores could be used to create an early warning system for performance on both the PSAT/NMSQT and ACT Plan, which could enable districts to identify at-risk students without additional testing burden, time away from instruction, or cost. The analyses should be replicated statewide to verify the stability of the models and the generalizability of the results to the larger Florida student population.
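For readers who want the mechanics, a CART-style early warning flag and its sensitivity can be sketched as below. The features, cutoffs, and data are invented; the study's actual CART models were fit to FAIR-FS scores, which are not reproduced here.

```python
# Hypothetical CART-style early warning system: a shallow decision tree
# flags at-risk students; sensitivity = share of truly at-risk students
# the tree correctly flags. All data are simulated.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

rng = np.random.default_rng(2)
n = 1000
scores = rng.normal(0, 1, (n, 2))  # e.g., comprehension and syntax scores
at_risk = (scores[:, 0] + scores[:, 1] + rng.normal(0, 1, n) < -0.5).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(scores, at_risk)
sensitivity = recall_score(at_risk, tree.predict(scores))  # true-positive rate
print(f"sensitivity = {sensitivity:.0%}")
```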
REL 2014009 | Participation and Pass Rates for College Preparatory Transition Courses in Kentucky | 3/4/2014
The purpose of this study was to examine Kentucky high school students' participation and pass rates in college preparatory transition courses, which are voluntary remedial courses in math and reading offered to grade 12 students in the state. Three groups of students were compared using the population of grade 12 students in Kentucky public schools in school year 2011/12 (n=33,928): students meeting state benchmarks, students approaching state benchmarks (1 to 3 points below), and students performing below state benchmarks (4 or more points below). The courses targeted students who were approaching state benchmarks, but all students were eligible to take them. Results were examined for member school districts of the Southeast/South-Central Educational Cooperative (a research partner with Regional Educational Laboratory Appalachia), a matched comparison group of districts with similar characteristics identified through propensity score matching, and the state as a whole. The study found that most students, even those targeted for the intervention, did not participate in the college preparatory transition courses. Among students who were approaching state benchmarks in math, fewer than one-third (28.1 percent) took transition courses, and among students approaching state benchmarks in reading, fewer than one-tenth (8.0 percent) enrolled in transition courses. Despite the intention of the policy, students from all three groups (meeting, approaching, and below state benchmarks) enrolled in the courses. Statewide pass rates for students who did enroll in transition courses in math or reading were more than 90 percent. Examining participation and pass rates can help schools and districts understand how college preparatory transition courses are used and may be adapted to meet the needs of students targeted for intervention.
REL 2014004 | Comparing Estimates of Teacher Value-Added Based on Criterion- and Norm-Referenced Tests | 1/23/2014
The study used reading and math achievement data for grades 4 and 5 in 46 Indiana schools to compare estimates of teacher value-added from two student assessments: the criterion-referenced Indiana Statewide Testing for Educational Progress Plus (ISTEP+) and a norm-referenced test (NRT) that is widely used in Indiana and other Midwest Region states. The study found a moderate relationship between value-added estimates for a single year based on the ISTEP+ and the NRT, with average yearly correlation coefficients of 0.44 to 0.65. Overall, the findings indicate variability between estimates of teacher value-added from two different tests administered to the same students in the same years. Although specific sources of this variability could not be isolated in this research design, the research literature points to measurement error as an important contributor. The findings suggest that incorporating confidence intervals for value-added estimates reduces the likelihood that teachers' performance will be misclassified because of measurement error.
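The study's two key quantities, the cross-test correlation of value-added estimates and the use of confidence intervals to guard against misclassification, can be illustrated with simulated data as below. The error magnitudes are invented, not taken from the report.

```python
# Simulated comparison of value-added estimates from two tests: both are
# noisy versions of the same underlying teacher effect, so they correlate
# only moderately. A confidence interval then flags which estimates can
# be distinguished from average. All numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
n = 200                                     # teachers (invented)
true_va = rng.normal(0, 1, n)               # underlying teacher effect
va_test1 = true_va + rng.normal(0, 0.8, n)  # estimate from criterion test
va_test2 = true_va + rng.normal(0, 0.8, n)  # estimate from norm-referenced test

print(f"cross-test correlation = {np.corrcoef(va_test1, va_test2)[0, 1]:.2f}")

se = 0.8                                    # illustrative standard error
distinct = (va_test1 - 1.96 * se > 0) | (va_test1 + 1.96 * se < 0)
print(f"share distinguishable from average: {distinct.mean():.0%}")
```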