Search Results: (1-7 of 7 records)
Pub Number | Title | Date
---|---|---
NCES 2023015 | Middle Grades Longitudinal Study of 2017–18 (MGLS:2017) Assessment Item Level File (ILF), Read Me | 8/16/2023
This ReadMe provides guidance and documentation for users of the Middle Grades Longitudinal Study of 2017–18 (MGLS:2017) Assessment Item Level File (ILF) (NCES 2023-014), which is made available to researchers under a restricted-use license. Other supporting documentation includes MGLS_Math_and_Reading_Items_User_Guide.xlsx, MGLS_MS1_Math_Item_Images.pdf, MGLS_MS2_Math_Item_Images.pdf, MGLS_MS1_MS2_Reading_Sample_Item_Type_Images.pdf, MGLS_MS1_MS2_EF_HeartsFlowers_Instructions.pptx, and MGLS_MS2_EF_Spatial_2-back_Instructions.pptx.
NCES 2023014 | MGLS:2017 Assessment Item Level Files (ILF) | 8/16/2023
The Middle Grades Longitudinal Study of 2017–18 (MGLS:2017) measured student achievement in mathematics and reading, along with executive function. The ILF contains the item-level data from these direct measures, which can be used in psychometric research to replicate or enhance the scoring in the MGLS:2017 restricted-use file (RUF) or to create new scores. The ILF comprises two .csv files representing the two rounds of data collection: the Main Study Base Year (MS1) file and the Main Study Follow-up (MS2) file.
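For researchers planning psychometric work with the ILF, a minimal sketch of loading the two rounds follows. The file names and the student ID column are illustrative assumptions, not the actual names in the restricted-use release; consult the ReadMe (NCES 2023-015) for those.

```python
# Minimal sketch of loading the two MGLS:2017 ILF rounds for item-level analysis.
# File names and the "STUDENT_ID" column are hypothetical placeholders.
import pandas as pd

ms1 = pd.read_csv("MGLS_MS1_ILF.csv")  # Main Study Base Year item responses
ms2 = pd.read_csv("MGLS_MS2_ILF.csv")  # Main Study Follow-up item responses

# Wide-to-long reshape so each row is one (student, item, response) record,
# a convenient layout for IRT estimation or other item-level scoring.
long_ms1 = ms1.melt(id_vars=["STUDENT_ID"], var_name="item", value_name="response")
print(long_ms1.head())
```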
NCES 2023013 | User’s Manual for the MGLS:2017 Data File, Restricted-Use Version | 8/16/2023
This manual provides guidance and documentation for users of the Middle Grades Longitudinal Study of 2017–18 (MGLS:2017) restricted-use school and student data files (NCES 2023-131). An overview of MGLS:2017 is followed by chapters on the study data collection instruments and methods; direct and indirect student assessment data; sample design and weights; response rates; data preparation; data file content, including the composite variables; and the structure of the data file. Appendices include a psychometric report, a guide to scales, field test reports, and school and student file variable listings.
REL 2017191 | The content, predictive power, and potential bias in five widely used teacher observation instruments | 11/1/2016
This study was designed to inform decisions about the selection and use of five widely used teacher observation instruments. The purpose was to explore (1) patterns across instruments in the dimensions of instruction they measure, (2) relationships between teachers' scores in specific dimensions of instruction and their contributions to student achievement growth (value-added), and (3) whether teachers' observation ratings depend on the types of students they are assigned to teach.

Researchers analyzed the content of the Classroom Assessment Scoring System (CLASS), Framework for Teaching (FFT), Protocol for Language Arts Teaching Observations (PLATO), Mathematical Quality of Instruction (MQI), and UTeach Observational Protocol (UTOP). The content analysis then informed correlation analyses using data from the Gates Foundation's Measures of Effective Teaching (MET) project. Participants were 5,409 grade 4-9 math and English language arts (ELA) teachers from six school districts. Observation ratings were correlated with teachers' value-added scores and with three classroom composition measures: the proportions of nonwhite, low-income, and low-achieving students in the classroom.

Results show that eight of ten dimensions of instruction are captured in all five instruments, but the instruments differ in the number and types of elements they assess within each dimension. Observation ratings in all dimensions with quantitative data were significantly but modestly correlated with teachers' value-added scores, with classroom management showing the strongest and most consistent correlations. Among teachers who were randomly assigned to groups of students, observation ratings for some instruments were associated with the proportion of nonwhite and lower achieving students in the classroom, more often in ELA classes than in math classes.

Findings reflect conceptual consistency across the five instruments, but also differences in coverage and in the specific practices assessed within a given dimension. They also suggest that observation scores for classroom management predict teacher contributions to student achievement growth more strongly and consistently than scores in other dimensions. Finally, the results indicate that the types of students assigned to a teacher can affect observation ratings, particularly in ELA classrooms. When selecting among instruments, states and districts should consider which instruments provide the best coverage of priority dimensions, how much weight to attach to various observation scores when evaluating teacher effectiveness, and how they might target resources toward particular classrooms to reduce the likelihood of bias in ratings.
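A short sketch of the kind of correlation analysis this abstract describes, relating one observation dimension to value-added scores, is below. The data file and column names are assumptions for illustration, not the MET project's actual schema.

```python
# Illustrative correlation of one observation dimension with value-added scores.
# "teacher_scores.csv" and its columns are hypothetical, one row per teacher.
import pandas as pd
from scipy import stats

df = pd.read_csv("teacher_scores.csv")

r, p = stats.pearsonr(df["classroom_management_rating"], df["value_added_score"])
print(f"r = {r:.2f}, p = {p:.3f}")  # a modest, significant r would mirror the findings
```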
REL 2016180 | Predicting math outcomes from a reading screening assessment in grades 3–8 | 9/21/2016
District and state education leaders and teachers frequently use assessments to identify students who are at risk of performing poorly on end-of-year reading achievement tests. This study explores the use of a universal screening assessment of reading skills to identify students who are at risk for low achievement in mathematics and provides support for interpreting screening scores to inform instruction. The results demonstrate that a reading screening assessment predicted poor performance on a mathematics outcome (the Stanford Achievement Test) with accuracy similar to that of screening assessments that specifically measure mathematics skills. These findings indicate that a school district could use an assessment of reading skills to screen for risk in both reading and mathematics, potentially reducing costs and testing time. The report also provides a decision tree framework to support teachers' implementation and interpretation of screening practices.
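Screening accuracy of this kind is commonly quantified by classifying students against cut points and comparing the classification with the later outcome. The sketch below shows the standard sensitivity/specificity calculation; the cut points and column names are hypothetical, not the study's actual thresholds.

```python
# Classify students as at risk if the screener score falls below a cut point,
# then compare against the end-of-year outcome. All values here are assumed.
import pandas as pd

df = pd.read_csv("screening_data.csv")
at_risk_pred = df["reading_screener_score"] < 200   # assumed screener cut point
poor_outcome = df["math_outcome_score"] < 450       # assumed outcome cut point

sensitivity = (at_risk_pred & poor_outcome).sum() / poor_outcome.sum()
specificity = (~at_risk_pred & ~poor_outcome).sum() / (~poor_outcome).sum()
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```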
REL 2016126 | Stated Briefly: Who will succeed and who will struggle? Predicting early college success with Indiana’s Student Information System | 2/17/2016
This "Stated Briefly" report summarizes the results of a longer report of the same name. The study examined whether data on Indiana high school students, their high schools, and the Indiana public colleges and universities in which they enroll predict their academic success during the first two years of college. The researchers obtained student-level, school-level, and university-related data from Indiana's state longitudinal data system on the 68,802 students who graduated from high school in 2010. For the 32,564 graduates who first entered a public 2-year or 4-year college, the researchers examined success during the first two years of college using four indicators: (1) enrolling in only nonremedial courses, (2) completing all attempted credits, (3) persisting to the second year of college, and (4) an aggregation of the other three indicators. Hierarchical linear modeling (HLM) was used to predict students' performance on these indicators from their high school records, information about their high schools, and information about the colleges they first attended.

Half of Indiana's 2010 high school graduates who enrolled in a public Indiana college were successful on all indicators. College success differed by student demographic and academic characteristics, by the type of college a student first entered, and by the indicator of college success used. Academic preparation in high school predicted all indicators of college success, and student absences in high school predicted two individual indicators as well as the composite indicator. Although statistical relationships were found, the predictors collectively explained less than 35 percent of the variance. The predictors from this study can be used to identify students who are likely to struggle in college, but some identifications will be false positives (and false negatives). Additional research is needed to identify other predictors, possibly noncognitive ones, that could improve the accuracy of the identification models.
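As a simplified stand-in for the multilevel (HLM) analysis the study used, the sketch below fits a single-level logistic regression for one college-success indicator. Variable names and the data file are hypothetical; the actual study fit hierarchical models that account for students nesting within schools.

```python
# Simplified single-level logistic regression predicting persistence to year 2.
# Column names and "indiana_graduates.csv" are hypothetical placeholders; the
# study itself used hierarchical linear modeling, not this flat specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("indiana_graduates.csv")
model = smf.logit("persisted_to_year2 ~ hs_gpa + absences + entered_4yr", data=df).fit()
print(model.summary())
print(f"pseudo R-squared: {model.prsquared:.2f}")  # cf. <35% of variance explained
```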
REL 2013008 | Evaluating the screening accuracy of the Florida Assessments for Instruction in Reading (FAIR) | 9/10/2013
This report analyzed student performance on the Florida Assessments for Instruction in Reading (FAIR) reading comprehension screen in grades 4-10 and on the Florida Comprehensive Assessment Test (FCAT) 2.0 to determine how well FAIR and 2011 FCAT 2.0 scores predicted 2012 FCAT 2.0 performance. The first key finding was that the FAIR reading comprehension screen was more accurate than the 2011 FCAT 2.0 scores in correctly identifying students as not at risk of failing to meet grade-level standards on the 2012 FCAT 2.0. The second key finding was that using both the FAIR screen and the 2011 FCAT 2.0 lowered the underidentification rate of at-risk students by 12–20 percentage points compared with using the 2011 FCAT 2.0 score alone.
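The underidentification comparison in this abstract amounts to measuring, for each screening rule, the share of truly at-risk students the rule misses. A sketch follows; the cut points and column names are hypothetical, not the study's actual FAIR/FCAT thresholds.

```python
# Compare the underidentification (miss) rate of two screening rules against
# the later outcome. All cut points and column names here are assumed.
import pandas as pd

df = pd.read_csv("fair_fcat_scores.csv")
truly_at_risk = df["fcat_2012"] < 300                      # assumed failing cut point

flag_fcat_only = df["fcat_2011"] < 300                     # rule 1: prior FCAT alone
flag_combined = flag_fcat_only | (df["fair_screen"] < 30)  # rule 2: add FAIR screen

for name, flag in [("FCAT 2011 alone", flag_fcat_only),
                   ("FAIR + FCAT 2011", flag_combined)]:
    missed = (truly_at_risk & ~flag).sum() / truly_at_risk.sum()
    print(f"{name}: underidentification rate = {missed:.1%}")
```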