
Search Results: (1-15 of 24 records)

 Pub Number  Title  Date
WWC 2023005 Class-Wide Function-Related Intervention Teams (CW-FIT) Intervention Report
This What Works Clearinghouse (WWC) intervention report summarizes the research on the effectiveness of Class-Wide Function-Related Intervention Teams (CW-FIT) and provides detailed information about program implementation and cost. CW-FIT is a classroom management strategy that aims to help teachers improve student behavior and create a positive learning environment. Teachers establish classroom rules, provide instruction on target skills, place students into teams, and then reward teams for demonstrating target skills. Based on eight studies that meet standards, the WWC found strong evidence that CW-FIT positively impacted student behavior and promising evidence that CW-FIT positively impacted teacher practice.
5/16/2023
REL 2020039 The Reliability and Consequential Validity of Two Teacher-Administered Student Mathematics Diagnostic Assessments
Several school districts in Georgia currently use two teacher-administered diagnostic assessments of student mathematical knowledge as part of their multi-tiered system of support in grades K-8. These assessments are the Global Strategy Stage (GloSS; New Zealand Ministry of Education, 2012) and the Individual Knowledge Assessment of Number (IKAN; New Zealand Ministry of Education, 2011). However, little is known about the inter-assessor reliability and consequential validity of these assessments. Inter-assessor reliability indicates whether two teachers obtain the same score for a student after administering the test on two occasions, and consequential validity explores perceptions of the value of using the assessments. Rather than rely on occasional testimonials from the field, decisions about using diagnostic assessments across the state should be based on psychometric data from an external source. Districts not currently using the GloSS and IKAN have indicated that they would consider using them to assess students’ current level of mathematical understanding and determine appropriate levels of instruction and intervention, if they were proven to be reliable and valid diagnostic assessments. This study found that the inter-assessor reliability for the GloSS measure and the IKAN Counting Interview is adequate. The inter-assessor reliability for the IKAN Written Assessment (one of the two components of the IKAN) is inadequate, and additional attention must be directed toward improving training for this measure so that reliability can be established. Teachers indicated that they found the data from the GloSS and IKAN assessments more useful than screening data currently in use for guiding decisions about how to provide intervention. Although teachers interviewed in the study’s focus groups expressed strong support for using both assessments, they reported in the study survey that the GloSS is more useful than the IKAN because it addresses students' solution strategies, which most other mathematics measures do not assess. Teachers did express some criticisms of both assessments; for example, they felt the IKAN Written Assessment should be untimed and that the GloSS should include familiar vocabulary.
9/14/2020
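The report above does not spell out which agreement statistic was used for inter-assessor reliability; as a rough illustration only, the following minimal Python sketch computes two common measures, exact percentage agreement and Cohen's kappa, on hypothetical paired GloSS stage scores from two assessors. The score lists and the choice of statistics are assumptions, not the study's actual data or methods.

```python
# Minimal sketch of inter-assessor agreement on hypothetical paired scores.
# The score lists and choice of statistics are illustrative assumptions,
# not the statistics actually reported in REL 2020039.
from sklearn.metrics import cohen_kappa_score

# Hypothetical GloSS stage scores assigned to the same 10 students by two teachers.
assessor_a = [3, 4, 4, 5, 2, 6, 3, 4, 5, 5]
assessor_b = [3, 4, 5, 5, 2, 6, 3, 3, 5, 5]

# Exact percentage agreement: share of students given identical stages.
agreement = sum(a == b for a, b in zip(assessor_a, assessor_b)) / len(assessor_a)

# Cohen's kappa adjusts the observed agreement for chance agreement.
kappa = cohen_kappa_score(assessor_a, assessor_b)

print(f"Exact agreement: {agreement:.2f}")
print(f"Cohen's kappa:   {kappa:.2f}")
```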
REL 2020026 Relationships between Schoolwide Instructional Observation Scores and Student Academic Achievement and Growth in Low‑Performing Schools in Massachusetts
The Massachusetts Department of Elementary and Secondary Education (DESE), like other state education agencies and districts, recognizes that a key lever to turning around low-performing schools is the quality of instruction (Hill & Harvey, 2004; Hopkins, Harris, Watling, & Beresford, 1999). As part of the annual monitoring of state-designated low-performing schools, DESE’s external low-performing school monitors use Teachstone’s Classroom Assessment Scoring System (CLASS) tool to conduct observations. DESE’s external monitors rated low-performing schools on three domains of instruction—Emotional Support, Classroom Organization, and Instructional Support. This paper examines the relationships between these observation scores and academic growth and achievement within a school, after adjusting for the percentage of students with low incomes and the grade levels in these low-performing schools. Results show statistically significant positive relationships between schoolwide average observation scores for each instructional domain and school-level academic growth in both English language arts (ELA) and mathematics. On a 7-point scale, a 1-point increase in a school’s overall observation rating was associated with an increase in student growth of 4.4 percentile points in ELA and 5.1 percentile points in mathematics. For schoolwide achievement, which is measured by the percentage of students who met or exceeded expectations on the state assessment, results show a significant positive relationship between the classroom organization domain and ELA schoolwide achievement. There was no significant relationship between observation scores and schoolwide achievement in ELA for any other domain or for mathematics schoolwide achievement. The relationship between observation scores and current achievement levels may be weak because achievement levels may be influenced by many other factors including students’ prior achievement and the economic and social challenges their families face.
9/8/2020
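As a rough sketch of the kind of school-level adjustment described above, the following Python snippet fits an ordinary least squares model relating a school's ELA growth percentile to its average CLASS observation score while controlling for the share of students from low-income families. The data frame, column names, and covariates are hypothetical assumptions, not DESE's actual model or data.

```python
# Sketch of a school-level OLS model: growth ~ observation score + % low income.
# The data, column names, and covariates are hypothetical assumptions.
import pandas as pd
import statsmodels.api as sm

schools = pd.DataFrame({
    "ela_growth_percentile": [42, 55, 38, 61, 47, 52, 35, 58],
    "class_overall_score":   [3.1, 4.2, 2.8, 4.8, 3.5, 4.0, 2.5, 4.5],  # 7-point CLASS scale
    "pct_low_income":        [0.82, 0.65, 0.90, 0.55, 0.75, 0.70, 0.93, 0.60],
})

X = sm.add_constant(schools[["class_overall_score", "pct_low_income"]])
model = sm.OLS(schools["ela_growth_percentile"], X).fit()

# The coefficient on class_overall_score is the estimated change in growth
# percentile points associated with a 1-point higher observation rating.
print(model.params)
```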
NCEE 20194007 Teacher Preparation Experiences and Early Teaching Effectiveness
This report examines the frequency of particular teacher preparation experiences and explores their relationships to beginning teachers' effectiveness in improving student test scores once they enter the classroom. The report found differences in how teachers prepare for certification in the field and found that certain preparation experiences are related to improved test scores in the classroom. The findings provide a detailed look into current teacher preparation practices and identify potential avenues for improving them.
9/30/2019
WWC IRTP664 ACT/SAT Test Preparation and Coaching Programs Transition to College
ACT and SAT test preparation and coaching programs are designed to increase students' scores on college entrance exams. These programs familiarize students with the format of the test, introduce test-taking strategies, and provide practice with the types of problems that may be included on the tests. The WWC reviewed the research on ACT and SAT test preparation and coaching programs and found that they have positive effects on general academic achievement for high school students.
10/4/2016
NCEE 2016002 Can student test scores provide useful measures of school principals' performance?
This study assessed the extent to which four principal performance measures based on student test scores--average achievement, school value-added, adjusted average achievement, and adjusted school value-added--accurately reflect principals' contributions to student achievement in future years. Average achievement used information on students' end-of-year achievement without taking into account the students' past achievement; school value-added accounted for students' own past achievement by measuring their growth; and adjusted average achievement and adjusted school value-added credited principals if their schools' average achievement and value-added, respectively, exceeded predictions based on the schools' past performance on those same measures. The study conducted two sets of analyses using Pennsylvania's statewide data on students and principals from 2007/08 to 2013/14. First, using data on 2,424 principals, the study assessed the extent to which ratings from each measure are stable by examining the association between principals' ratings from earlier and later years. Second, using data on 123 principals, the study examined the relationship between the stable part of each principal's rating and his or her contributions to student achievement in future years. Based on results from both analyses, the study simulated each measure's accuracy for predicting principals' contributions to student achievement in the following year. The study found that the two performance measures that did not account for students' past achievement--average achievement and adjusted average achievement--provided no information for predicting principals' contributions to student achievement in the following year. The two performance measures that accounted for students' past achievement--school value-added and adjusted school value-added--provided, at most, a small amount of information for predicting principals' contributions in the following year, with less than one-third of each difference in value-added ratings across principals reflecting differences in their future contributions. These findings suggest that principal evaluation systems should emphasize measures that were found to provide at least some information about principals' future contributions: school value-added or adjusted school value-added. However, study findings also indicate that even the value-added measures will often be inaccurate in identifying principals who will contribute effectively or ineffectively to student achievement in future years. Therefore, states and districts should exercise caution when using these measures to make major decisions about principals and seek to identify nontest measures that can accurately predict principals' future contributions.
9/29/2016
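A minimal sketch of the first stability check described above, correlating principals' ratings from earlier and later years, is shown below. The rating arrays are hypothetical, and the study's actual estimation of value-added and the adjusted measures is considerably more involved.

```python
# Sketch of the year-to-year stability check on a principal performance measure.
# The rating arrays are hypothetical, not Pennsylvania data.
import numpy as np

# One rating per principal, from an earlier and a later period.
earlier = np.array([0.12, -0.30, 0.05, 0.40, -0.10, 0.22, -0.05, 0.31])
later   = np.array([0.08, -0.18, 0.10, 0.35, -0.02, 0.15, -0.12, 0.28])

# A high correlation suggests the stable (persistent) part of the measure is large.
stability = np.corrcoef(earlier, later)[0, 1]
print(f"Year-to-year correlation: {stability:.2f}")
```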
REL 2016121 How current teachers in the Republic of Palau performed on a practice teacher certification examination
The purpose of this study was to examine teachers' performance on the Praxis I Pre-Professional Skills Tests® (PPST) in reading, writing, and math, and the relationships between test performance and selected teacher demographic and professional characteristics, in order to further the development and implementation of Palau's Professional Personnel and Certification System. The multiple-choice sections of the practice Praxis I PPST tests of reading, writing, and math were administered and analyzed using descriptive statistics, along with cross-tabulations of test performance by teacher characteristics. Overall, the study found that while scores across subject areas were relatively low, teachers in Palau scored higher in reading than in writing and math. The performance of Palau test takers differed depending upon the language spoken in the home, English proficiency, level of education, years of teaching, and grade levels taught. In addition, respondents with better command of English performed better on the assessment. Level of education attained was significantly associated with a higher percentage of correct responses, and teachers with less than seven years of teaching experience answered slightly more questions correctly in reading, writing, and math than teachers with more years of teaching experience. Finally, teachers at the upper elementary and high school levels performed better on the assessments than teachers at the lower elementary level. The results of this study provide the Palau Research Alliance and Ministry of Education with information that may help establish appropriate passing scores on the Praxis I PPST reading, writing, and math subtests; may be used to create a multiyear plan of sustained improvement in the teacher workforce; may alert Palau leadership to the difficulties inherent in using English-based tests to assess the performance of those who do not have a strong command of the English language; and may be used to guide preservice curricular requirements and indicate the supporting professional development needs of Palau teachers at various grade levels.
9/28/2016
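The descriptive cross-tabulations described above could be sketched roughly as follows, summarizing percent-correct scores by one teacher characteristic. The records and the single characteristic shown are hypothetical assumptions, not the Palau data.

```python
# Sketch of tabulating test performance by a teacher characteristic.
# The records and grouping variable are hypothetical assumptions.
import pandas as pd

teachers = pd.DataFrame({
    "grade_level": ["lower elem", "lower elem", "upper elem",
                    "upper elem", "high school", "high school"],
    "reading_pct_correct": [48, 52, 61, 66, 70, 73],
})

# Mean percent correct in reading by grade level taught.
print(teachers.groupby("grade_level")["reading_pct_correct"].mean())
```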
REL 2016134 Stated Briefly: Can scores on an interim high school reading assessment accurately predict low performance on college readiness exams?
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. The purpose of this study was to examine the extent to which performance on Florida's interim reading assessment could be used to identify students who may not perform well on the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT) and ACT Plan. Data included the 2013/14 PSAT/NMSQT or ACT Plan results for students in grade 10 from two districts, as well as their grade 9 results on the Florida Assessments for Instruction in Reading—Florida Standards (FAIR-FS). PSAT/NMSQT Critical Reading performance is best predicted in the study sample by a student's reading comprehension skills, while PSAT/NMSQT Mathematics and Writing performance is best predicted by a student's syntactic knowledge. Syntactic knowledge is the most important predictor of ACT Plan English, Reading, and Science in the study sample, whereas reading comprehension skills were found to best predict ACT Plan Mathematics results. Sensitivity rates ranged from 81 percent to 89 percent correct across all of the models. These results provide preliminary evidence that FAIR-FS scores could be used to create an early warning system for performance on both the PSAT/NMSQT and ACT Plan.
5/3/2016
REL 2016124 Can scores on an interim high school reading assessment accurately predict low performance on college readiness exams?
The purpose of this study was to examine the relationship between measures of reading comprehension, decoding, and language with college-ready performance. This research was motivated by leaders in two Florida school districts interested in the extent to which performance on Florida’s interim reading assessment could be used to identify students who may not perform well on the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT) and ACT Plan. One of the districts primarily administers the PSAT/NMSQT and the other primarily administers the ACT Plan. Data included the 2013/14 PSAT/NMSQT or ACT Plan results for students in grade 10 from these districts, as well as their grade 9 results on the Florida Assessments for Instruction in Reading – Florida Standards (FAIR-FS). Classification and regression tree (CART) analyses formed the framework for an early warning system of risk for each PSAT/NMSQT and ACT Plan subject-area assessment. PSAT/NMSQT Critical Reading performance is best predicted in the study sample by a student’s reading comprehension skills, while PSAT/NMSQT Mathematics and Writing performance is best predicted by a student’s syntactic knowledge. Syntactic knowledge is the most important predictor of ACT Plan English, Reading, and Science in the study sample, whereas reading comprehension skills were found to best predict ACT Plan Mathematics results. Sensitivity rates (the percentage of students correctly identified as at risk) ranged from 81 percent to 89 percent correct across all of the CART models. These results provide preliminary evidence that FAIR-FS scores could be used to create an early warning system for performance on both the PSAT/NMSQT and ACT Plan. The potential success of using FAIR-FS scores as an early warning system could enable districts to identify at-risk students without adding additional testing burden, time away from instruction, or additional cost. The analyses should be replicated statewide to verify the stability of the models and the generalizability of the results to the larger Florida student population.
4/20/2016
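A minimal sketch of a CART-style classifier used as an early warning flag, with sensitivity computed as the share of truly at-risk students correctly flagged, is shown below. The feature names, scores, and risk labels are hypothetical assumptions, not the FAIR-FS models estimated in the report, and the sensitivity is computed in-sample for brevity.

```python
# Sketch of a CART-based early warning flag and its sensitivity.
# Feature values and risk labels are hypothetical assumptions.
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

# Grade 9 predictor scores: [reading_comprehension, syntactic_knowledge]
X = [[45, 50], [62, 70], [38, 41], [75, 80], [50, 44], [68, 72], [33, 30], [58, 65]]
# 1 = later scored below the college-readiness benchmark, 0 = met it.
y = [1, 0, 1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
flags = tree.predict(X)  # in-sample predictions, for brevity

# Sensitivity: proportion of at-risk students the model correctly identifies.
sensitivity = recall_score(y, flags, pos_label=1)
print(f"Sensitivity: {sensitivity:.2f}")
```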
REL 2014009 Participation and Pass Rates for College Preparatory Transition Courses in Kentucky
The purpose of this study was to examine Kentucky high school students' participation and pass rates in college preparatory transition courses, which are voluntary remedial courses in math and reading offered to grade 12 students in the state. Three groups of students were compared using the population of grade 12 students in Kentucky public schools in school year 2011/12 (n=33,928): students meeting state benchmarks, students approaching state benchmarks (1 to 3 points below), and students performing below state benchmarks (4 or more points below). The courses targeted students who were approaching state benchmarks, but all students were eligible to take them. Results were examined for member school districts of the Southeast/South-Central Educational Cooperative (a research partner with Regional Educational Laboratory Appalachia), a matched comparison group of districts with similar characteristics identified through propensity score matching, and the state as a whole. The study found that most students, even those targeted for the intervention, did not participate in the college preparatory transition courses. Among students who were approaching state benchmarks in math, fewer than one-third (28.1 percent) took transition courses, and among students approaching state benchmarks in reading, fewer than one-tenth (8.0 percent) enrolled in transition courses. Despite the intention of the policy, students from all three groups (meeting, approaching, and below state benchmarks) enrolled in the courses. Statewide pass rates for students who did enroll in transition courses in math or reading were more than 90 percent. Examining participation and pass rates can help schools and districts understand how college preparatory transition courses are used and may be adapted to meet the needs of students targeted for intervention.
3/4/2014
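The comparison districts described above were identified through propensity score matching; a minimal sketch of that generic technique (logistic regression propensity scores followed by nearest-neighbor matching) is shown below. The district characteristics, covariates, and matching rule are hypothetical assumptions, not the study's actual matching procedure.

```python
# Sketch of propensity score matching: model the probability of being a
# cooperative member district from district characteristics, then pair each
# member district with the non-member whose propensity score is closest.
# The data and covariates are hypothetical assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [enrollment (thousands), share of students with low incomes]
X = np.array([[3.2, 0.62], [1.8, 0.71], [5.0, 0.55], [2.4, 0.68],
              [3.0, 0.60], [2.0, 0.70], [4.6, 0.58], [2.6, 0.66]])
member = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = cooperative member district

scores = LogisticRegression().fit(X, member).predict_proba(X)[:, 1]

members = np.where(member == 1)[0]
pool = np.where(member == 0)[0]
for i in members:
    match = pool[np.argmin(np.abs(scores[pool] - scores[i]))]
    print(f"district {i} matched with district {match}")
```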
REL 2014004 Comparing Estimates of Teacher Value-Added Based on Criterion- and Norm-Referenced Tests
The study used reading and math achievement data for grades 4 and 5 in 46 Indiana schools to compare estimates of teacher value added from two student assessments: the criterion-referenced Indiana Statewide Testing for Education Progress Plus (ISTEP+) and a norm-referenced test (NRT) that is widely used in Indiana and other Midwest Region states. The study found a moderate relationship between value-added estimates for a single year based on the ISTEP+ and NRT, with average yearly correlation coefficients of 0.44 to 0.65. Overall, findings indicate variability between the estimates of teacher value added from two different tests administered to the same students in the same years. Although specific sources of the variability in estimates of teacher value added across assessments could not be isolated in this research design, the research literature points to measurement error as an important contributor. The findings suggest that incorporating confidence intervals for value-added estimates reduces the likelihood that teachers' performance will be misclassified based on measurement error.
1/23/2014
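A minimal sketch of the two checks implied above is shown below: correlating teachers' value-added estimates from the two tests, and requiring a 95 percent confidence interval that excludes zero before classifying a teacher as above or below average. The estimates and standard errors are hypothetical assumptions, not the Indiana results.

```python
# Sketch: correlation between value-added estimates from two tests, and a
# confidence-interval check before classifying teachers.
# The estimates and standard errors are hypothetical assumptions.
import numpy as np

va_istep = np.array([0.15, -0.20, 0.05, 0.30, -0.10, 0.00, 0.22, -0.05])
va_nrt   = np.array([0.10, -0.05, 0.12, 0.25, -0.18, 0.08, 0.15, 0.02])
se       = np.array([0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08, 0.10])

print(f"Correlation across tests: {np.corrcoef(va_istep, va_nrt)[0, 1]:.2f}")

# Classify a teacher only when the 95% CI around the ISTEP+ estimate excludes zero.
lower, upper = va_istep - 1.96 * se, va_istep + 1.96 * se
classified = (lower > 0) | (upper < 0)
print(f"Teachers classified as clearly above/below average: {classified.sum()} of {len(se)}")
```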
REL 2014006 Testing the Importance of Individual Growth Curves in Predicting Performance on a High-Stakes Reading Comprehension Test in Florida
REL Southeast at Florida State University evaluated student growth in reading comprehension over the school year and compared that growth with performance on the end-of-year Florida Comprehensive Assessment Test (FCAT). Using archival data for 2009/10, the study analyzes a stratified random sample of 800,000 Florida students in grades 3-10: their fall, winter, and spring reading comprehension scores on the Florida Assessments for Instruction in Reading (FAIR) and their reading comprehension scores on the FCAT. The study examines the relationship between descriptive and inferential measures of growth for students in grades 3-10 and considers how well such measures statistically explain differences in end-of-year reading comprehension after controlling for student performance on a mid-year status assessment. The extent to which the four growth estimates explained student differences in reading comprehension performance (measured by the coefficient of determination, R²) differed by the status variable used (performance on the fall, winter, or spring FAIR reading comprehension screen).
1/22/2014
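As a rough illustration of the comparison described above, the following sketch contrasts the variance in an end-of-year score explained (R²) by a mid-year status score alone with a model that also includes a within-year growth slope. The simulated scores and the simple fall-to-spring slope are hypothetical assumptions, not the study's growth-curve models.

```python
# Sketch: compare R² from a mid-year status-only model with a model that adds a
# within-year growth slope. Scores and the slope definition are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
fall = rng.normal(500, 40, n)
winter = fall + rng.normal(10, 10, n)
spring = winter + rng.normal(10, 10, n)
slope = (spring - fall) / 2                 # crude per-window growth estimate
outcome = 0.8 * spring + rng.normal(0, 25, n)  # hypothetical end-of-year score

status = winter.reshape(-1, 1)
both = np.column_stack([winter, slope])

r2_status = LinearRegression().fit(status, outcome).score(status, outcome)
r2_both = LinearRegression().fit(both, outcome).score(both, outcome)
print(f"R² with winter status only:  {r2_status:.3f}")
print(f"R² with status plus growth:  {r2_both:.3f}")
```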
REL 2013008 Evaluating the screening accuracy of the Florida Assessments for Instruction in Reading (FAIR)
This report analyzed student performance on the Florida Assessments for Instruction in Reading (FAIR) reading comprehension screen across grades 4-10 and the Florida Comprehensive Assessment Test (FCAT) 2.0 to determine how well the FAIR and the 2011 FCAT 2.0 scores predicted 2012 FCAT 2.0 performance. The first key finding was that the FAIR reading comprehension screen was more accurate than the 2011 FCAT 2.0 scores in correctly identifying students as not at risk of failing to meet grade-level standards on the 2012 FCAT 2.0. The second key finding was that using both the FAIR screen and the 2011 FCAT 2.0 lowered the underidentification rate of at-risk students by 12–20 percentage points compared with using the 2011 FCAT 2.0 score alone.
9/10/2013
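A minimal sketch of the underidentification (false negative) rate implied above is shown below: the share of students who failed to meet standards but were never flagged as at risk by the screen. The flag and outcome vectors are hypothetical assumptions.

```python
# Sketch of an underidentification (false negative) rate for a screening measure.
# The flag and outcome vectors are hypothetical assumptions.
import numpy as np

flagged_at_risk = np.array([1, 0, 1, 0, 0, 1, 0, 0, 1, 0], dtype=bool)
failed_outcome  = np.array([1, 1, 1, 0, 0, 1, 0, 1, 1, 0], dtype=bool)

# Students who failed the outcome test but were not flagged by the screen.
missed = (~flagged_at_risk) & failed_outcome
underidentification_rate = missed.sum() / failed_outcome.sum()
print(f"Underidentification rate: {underidentification_rate:.0%}")
```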
NCES 2013454 Testing Integrity: Issues and Recommendations for Best Practice
This report is part of a broader effort by the Department of Education to identify and disseminate practices and policies to assist efforts to improve the validity and reliability of assessment results. The report draws upon the opinions of experts and practitioners who responded to the Department’s Request for Information (RFI), the comments and discussions from NCES’ Testing Integrity Symposium, and, where available, policy manuals or professional standards published by State Education Agencies (SEAs) and professional associations.

The report focuses on four areas related to testing integrity: (1) the prevention of irregularities in academic testing; (2) the detection and analysis of testing irregularities; (3) the response to an investigation of alleged and/or actual misconduct; and (4) testing integrity practices for technology-based assessments.

2/12/2013
WWC SSRS10010 WWC Review of the Report "Learning the Control of Variables Strategy in Higher and Lower Achieving Classrooms: Contributions of Explicit Instruction and Experimentation"
In Learning the Control of Variables Strategy in Higher and Lower Achieving Classrooms: Contributions of Explicit Instruction and Experimentation, researchers examined three separate methods for teaching the control of variables strategy (CVS). The WWC determined that this study is a well-implemented randomized controlled trial, and the research described in this report meets WWC evidence standards without reservations.
10/2/2012