Search Results: (16-20 of 20 records)

 Pub Number  Title  Date
REL 20104014 The Effectiveness of a Program to Accelerate Vocabulary Development in Kindergarten
The study found that the 24-week K-PAVE program had a significant positive impact on students' vocabulary development and academic knowledge, as well as on the vocabulary and comprehension support that teachers provided during book read-alouds and other instructional time.

K-PAVE is designed to build children's vocabulary, comprehension, and oral language skills and to enhance teacher-child relationships. It is one of only a few vocabulary interventions appropriate for kindergarten-age children and the only one that includes teacher training materials. An existing preschool version of K-PAVE had already shown some evidence of positive effects in an impact study.

The K-PAVE study sample included 64 schools, 128 kindergarten classrooms and teachers, and 1,296 kindergarten students (596 in the treatment group and 700 in the control group).
11/22/2010
NCEE 20104012 Compendium of Student, Teacher, and Classroom Measures Used in NCEE Evaluations of Educational Interventions
This NCEE Reference Report is a ready resource to help evaluators and researchers select outcome measures for future studies and to assist policymakers in understanding the measures used in existing IES studies. The two-volume "Compendium of Student, Teacher, and Classroom Measures Used in NCEE Evaluations of Educational Interventions" provides comparative information about the domain, technical quality, and history of use of outcome measures used in IES-funded evaluations between 2005 and 2008. The Compendium is intended to facilitate comparisons of results across studies, thereby expanding understanding of these measures within the educational research community.

The Compendium focuses exclusively on studies that employed randomized controlled trials or regression discontinuity designs and on outcome measures that were (1) available to other researchers and (2) accompanied by information about their psychometric properties. Volume I describes common considerations in selecting measures and the approach used to collect and summarize information on the 94 measures reviewed, while Volume II provides detailed descriptions of these measures, including source information and references.
5/3/2010
NCEE 2009013 Technical Methods Report: Using State Tests in Education Experiments: A Discussion of the Issues
Securing data on students' academic achievement is typically one of the most important and costly aspects of conducting education experiments. As state assessment programs have become practically universal and more uniform in terms of grades and subjects tested, the relative appeal of using state tests as a source of study outcome measures has grown. However, the variation in state assessments--in both content and proficiency standards--complicates decisions about whether a particular state test is suitable for research purposes and poses difficulties when planning to combine results across multiple states or grades. This discussion paper aims to help researchers evaluate and make decisions about whether and how to use state test data in education experiments. It outlines the issues that researchers should consider, including how to evaluate the validity and reliability of state tests relative to study purposes; factors influencing the feasibility of collecting state test data; how to analyze state test scores; and whether to combine results based on different tests. It also highlights best practices to help inform ongoing and future experimental studies. Many of the issues discussed are also relevant for non-experimental studies.
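One practical question the paper takes up is whether and how to combine results based on different state tests. The sketch below illustrates one common approach, standardizing scores to z-scores within each state and grade before pooling; the simulated data, column names, and effect computation are illustrative assumptions, not the paper's procedure.

```python
# A minimal sketch (assumed data, not the report's method): put scores from different
# state tests on a common scale by standardizing within state-by-grade cells, then pool.
import numpy as np
import pandas as pd

# Hypothetical student-level data: state, grade, treatment flag, and a raw state test score.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "state": rng.choice(["A", "B"], size=400),
    "grade": rng.choice([4, 5], size=400),
    "treat": rng.integers(0, 2, size=400),
    "score": rng.normal(500, 50, size=400),
})

# Standardize within state-by-grade cells so scores from different tests share a scale.
df["z"] = df.groupby(["state", "grade"])["score"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=1)
)

# Pooled impact in effect-size units: treatment-control difference in mean z-scores.
impact = df.loc[df["treat"] == 1, "z"].mean() - df.loc[df["treat"] == 0, "z"].mean()
print(f"Pooled impact estimate (effect size): {impact:.3f}")
```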
11/16/2009
NCEE 20094065 Do Typical RCTs of Education Interventions Have Sufficient Statistical Power for Linking Impacts on Teacher Practice and Student Achievement Outcomes?
For RCTs of education interventions, it is often of interest to estimate associations between student and mediating teacher practice outcomes, to examine the extent to which the study's conceptual model is supported by the data, and to identify specific mediators that are most associated with student learning. This paper develops statistical power formulas for such exploratory analyses under clustered school-based RCTs using ordinary least squares (OLS) and instrumental variable (IV) estimators, and uses these formulas to conduct a simulated power analysis. The power analysis finds that for currently available mediators, the OLS approach will yield precise estimates of associations between teacher practice measures and student test score gains only if the sample contains about 150 to 200 study schools. The IV approach, which can adjust for potential omitted variable and simultaneity biases, has very little statistical power for mediator analyses. For typical RCT evaluations, these results may have design implications for the scope of the data collection effort for obtaining costly teacher practice mediators.
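As a rough illustration of the kind of simulated power analysis the paper describes, the sketch below runs a simple Monte Carlo for a school-level OLS mediator analysis: it regresses school-mean student gains on a school-level teacher practice measure and counts how often the association is detected at the 5 percent level. The sample sizes, assumed correlation, and data-generating process are hypothetical and do not reproduce the paper's formulas.

```python
# A minimal sketch, assuming a hypothetical mediator-gain correlation of 0.2 and
# school-level (aggregated) data; not the paper's power formulas.
import numpy as np
from scipy import stats

def simulated_power(n_schools, r=0.2, n_sims=2000, seed=1):
    """Share of simulated trials in which the mediator-gain association is significant."""
    rng = np.random.default_rng(seed)
    detections = 0
    for _ in range(n_sims):
        practice = rng.normal(size=n_schools)  # school-level teacher practice measure
        # School-mean student gains: partly explained by the mediator, plus noise.
        gains = r * practice + np.sqrt(1 - r**2) * rng.normal(size=n_schools)
        _, _, _, p_value, _ = stats.linregress(practice, gains)
        detections += p_value < 0.05
    return detections / n_sims

for n in (60, 100, 150, 200):
    print(f"{n} schools: simulated power ~ {simulated_power(n):.2f}")
```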
10/13/2009
NCEE 20090061 The Estimation of Average Treatment Effects for Clustered RCTs of Education Interventions
Reports in this series are designed for researchers, methodologists, and evaluation specialists and provide guidance on resolving methodological challenges and advancing evaluation methods. This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the finite-population model) or randomly selected from a vaguely defined universe (the super-population model). Appropriate estimators are derived and discussed for each model. Using data from five large-scale clustered RCTs in the education area, the empirical analysis estimates impacts and their standard errors using the considered estimators. For all studies, the estimators yield identical findings concerning statistical significance. However, standard errors sometimes differ, suggesting that policy conclusions from RCTs could be sensitive to the choice of estimator. Thus, a key recommendation is that analysts test the sensitivity of their impact findings using different estimation methods and cluster-level weighting schemes.
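As a simple illustration of a cluster-level impact estimate for a design like those the paper considers, the sketch below aggregates simulated student scores to school means, takes the treatment-control difference, and attaches a standard error based on the variance of school means within each group, which corresponds to a basic super-population-style calculation. The data and variance components are assumptions for illustration; the paper's finite-population estimators and weighting schemes differ in their details.

```python
# A minimal sketch with simulated data; not the paper's derivations or estimators.
import numpy as np

rng = np.random.default_rng(2)
n_schools, students_per_school = 40, 60
treat = np.repeat([1, 0], n_schools // 2)          # half the schools assigned to treatment
school_effect = rng.normal(0.0, 0.2, n_schools)    # between-school variation
scores = (school_effect[:, None] + 0.2 * treat[:, None]
          + rng.normal(0.0, 1.0, (n_schools, students_per_school)))

# Aggregate to school means and take the treatment-control difference.
school_means = scores.mean(axis=1)
t_means, c_means = school_means[treat == 1], school_means[treat == 0]
impact = t_means.mean() - c_means.mean()

# Standard error from the variance of school means within each experimental group.
se = np.sqrt(t_means.var(ddof=1) / len(t_means) + c_means.var(ddof=1) / len(c_means))
print(f"Impact: {impact:.3f}   SE: {se:.3f}")
```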
8/31/2009