
Search Results: (1-2 of 2 records)

 Pub Number  Title  Date
NCEE 20154011  Statistical Theory for the RCT-YES Software: Design-Based Causal Inference for RCTs  6/2/2015
This Second Edition report updates the First Edition, published in June 2015, which presents the statistical theory underlying the RCT-YES software for estimating and reporting impacts of RCTs across the wide range of designs used in social policy research. The preface to the new report summarizes the updates from the previous version. The report discusses a unified, non-parametric, design-based approach to impact estimation using the building blocks of the Neyman-Rubin-Holland causal inference model that underlies experimental designs. This approach differs from the more model-based impact estimation methods typically used in education research. The report covers impact and variance estimation, asymptotic distributions of the estimators, hypothesis testing, the inclusion of baseline covariates to improve precision, the use of weights, subgroup analyses, baseline equivalency analyses, and estimation of the complier average causal effect parameter.
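As a brief illustration of the design-based framework this abstract names (the notation below is generic and is not taken from the report): under the Neyman-Rubin-Holland model, each of N units has fixed potential outcomes Y_i(1) and Y_i(0), only one of which is observed depending on random assignment. The average treatment effect, its difference-in-means estimator, and the standard Neyman variance estimator are

\tau = \frac{1}{N}\sum_{i=1}^{N}\bigl(Y_i(1) - Y_i(0)\bigr), \qquad
\hat{\tau} = \bar{Y}_T - \bar{Y}_C, \qquad
\widehat{\mathrm{Var}}(\hat{\tau}) = \frac{s_T^2}{n_T} + \frac{s_C^2}{n_C},

where \bar{Y}_T, \bar{Y}_C are the treatment- and control-group means and s_T^2, s_C^2 their sample variances. Randomization alone, not an assumed outcome model, justifies the estimator; under the finite-population view the variance estimator is conservative, because the variance of the unit-level effects Y_i(1) - Y_i(0) is not identified from the data.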
NCEE 20090061  The Estimation of Average Treatment Effects for Clustered RCTs of Education Interventions  8/31/2009
Reports in this series provide researchers, methodologists, and evaluation specialists with guidance on resolving or advancing challenges to evaluation methods. This paper examines the estimation of average treatment effects in two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are regarded as fixed for the study population (the finite-population model) or as randomly drawn from a vaguely defined universe (the super-population model). Appropriate estimators are derived and discussed for each model. Using data from five large-scale clustered RCTs in education, the empirical analysis estimates impacts and their standard errors under each of the considered estimators. For all five studies, the estimators yield identical conclusions about statistical significance; however, the standard errors sometimes differ, suggesting that policy conclusions from RCTs could be sensitive to the choice of estimator. A key recommendation is therefore that analysts test the sensitivity of their impact findings using different estimation methods and cluster-level weighting schemes.
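A minimal sketch of the finite- versus super-population distinction described here, again in generic notation rather than the paper's: suppose m_T of m clusters are randomized to treatment and m_C to control, \bar{y}_j is the mean outcome in cluster j, and s_T^2, s_C^2 are the between-cluster sample variances of these means within each arm. A simple cluster-average impact estimator and the two variance formulas are

\hat{\tau} = \frac{1}{m_T}\sum_{j \in T}\bar{y}_j - \frac{1}{m_C}\sum_{j \in C}\bar{y}_j, \qquad
\widehat{\mathrm{Var}}_{\mathrm{SP}}(\hat{\tau}) = \frac{s_T^2}{m_T} + \frac{s_C^2}{m_C}, \qquad
\mathrm{Var}_{\mathrm{FP}}(\hat{\tau}) = \frac{S_T^2}{m_T} + \frac{S_C^2}{m_C} - \frac{S_{\tau}^2}{m}.

The super-population estimator treats the clusters as draws from a larger universe, while the finite-population variance subtracts a term involving S_{\tau}^2, the variance of cluster-level treatment effects, which is not identified from the data; the super-population formula is therefore conservative for the finite-population estimand. This gap is one way standard errors can differ across the two models, consistent with the sensitivity finding summarized above.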