Search Results: (1-8 of 8 records)
|NCEE 20184002||Asymdystopia: The threat of small biases in evaluations of education interventions that need to be powered to detect small impacts
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may create a new challenge for researchers: the need to guard against smaller inaccuracies (or "biases"). The purpose of this report is twofold. First, the report examines the potential for small biases to increase the risk of making false inferences as studies are powered to detect smaller impacts, a phenomenon the report calls asymdystopia. The report examines this potential for both randomized controlled trials (RCTs) and studies using regression discontinuity designs (RDDs). Second, the report recommends strategies researchers can use to avoid or mitigate these biases. For RCTs, the report recommends that evaluators either substantially limit attrition rates or offer a strong justification for why attrition is unlikely to be related to study outcomes. For RDDs, new statistical methods can protect against bias from incorrect regression models, but these methods often require larger sample sizes in order to detect small effects.
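The sample-size pressure described here can be sketched with the standard two-sample power approximation. The sketch below is a simplified illustration, not the report's method: it assumes a two-arm RCT with a continuous outcome standardized to SD = 1, individual-level random assignment, equal-size arms, and no covariates or clustering.

```python
from statistics import NormalDist

def n_per_arm(delta, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-arm RCT with a
    continuous outcome (SD = 1): n = 2 * (z_alpha + z_beta)^2 / delta^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return 2 * (z_alpha + z_beta) ** 2 / delta ** 2

for delta in (0.20, 0.10, 0.05):
    print(f"effect = {delta:.2f} SD -> ~{n_per_arm(delta):,.0f} per arm")
```

Because the required sample grows with the inverse square of the target effect, halving the minimum detectable impact roughly quadruples the sample, which is why inaccuracies that would be negligible in a study powered for 0.20 standard deviations can rival the impacts these larger studies are designed to detect.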
|NCES 2017012||NATES 2013: Nonresponse Bias Analysis Report
The 2013 National Adult Training and Education Survey (NATES) was a pilot study that tested the feasibility of using address-based sampling and a mailed questionnaire to collect data on the education, training, and credentials of U.S. adults. This report presents study findings related to nonresponse bias. Nonresponse adjustments corrected for bias on key outcome measures, but not for many background variables. Auxiliary data were found to be of potential use in correcting this bias.
|REL 2017191||The content, predictive power, and potential bias in five widely used teacher observation instruments
This study was designed to inform decisions about the selection and use of five widely used teacher observation instruments. The purpose was to explore (1) patterns across instruments in the dimensions of instruction that they measure, (2) relationships between teachers' scores in specific dimensions of instruction and their contributions to student achievement growth (value-added), and (3) whether teachers' observation ratings depend on the types of students they are assigned to teach. Researchers analyzed the content of the Classroom Assessment Scoring System (CLASS), Framework for Teaching (FFT), Protocol for Language Arts Teaching Observations (PLATO), Mathematical Quality of Instruction (MQI), and UTeach Observational Protocol (UTOP). The content analysis then informed correlation analyses using data from the Gates Foundation's Measures of Effective Teaching (MET) project. Participants were 5,409 4th-9th grade math and English language arts (ELA) teachers from six school districts. Observation ratings were correlated with teachers' value-added scores and with three classroom composition measures: the proportions of nonwhite, low-income, and low-achieving students in the classroom. Results show that eight of ten dimensions of instruction are captured in all five instruments, but the instruments differ in the number and types of elements they assess within each dimension. Observation ratings in all dimensions with quantitative data were significantly but modestly correlated with teachers' value-added scores, with classroom management showing the strongest and most consistent correlations. Finally, among teachers who were randomly assigned to groups of students, observation ratings for some instruments were associated with the proportion of nonwhite and lower-achieving students in the classroom, more often in ELA classes than in math classes.
Findings reflect conceptual consistency across the five instruments, but also differences in the coverage and the specific practices they assess within a given dimension. They also suggest that observation scores for classroom management more strongly and consistently predict teacher contributions to student achievement growth than scores in other dimensions. Finally, the results indicate that the types of students assigned to a teacher can affect observation ratings, particularly in ELA classrooms. When selecting among instruments, states and districts should consider which provide the best coverage of priority dimensions, how much weight to attach to various observation scores in their evaluation of teacher effectiveness, and how they might target resources toward particular classrooms to reduce the likelihood of bias in ratings.
|NCES 2016062||2012/14 Beginning Postsecondary Students Longitudinal Study (BPS:12/14) Data File Documentation
This publication describes the methodology used in the 2012/14 Beginning Postsecondary Students Longitudinal Study (BPS:12/14). BPS:12/14 is the first follow-up study of students who began postsecondary education in the 2011–12 academic year. These students were first interviewed as part of the 2011–12 National Postsecondary Student Aid Study (NPSAS:12). In particular, this report details the methodology and outcomes of the BPS:12/14 sample design, student interview design, student interview data collection processes, administrative records matching, data file processing, and weighting procedures.
|NCES 2009003||Early Childhood Longitudinal Study, Kindergarten Class of 1998–99 (ECLS-K), Eighth-Grade Methodology Report
This methodology report provides technical information about the development, design, and conduct of the eighth grade data collection of the Early Childhood Longitudinal Study, Kindergarten Class of 1998–99 (ECLS-K). Detailed information on the development of the instruments, sample design, data collection methods, data preparation and editing, response rates, and weighting and variance estimation is included.
|NCES 2009029||An Evaluation of Bias in the 2007 National Household Education Surveys Program: Results From a Special Data Collection Effort
The National Household Education Surveys Program (NHES) is a random digit dialing (RDD) survey program developed by the National Center for Education Statistics (NCES) in the Institute of Education Sciences, U.S. Department of Education. The surveys are designed to help NCES collect data directly from households about important education topics. Like many household studies that rely on landline phone sampling frames, NHES has experienced both declining response rates and increasing undercoverage rates. The study described in this report was designed to examine bias in the NHES:2007 due to nonresponse, as well as bias due to noncoverage of households that only had cell phones and households without any telephones. Results from this study suggest that there is no systematic pattern of bias in key statistics from the NHES:2007, though it might underestimate some indicators such as the percentage of preschoolers who watch two or more hours of TV on a typical weekday and overestimate some indicators such as the percentage of preschoolers with mothers who are not in the labor force.
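The kind of unit nonresponse bias examined here is conventionally decomposed as the nonresponse rate times the difference between respondents and nonrespondents on the statistic of interest. A minimal sketch, using invented numbers rather than NHES:2007 data:

```python
def nonresponse_bias(resp_mean, nonresp_mean, response_rate):
    """Bias of the unadjusted respondent mean relative to the full-sample
    mean: (1 - response rate) * (respondent mean - nonrespondent mean)."""
    return (1 - response_rate) * (resp_mean - nonresp_mean)

# Invented example: 55% of respondents report a behavior, 40% of
# nonrespondents do, and 60% of the sample responds, so the respondent
# estimate overstates the full-sample figure by 6 percentage points.
print(round(nonresponse_bias(0.55, 0.40, 0.60), 4))
```

The decomposition makes clear why bias can persist even at moderate response rates: it vanishes only when respondents and nonrespondents resemble each other on the outcome being estimated.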
|NCES 2009047||National Household Education Surveys Program of 2007: Methodology Report
This report documents the design and collection of the National Household Education Surveys Program (NHES) of 2007. Chapter 1 provides an overview of the collection and the report. Chapter 2 discusses the design of the questionnaires. Chapter 3 presents the sample design. Chapter 4 provides information about the data collection experience. Chapter 5 focuses on unit response rates. Item response rates and imputation are discussed in chapter 6. Chapter 7 contains information about weighting and variance estimation. Chapter 8 provides a summary of bias analyses conducted as part of the study. Chapter 9 provides a comparison of estimates to extant data sources. Chapter 10 summarizes the re-interview study.
|NCES 2007016||Nonresponse Bias in the 2005 National Household Education Surveys Program
This report includes assessments of the potential for both unit and item nonresponse bias in the surveys fielded as part of the 2005 National Household Education Surveys Program. The analysis of unit nonresponse bias showed no evidence of bias in the estimates considered from the Early Childhood Program Participation and After-School Programs and Activities Surveys. For the Adult Education Survey, the only evidence of unit nonresponse bias was in estimates of sex: females were more likely to respond than males. The weighting class adjustment for nonresponse should reduce or correct this bias.
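A weighting class adjustment of the kind mentioned here can be illustrated in a few lines. The sketch below is a toy example with made-up records, not the NHES weighting procedure: classes are formed by sex, and each respondent's base weight is inflated by the inverse of the weighted response rate in that respondent's class.

```python
from collections import defaultdict

# (weighting class, base weight, responded?) -- made-up sample records
sample = [
    ("female", 100.0, True), ("female", 100.0, True), ("female", 100.0, False),
    ("male", 100.0, True), ("male", 100.0, False), ("male", 100.0, False),
]

totals = defaultdict(float)       # sum of base weights per class
resp_totals = defaultdict(float)  # sum of respondent base weights per class
for cls, w, responded in sample:
    totals[cls] += w
    if responded:
        resp_totals[cls] += w

# Inflate each respondent's weight so respondents "stand in" for
# nonrespondents in the same weighting class.
adjusted = [
    (cls, w * totals[cls] / resp_totals[cls])
    for cls, w, responded in sample if responded
]
for cls, w in adjusted:
    print(cls, round(w, 1))
```

Because each class's adjusted respondent weights sum to that class's original total weight, the adjustment corrects for differential response between the classes (here, females responding more than males) but only to the extent that respondents and nonrespondents within a class resemble each other.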