
Search Results: (1-15 of 21 records)

 Pub Number  Title  Date
NCES 2019113 U.S. PIRLS and ePIRLS 2016 Technical Report and User's Guide
The U.S. PIRLS and ePIRLS 2016 Technical Report and User's Guide provides an overview of the design and implementation in the United States of the Progress in International Reading Literacy Study (PIRLS) and ePIRLS 2016, along with information designed to facilitate access to the U.S. PIRLS and ePIRLS 2016 data.
8/27/2019
NCES 2018020 U.S. TIMSS 2015 and TIMSS Advanced 1995 & 2015 Technical Report and User's Guide
The U.S. TIMSS 2015 and TIMSS Advanced 1995 & 2015 Technical Report and User's Guide provides an overview of the design and implementation in the United States of the Trends in International Mathematics and Science Study (TIMSS) 2015 and TIMSS Advanced 1995 & 2015, along with information designed to facilitate access to the U.S. TIMSS 2015 and TIMSS Advanced 1995 & 2015 data.
11/1/2018
NCES 2017095 Technical Report and User Guide for the 2015 Program for International Student Assessment (PISA)
This technical report and user guide is designed to provide researchers with an overview of the design and implementation of PISA 2015 in the United States, as well as information on how to access the PISA 2015 data. The report includes information about sampling requirements and sampling in the United States; participation rates at the school and student levels; how schools and students were recruited; instrument development; field operations used for collecting data; and details concerning various aspects of data management, including data processing, scaling, and weighting. In addition, the report describes the data available from both international and U.S. sources, discusses special issues in analyzing the PISA 2015 data, and explains how to merge the data files.
12/19/2017
NCEE 20184002 Asymdystopia: The threat of small biases in evaluations of education interventions that need to be powered to detect small impacts
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may create a new challenge for researchers: the need to guard against smaller inaccuracies (or "biases"). The purpose of this report is twofold. First, the report examines the potential for small biases to increase the risk of making false inferences as studies are powered to detect smaller impacts, a phenomenon the report calls asymdystopia. The report examines this potential for both randomized controlled trials (RCTs) and studies using regression discontinuity designs (RDDs). Second, the report recommends strategies researchers can use to avoid or mitigate these biases. For RCTs, the report recommends that evaluators either substantially limit attrition rates or offer a strong justification for why attrition is unlikely to be related to study outcomes. For RDDs, new statistical methods can protect against bias from incorrect regression models, but these methods often require larger sample sizes in order to detect small effects.
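The report's core argument can be illustrated with a back-of-the-envelope power calculation: as the minimum detectable effect (MDE) shrinks, required samples grow and a fixed small bias becomes a large fraction of the impact being measured. This is only a sketch; the 0.02 SD bias and the power parameters below are illustrative assumptions, not figures from the report.

```python
# Illustrative sketch (not from the report): how a fixed small bias
# looms larger as studies are powered to detect smaller impacts.
# The 0.02-SD bias and the alpha/power values are assumptions.

from statistics import NormalDist

def min_sample_per_arm(mde, alpha=0.05, power=0.80):
    """Approximate per-arm n for a two-arm RCT detecting an impact of
    `mde` standard deviations (two-sided test, equal variances)."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return 2 * (z / mde) ** 2

bias = 0.02  # a "small" bias, in standard deviation units (assumed)
for mde in (0.20, 0.10, 0.05):
    n = min_sample_per_arm(mde)
    print(f"MDE={mde:.2f} SD: n per arm ~ {n:,.0f}; "
          f"bias is {100 * bias / mde:.0f}% of the target impact")
```

Halving the MDE roughly quadruples the required sample, while the same 0.02 SD bias doubles as a share of the impact, which is the asymmetry the report's coinage "asymdystopia" points at.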
10/3/2017
REL 2017265 What does it mean when a study finds no effects?
This short brief for education decisionmakers discusses three main factors that may contribute to a finding of no effects: failure of theory, failure of implementation, and failure of research design. It provides readers with questions to ask themselves to better understand 'no effects' findings, and describes other contextual factors to consider when deciding what to do next.
10/27/2016
NCSER 2015002 The Role of Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research
The field of education is increasingly committed to adopting evidence-based practices. Although randomized experimental designs provide strong evidence of the causal effects of interventions, they are not always feasible. For example, depending upon the research question, it may be difficult for researchers to find the number of children necessary for such research designs (e.g., to answer questions about impacts for children with low-incidence disabilities). A type of experimental design that is well suited for such low-incidence populations is the single-case design (SCD). These designs involve observations of a single case (e.g., a child or a classroom) over time in the absence and presence of an experimenter-controlled treatment manipulation to determine whether the outcome is systematically related to the treatment.

Research using SCD is often omitted from reviews of whether evidence-based practices work because there has not been a common metric to gauge effects, as there is in group design research. To address this issue, the National Center for Education Research (NCER) and National Center for Special Education Research (NCSER) commissioned a paper by leading experts in methodology and SCD. Authors William Shadish, Larry Hedges, Robert Horner, and Samuel Odom contend that the best way to ensure that SCD research is accessible and informs policy decisions is to use good standardized effect size measures—indices that put results on a scale with the same meaning across studies—for statistical analyses. The paper includes the authors' recommendations for how SCD researchers can calculate and report standardized between-case effect sizes, how various audiences (including policymakers) can use these effect sizes to interpret findings, and how the effect sizes can be used across studies to summarize the evidence base for education practices.
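The between-case effect size the authors develop for single-case designs is considerably more involved than a simple group-design statistic, but the underlying idea of a standardized effect size (a mean difference expressed in pooled standard deviation units) can be sketched with Cohen's d. The scores below are hypothetical and for illustration only.

```python
# A minimal illustration of a standardized effect size (Cohen's d with
# a pooled SD). This is NOT the between-case SCD estimator from the
# commissioned paper; it only shows the idea of putting results on a
# scale with the same meaning across studies.

from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference using the pooled sample SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical outcome scores (illustrative only)
print(cohens_d([12, 15, 14, 16, 13], [10, 11, 9, 12, 10]))
```

Because the result is unitless, effects measured on different instruments can be compared or pooled, which is precisely the property the authors argue SCD research has lacked.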
1/7/2016
REL 2014052 Forming a Team to Ensure High-Quality Measurement in Education Studies
This brief provides tips for forming a team of staff and consultants with the needed expertise to make key measurement decisions that will ensure high-quality data for answering the study’s research questions. The brief outlines the main responsibilities of measurement team members. It also describes typical measurement tasks and discusses how the measurement team members can work together to complete the measurement tasks successfully.
9/16/2014
REL 2014064 Reporting What Readers Need to Know about Education Research Measures: A Guide
This brief provides five checklists to help researchers provide complete information describing (1) their study's measures; (2) data collection training and quality; (3) the study's reference population, study sample, and measurement timing; (4) evidence of the reliability and construct validity of the measures; and (5) missing data and descriptive statistics. The brief includes an example of parts of a report's methods and results section illustrating how the checklists can be used to check the completeness of reporting.
9/9/2014
REL 2014014 Developing a Coherent Research Agenda: Lessons from the REL Northeast & Islands Research Agenda Workshops
This report describes the approach that REL Northeast and Islands (REL-NEI) used to guide its eight research alliances toward collaboratively identifying a shared research agenda. A key feature of their approach was a two-workshop series, during which alliance members created a set of research questions on a shared topic of education policy and/or practice. This report explains how REL-NEI conceptualized and organized the workshops, planned the logistics, overcame geographic distance among alliance members, developed and used materials (including modifications for different audiences and for a virtual platform), and created a formal research agenda after the workshops. The report includes links to access the materials used for the workshops, including facilitator and participant guides and slide decks.
7/10/2014
NCES 2013046 U.S. TIMSS and PIRLS 2011 Technical Report and User's Guide
The U.S. TIMSS and PIRLS 2011 Technical Report and User's Guide provides an overview of the design and implementation in the United States of the Trends in International Mathematics and Science Study (TIMSS) 2011 and the Progress in International Reading Literacy Study (PIRLS) 2011, along with information designed to facilitate access to the U.S. TIMSS and PIRLS 2011 data.
11/26/2013
NCES 2013190 The Adult Education Training and Education Survey (ATES) Pilot Study
This report describes the process and findings of a national pilot test of survey items that were developed to assess the prevalence and key characteristics of occupational certifications and licenses and subbaccalaureate educational certificates. The pilot test was conducted as a computer-assisted telephone interview (CATI) survey, administered from September 2010 to January 2011.
4/9/2013
NCEE 20124025 Replicating Experimental Impact Estimates Using a Regression Discontinuity Approach
This NCEE Technical Methods Paper compares the estimated impacts of an educational intervention using experimental and regression discontinuity (RD) study designs. The analysis used data from two large-scale randomized controlled trials—the Education Technology Evaluation and the Teach for America Study—to provide evidence on the performance of RD estimators in two specific contexts. More generally, the report presents and implements a method for examining the performance of RD estimators that could be used in other contexts. The study found that the differences between the RD and experimental impact estimates were meaningful in size, though not statistically significant. The study also found that manipulation of the assignment variable in RD designs can substantially influence RD impact estimates, particularly if manipulation is related to the outcome and occurs close to the assignment variable's cutoff value.
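The basic comparison the paper performs can be sketched in simulation: generate outcomes from one data-generating process, estimate the impact once under random assignment and once under a sharp RD rule, and compare the two estimates to the known true impact. Everything below (true impact, cutoff, sample size, noise level) is an assumption for illustration, not the paper's data or method.

```python
# Hypothetical simulation (not the report's data): comparing an
# experimental impact estimate with a sharp regression discontinuity
# (RD) estimate under the same data-generating process.

import numpy as np

rng = np.random.default_rng(0)
n, true_impact, cutoff = 5000, 0.20, 0.0

x = rng.normal(size=n)                     # assignment (running) variable
noise = rng.normal(scale=0.5, size=n)

# Experimental benchmark: random assignment, difference in means
d_exp = rng.integers(0, 2, size=n)
y_exp = 0.4 * x + true_impact * d_exp + noise
exp_est = y_exp[d_exp == 1].mean() - y_exp[d_exp == 0].mean()

# Sharp RD: treatment assigned by the cutoff on x; estimate the jump
# at the cutoff with a linear model allowing separate slopes per side
d_rd = (x >= cutoff).astype(float)
y_rd = 0.4 * x + true_impact * d_rd + noise
xc = x - cutoff
X = np.column_stack([np.ones(n), d_rd, xc, d_rd * xc])
beta, *_ = np.linalg.lstsq(X, y_rd, rcond=None)
rd_est = beta[1]                           # coefficient on the treatment dummy

print(f"true {true_impact:.2f}, experimental {exp_est:.3f}, RD {rd_est:.3f}")
```

With a correctly specified regression both estimators center on the true impact; the RD estimate is noisier because it identifies the effect only at the cutoff, which is one reason RDDs need larger samples to detect small effects.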
4/25/2012
NCES 2011463 The NAEP Primer
The purpose of the NAEP Primer is to guide educational researchers through the intricacies of the NAEP database and make its technologies more user-friendly. The NAEP Primer makes use of the publicly accessible NAEP mini-sample that is included on the CD. The mini-sample contains real data from the 2005 mathematics assessment that have been approved for public use. Only public schools are included in this subsample, which contains selected variables for about 10 percent of the schools and students in this assessment. All students who participated in NAEP in the selected public schools are included. This subsample is not sufficient to make state comparisons. In addition, to ensure confidentiality, no state, school, or student identifiers are included.

The NAEP Primer document covers the following topics:
  • Introduction and Overview: includes a technical history of NAEP, an overview of the NAEP Primer mini-sample and its design and implications for analysis, and a listing of relevant resources for further information.
  • The NAEP Database: describes the contents of the NAEP database, the NAEP Primer mini-sample and the types of variables it includes, the NAEP database products, an overview of the NAEP 2005 Mathematics, Reading, and Science Data Companion, and how to obtain a Restricted-Use Data License.
  • NAEP Data Tools: provides information on the resources available to prepare the data for analysis and on how to find and use the various NAEP data tools.
  • Analyzing NAEP Data: includes recommendations for running statistical analyses with SPSS, SAS, Stata, and WesVar, including how to address BIB spiraling, plausible values, and jackknife variance estimation. Worked examples and simple analyses use the NAEP Primer mini-sample.
  • Marginal Estimation of Score Distributions: discusses the principles of marginal estimation as used in NAEP and the role of plausible values.
  • Direct Estimation Using AM Software: presents an approach to direct estimation using the AM software including examples of analyses.
  • Fitting of Hierarchical Linear Models: presents information and examples on the use of the HLM program to do hierarchical linear modeling with NAEP data.
  • An appendix includes excerpted sections from the 2005 Data Companion to give the reader additional insight on topics introduced in previous sections of the Primer.
Please note that national results computed from the NAEP Primer mini-sample will be close to—but not identical to—published results in NAEP reports. National estimates should not be made with these data, and these data cannot be published as official estimates of NAEP.

Also note that the NAEP Primer consists of two publications: NCES 2011463 and NCES 2011464
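The Primer's chapters on marginal estimation and plausible values rest on a standard combination rule: compute the statistic once per plausible value, average the results, and inflate the variance for the measurement uncertainty across plausible values. The sketch below uses illustrative numbers, not NAEP data, and simply assumes a sampling variance that in practice would come from NAEP's jackknife replicate weights.

```python
# A minimal sketch (illustrative values, not NAEP data) of combining
# per-plausible-value estimates: point estimate = mean of the M
# estimates; total variance = sampling variance + (1 + 1/M) * the
# between-plausible-value variance. The sampling variance is assumed
# here; in practice it comes from jackknife replicate weights.

from statistics import mean, variance

def combine_plausible_values(pv_estimates, sampling_variance):
    """Combine M per-plausible-value estimates into one estimate and a
    standard error reflecting both sampling and measurement error."""
    m = len(pv_estimates)
    point = mean(pv_estimates)
    between = variance(pv_estimates)      # variance across PV estimates
    total_var = sampling_variance + (1 + 1 / m) * between
    return point, total_var ** 0.5

# Five hypothetical mean-score estimates, one per plausible value
est, se = combine_plausible_values([278.1, 277.6, 278.4, 277.9, 278.0], 1.21)
print(f"combined estimate {est:.1f}, standard error {se:.2f}")
```

Analyzing only one plausible value, or averaging them before analysis, understates the standard error, which is why the Primer devotes separate chapters to marginal and direct estimation.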
8/4/2011
NCES 2011464 NAEP Primer Mini-Sample
The purpose of the NAEP Primer is to guide educational researchers through the intricacies of the NAEP database and make its technologies more user-friendly. The NAEP Primer makes use of the publicly accessible NAEP mini-sample that is included on the CD. The mini-sample contains real data from the 2005 mathematics assessment that have been approved for public use. Only public schools are included in this subsample, which contains selected variables for about 10 percent of the schools and students in this assessment. All students who participated in NAEP in the selected public schools are included. This subsample is not sufficient to make state comparisons. In addition, to ensure confidentiality, no state, school, or student identifiers are included.

The NAEP Primer document covers the following topics:
  • Introduction and Overview: includes a technical history of NAEP, an overview of the NAEP Primer mini-sample and its design and implications for analysis, and a listing of relevant resources for further information.
  • The NAEP Database: describes the contents of the NAEP database, the NAEP Primer mini-sample and the types of variables it includes, the NAEP database products, an overview of the NAEP 2005 Mathematics, Reading, and Science Data Companion, and how to obtain a Restricted-Use Data License.
  • NAEP Data Tools: provides information on the resources available to prepare the data for analysis and on how to find and use the various NAEP data tools.
  • Analyzing NAEP Data: includes recommendations for running statistical analyses with SPSS, SAS, Stata, and WesVar, including how to address BIB spiraling, plausible values, and jackknife variance estimation. Worked examples and simple analyses use the NAEP Primer mini-sample.
  • Marginal Estimation of Score Distributions: discusses the principles of marginal estimation as used in NAEP and the role of plausible values.
  • Direct Estimation Using AM Software: presents an approach to direct estimation using the AM software including examples of analyses.
  • Fitting of Hierarchical Linear Models: presents information and examples on the use of the HLM program to do hierarchical linear modeling with NAEP data.
  • An appendix includes excerpted sections from the 2005 Data Companion to give the reader additional insight on topics introduced in previous sections of the Primer.
Please note that national results computed from the NAEP Primer mini-sample will be close to—but not identical to—published results in NAEP reports. National estimates should not be made with these data, and these data cannot be published as official estimates of NAEP.

Also note that the NAEP Primer consists of two publications: NCES 2011463 and NCES 2011464
8/4/2011
NCES 2011049 Third International Mathematics and Science Study 1999 Video Study Technical Report, Volume 2: Science
This second volume of the Third International Mathematics and Science Study (TIMSS) 1999 Video Study Technical Report focuses on every aspect of the planning, implementation, processing, analysis, and reporting of the science components of the TIMSS 1999 Video Study. Chapter 2 provides a full description of the sampling approach implemented in each country. Chapter 3 details how the data were collected, processed, and managed. Chapter 4 describes the questionnaires collected from the teachers in the videotaped lessons, including how they were developed and coded. Chapter 5 provides details about the codes applied to the video data by a team of international coders as well as several specialist groups. Chapter 6 describes procedures for coding the content and the classroom discourse of the video data by specialists. Lastly, chapter 7 provides information on the weights and variance estimates used in the data analyses. There are also numerous appendices to this report, including the questionnaires and manuals used for data collection, transcription, and coding.
7/27/2011