
Search Results: (1-15 of 40 records)

 Pub Number  Title  Date
NPEC 2018023 The History and Origins of Survey Items for the Integrated Postsecondary Education Data System (2016–17 Update)
This report updates the 2011–12 Integrated Postsecondary Education Data System (IPEDS) survey components report—The History and Origins of Survey Items for the Integrated Postsecondary Education Data System—to reflect the 2016–17 data collection. The report was developed to document the sources of current IPEDS data items as background information for interested parties and to provide guidance when NCES, technical review panels, and others are considering potential changes to the IPEDS data collection.
3/6/2018
NCEE 20184002 Asymdystopia: The threat of small biases in evaluations of education interventions that need to be powered to detect small impacts
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may create a new challenge for researchers: the need to guard against smaller inaccuracies (or "biases"). The purpose of this report is twofold. First, the report examines the potential for small biases to increase the risk of making false inferences as studies are powered to detect smaller impacts, a phenomenon the report calls asymdystopia. The report examines this potential for both randomized controlled trials (RCTs) and studies using regression discontinuity designs (RDDs). Second, the report recommends strategies researchers can use to avoid or mitigate these biases. For RCTs, the report recommends that evaluators either substantially limit attrition rates or offer a strong justification for why attrition is unlikely to be related to study outcomes. For RDDs, new statistical methods can protect against bias from incorrect regression models, but these methods often require larger sample sizes to detect small effects.
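The sample-size arithmetic behind this concern can be sketched with the standard normal-approximation power formula for a two-arm trial (this sketch is illustrative and not drawn from the report itself):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per arm to detect a standardized mean
    difference in a two-sided test (normal approximation, equal arms)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for power = 0.80
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Required n grows with the inverse square of the target effect, so an
# effect of 0.05 needs roughly 16 times the sample an effect of 0.20 does.
print(n_per_arm(0.20))  # Cohen's "small" effect
print(n_per_arm(0.05))  # the smaller effects modern evaluations target
```

At these sample sizes, a bias of even a few hundredths of a standard deviation is large relative to the effect being tested, which is the asymmetry the report's coinage points at.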
10/3/2017
NCEE 20174026 Comparing Impact Findings from Design-Based and Model-Based Methods: An Empirical Investigation
This report compares empirical results from different approaches to analyzing data from randomized controlled trials (RCTs). It focuses on how impact estimates compare between recently developed design-based methods and traditional model-based methods. Design-based methods use the potential outcomes framework and known features of study designs to connect statistical methods to the building blocks of causal inference. They differ from model-based methods that have commonly been used in education research, including hierarchical linear model (HLM) methods and robust cluster standard error (RCSE) methods for clustered designs. This study re-analyzes nine past RCTs in the education area using both design- and model-based methods. The study finds that model-based and design-based methods yield very similar impact estimates and levels of statistical significance, especially when the underlying analytic assumptions (e.g., weights used to aggregate clusters and blocks) are aligned.
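The weighting point can be illustrated with a small simulation (hypothetical data; the two estimators below are simplified stand-ins for the design-based and model-based procedures the report compares):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cluster-randomized trial: 40 classrooms of 20 students,
# half assigned to treatment. True impact is 0.20 standard deviations.
n_clusters, m, true_impact = 40, 20, 0.20
treat = np.repeat([1, 0], n_clusters // 2)
cluster_effect = rng.normal(0, 0.3, n_clusters)
y = np.array([
    rng.normal(true_impact * t + c, 1.0, m)
    for t, c in zip(treat, cluster_effect)
])  # shape (40, 20): one row of student outcomes per classroom

# Design-based flavor: difference in equally weighted cluster means.
cluster_means = y.mean(axis=1)
design_est = cluster_means[treat == 1].mean() - cluster_means[treat == 0].mean()

# Model-based flavor: difference in pooled individual means, which is
# what an OLS regression of y on the treatment indicator estimates.
model_est = y[treat == 1].mean() - y[treat == 0].mean()

# With equal cluster sizes the two weighting schemes coincide exactly,
# which is the "aligned assumptions" case the report highlights.
print(design_est, model_est)
```

With unequal cluster sizes the two weightings diverge, which is one source of the (generally small) differences the report documents.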
7/25/2017
NCES 2017092 A Quarter Century of Changes in the Elementary and Secondary Teaching Force: From 1987 to 2012

This report looks at changes in several key characteristics of the teaching force between the 1987-88 and 2011-12 school years, including the number of teachers, the level of teaching experience, and the racial/ethnic diversity of the teaching force. The report focuses on how these demographic changes varied across different types of teachers and schools.

Among the findings about changes in the teacher workforce over this 25-year period:

  • The teacher workforce grew by 46 percent between 1987-88 and 2011-12. Above-average growth was seen among teachers in the fields of English as a Second Language, English language arts, mathematics, foreign language, natural science, and special education. Below-average growth was seen in the fields of general elementary education, vocational-technical education, and art/music;

  • The growth in the teaching force varied across different types of schools. The teaching force in high-poverty public schools grew by nearly 325 percent, while the number of teachers in low-poverty schools declined by almost 20 percent. The number of teachers in private schools grew at a higher rate than in public schools; however, private school teachers still account for only about 12 percent of the teacher workforce; and

  • The teaching force became more diverse. While minority teachers remain underrepresented in the teaching force, both the number and proportion of minority teachers increased. Between 1987-88 and 2011-12, the number of minority teachers grew by 104 percent, compared to 38 percent growth in the number of White teachers. The combined percentage of teachers from all minority groups increased from 12.4 percent in 1987-88 to 17.3 percent in 2011-12.

This report uses data from the Schools and Staffing Survey (SASS), a large-scale sample survey of elementary and secondary teachers and schools in the United States. SASS has been conducted seven times—in school years 1987-88, 1990-91, 1993-94, 1999-2000, 2003-04, 2007-08, and 2011-12.

4/11/2017
NCER 20162003 Synthesis of IES-Funded Research on Mathematics: 2002–2013
This synthesis reviews published papers from IES-supported research projects awarded between 2002 and 2013. The authors identified 28 specific contributions that IES-funded research made to support mathematics learning and teaching from kindergarten through secondary school. The publication organizes the contributions by topic and grade level; each section describes the contributions IES-funded researchers are making in these areas and discusses the projects behind them.
7/26/2016
NCER 20142000 Partially Nested Randomized Controlled Trials in Education Research: A Guide to Design and Analysis

In some tests of educational interventions, individual students are randomized directly to the treatment or control group, and both intervention and control protocols are administered in an individual setting. Such an experiment is an Individual-Level Randomized Controlled Trial (I-RCT). In other tests, clusters of students (e.g., classrooms) are randomized. This sort of experiment is called a Cluster Randomized Controlled Trial (C-RCT). However, in some designs, students in the treatment group are clustered like those in a C-RCT, but students in the control group are unclustered, like students in an I-RCT. This design is called a Partially Nested Randomized Controlled Trial (PN-RCT). It is partially nested because students in the treatment group are nested in some higher level unit, such as a tutoring group or class, but students in the control group are not nested as part of the experimental design.

This paper, commissioned by the National Center for Education Research, provides readers with an introduction to PN-RCTs and ways to design and analyze the results from them. This paper was written primarily for applied education researchers with introductory knowledge of quantitative impact evaluation methods. However, those with more advanced knowledge will also benefit from some of the technical examples and appendices.

  • Chapters 1 and 2 define PN-RCTs and address design issues such as possibilities for random assignment, cluster formation, statistical power, and confounding factors that may mask the contribution of the intervention.
  • Chapter 3 discusses basic statistical models that adjust for the clustering of treatment students within intervention clusters, associated computer code for estimation, and a step-by-step guide, using examples, on how to estimate the models and interpret the output.
  • Chapter 4 and the technical appendices discuss more advanced statistical topics pertaining to PN-RCTs.
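A minimal sketch of the partially nested structure, using simulated data and a simplified design-based standard error (illustrative only; these are not the models or code from the paper itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PN-RCT: 12 tutoring groups of 10 treatment students each,
# plus 120 unclustered control students. All numbers are illustrative.
n_groups, m, n_control, true_impact = 12, 10, 120, 0.25
group_effect = rng.normal(0, 0.4, n_groups)   # clustering in treatment arm only
y_treat = np.concatenate([
    rng.normal(true_impact + g, 1.0, m) for g in group_effect
])
y_control = rng.normal(0.0, 1.0, n_control)

impact = y_treat.mean() - y_control.mean()

# Naive SE wrongly treats all treatment students as independent.
se_naive = np.sqrt(y_treat.var(ddof=1) / len(y_treat)
                   + y_control.var(ddof=1) / n_control)

# Partially nested SE: group means are the independent units on the
# treatment side; individual students remain the units on the control side.
group_means = y_treat.reshape(n_groups, m).mean(axis=1)
se_pn = np.sqrt(group_means.var(ddof=1) / n_groups
                + y_control.var(ddof=1) / n_control)

print(impact, se_naive, se_pn)
```

When tutoring groups genuinely differ, the partially nested standard error is typically larger than the naive one, which is why ignoring the one-sided clustering tends to overstate precision.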
7/31/2014
NCSER 20143000 Improving Reading Outcomes for Students with or at Risk for Reading Disabilities: A Synthesis of the Contributions from the Institute of Education Sciences Research Centers
The report describes what has been learned about improving reading outcomes for children with or at risk for reading disabilities through research funded by the Institute's National Center for Education Research and National Center for Special Education Research and published in peer-reviewed outlets through December 2011. The synthesis describes contributions to the knowledge base produced by IES-funded research across four focal areas:
  • Assessment: What have we learned about effective identification and assessment of students who have or are at risk for reading difficulties or disabilities?
  • Basic Cognitive and Linguistic Processes: What are the basic cognitive and linguistic processes that support successful reading and how can these skills be improved for students who have or who are at risk for reading disabilities?
  • Intervention: How do we make reading instruction more effective for students who have or are at risk for developing reading disabilities? How do we teach reading to students with low-incidence disabilities?
  • Professional Development: How do we bring research-based instructional practices to the classroom?
2/27/2014
NCSER 20133001 Synthesis of IES Research on Early Intervention and Early Childhood Education
The report describes what has been learned from research on early intervention and early childhood education funded by the Institute's National Center for Education Research and National Center for Special Education Research and published in peer-reviewed outlets through June 2010. This synthesis describes contributions to the knowledge base produced by IES-funded research across four focal areas:
* Early childhood classroom environments and general instructional practices;
* Educational practices designed to impact children's academic and social outcomes;
* Measuring young children's skills and learning; and
* Professional development for early educators.
Research supported by IES has made significant contributions to the evidence base in these areas. The authors also raise important questions for education research in the future, including:
* What are the crucial features of high-quality early childhood education?
* Which instruction is most effective for which children and under what circumstances?
* How do we effectively and efficiently support teachers in improving their instruction?
7/23/2013
NPEC 2012835 Defining and Reporting Subbaccalaureate Certificates in IPEDS
Subbaccalaureate certificates, postsecondary awards conferred as the result of successful completion of a formal program of study below the baccalaureate level, have become more prominent in higher education over the last decade. Institutions in all sectors offer subbaccalaureate certificates, which can range in length from a few months to more than 2 years. Subbaccalaureate certificates provide individuals with a means for gaining specific skills and knowledge that can be readily transferred to the workforce. As part of its mission to promote the quality, comparability, and utility of postsecondary data, the National Postsecondary Education Cooperative (NPEC) convened a working group to examine subbaccalaureate certificates and how they are reported in the U.S. Department of Education's Integrated Postsecondary Education Data System (IPEDS).
9/28/2012
NPEC 2012831 Information Required to Be Disclosed Under the Higher Education Act of 1965: Suggestions for Dissemination – A Supplemental Report
In 2009, the National Postsecondary Education Cooperative (NPEC) issued a report that provided suggestions on how postsecondary institutions could meet disclosure requirements under the Higher Education Act of 1965 (HEA), as amended by the Higher Education Opportunity Act (HEOA) of 2008. NPEC commissioned this paper to determine whether institutions were implementing the suggestions in its 2009 report on presenting disclosure requirements. Additionally, this paper identifies other resources and tools that institutions could use to present disclosure requirements in a consumer-friendly manner.
1/9/2012
NCSER 20123000REV Secondary School Programs and Performance of Students With Disabilities: A Special Topic Report of Findings From the National Longitudinal Transition Study-2 (NLTS2)
This report uses data from the National Longitudinal Transition Study-2 (NLTS2) to provide a national picture of what courses students with disabilities took in high school, in what settings, and with what success in terms of credits and grades earned.

This report has been revised to reflect the updated NLTS2 dataset released in 2013.
11/17/2011
NPEC 2012833 The History and Origins of Survey Items for the Integrated Postsecondary Education Data System
This project was conducted to determine the origin of items in the 2011-12 Integrated Postsecondary Education Data System (IPEDS) survey components. The report was developed to document the sources of current IPEDS data items as background information for interested parties and to provide guidance when NCES, technical review panels, and others are considering potential changes to the IPEDS data collection.
11/4/2011
NPEC 2012834 Suggestions for Improvements to the Collection and Dissemination of Federal Financial Aid Data
Several offices within the U.S. Department of Education collect and disseminate data about student financial aid. However, limitations of these data sources may make it difficult for consumers, policymakers, and researchers to gain a complete picture of the sources, types, and amounts of aid going to students at institutions of higher education and the relationship between aid and policy goals such as access and success. This report presents the findings and recommendations of the National Postsecondary Education Cooperative (NPEC) Working Group on Financial Aid Data, which sought to identify potential improvements to the collection and dissemination of federal financial aid data.
11/4/2011
NCEE 20124015 Whether and How to Use State Tests to Measure Student Achievement in a Multi-State Randomized Experiment: An Empirical Assessment Based on Four Recent Evaluations
An important question for educational evaluators is how best to measure academic achievement, the outcome of primary interest in many studies. In large-scale evaluations, student achievement has typically been measured by administering a common standardized test to all students in the study (a “study-administered test”). In the era of No Child Left Behind (NCLB), however, state assessments have become an increasingly viable source of information on student achievement. Using state test scores can yield substantial cost savings for the study and can eliminate the burden of additional testing on students and teaching staff. On the other hand, state tests can also pose certain difficulties: their content may not be well aligned with the outcomes targeted by the intervention, and variation in the content and scale of the tests can complicate pooling scores across states and grades.

This NCEE Reference Report, Whether and How to Use State Tests to Measure Student Achievement in a Multi-State Randomized Experiment: An Empirical Assessment Based on Four Recent Evaluations, examines the sensitivity of impact findings to (1) the type of assessment used to measure achievement (state tests or a study-administered test); and (2) analytical decisions about how to pool state test data across states and grades. These questions are examined using data from four recent IES-funded experimental design studies that measured student achievement using both state tests and a study-administered test. Each study spans multiple states and two of the studies span several grade levels.
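One common way to make scores from differently scaled tests comparable before pooling is to standardize within each state-by-grade cell; here is a minimal sketch with hypothetical data (the report's actual pooling decisions may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scores from two states whose tests use different scales.
state_a = rng.normal(500, 100, 300)   # e.g., a 200-800 scale
state_b = rng.normal(50, 10, 300)     # e.g., a 0-100 scale

def standardize(x):
    """z-score scores within one state-by-grade cell."""
    return (x - x.mean()) / x.std(ddof=1)

# After standardizing, each cell has mean 0 and unit variance, so the
# pooled scores are on a common (effect-size-like) metric.
pooled = np.concatenate([standardize(state_a), standardize(state_b)])
print(pooled.mean(), pooled.std(ddof=1))
```

Choices like whether to standardize within state, within grade, or within state-by-grade cells are exactly the analytical decisions whose influence on impact findings the report assesses.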
10/12/2011
NCEE 20124016 Estimating the Impacts of Educational Interventions Using State Tests or Study-Administered Tests
State assessments provide a relatively inexpensive and increasingly accessible source of data on student achievement. In the past, rigorous evaluations of educational interventions typically administered standardized tests selected by the researchers ("study-administered tests") to measure student achievement outcomes. Increasingly, researchers are turning to the lower cost option of using state assessments for measures of student achievement.
10/11/2011