
Search Results: (1-15 of 109 records)

 Pub Number  Title  Date
NCES 2022043 National Household Education Surveys Program of 2019: Qualitative Study of Nonresponding Addresses
This report documents the methods and findings of a qualitative study of nonrespondent addresses to the 2019 administration of the National Household Education Surveys Program (NHES:2019). The study included two components: (1) 85 in-depth, qualitative interviews and (2) 760 address observations. The overarching goal was to better understand the drivers of nonresponse to the NHES and to provide additional, actionable information on how to combat this growing problem.
1/24/2022
REL 2022133 Branching Out: Using Decision Trees to Inform Education Decisions
Classification and Regression Tree (CART) analysis is a statistical modeling approach that uses quantitative data to predict future outcomes by generating decision trees. CART analysis can be useful for educators to inform their decisionmaking. For example, educators can use a decision tree from a CART analysis to identify students who are most likely to benefit from additional support early—in the months and years before problems fully materialize. This guide introduces CART analysis as an approach that allows data analysts to generate actionable analytic results that can inform educators’ decisions about the allocation of extra supports for students. Data analysts with intermediate statistical software programming experience can use the guide to learn how to conduct a CART analysis and support research directors in local and state education agencies and other educators in applying the results. Research directors can use the guide to learn how results of CART analyses can inform education decisions.
12/27/2021
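As a concrete illustration of the approach the guide describes, the sketch below fits a shallow CART model in Python with scikit-learn and prints the resulting decision rules. The data file, column names, and tree depth are hypothetical; the guide itself may use different software and variables.

```python
# Minimal CART sketch: flag students who may benefit from extra support.
# The file and column names are hypothetical, not from the guide.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.read_csv("student_records.csv")
predictors = ["interim_score", "attendance_rate", "prior_gpa"]
X_train, X_test, y_train, y_test = train_test_split(
    data[predictors], data["needs_support"], random_state=0
)

# A shallow tree keeps the decision rules readable for educators.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

# Print the tree as if-then rules practitioners can act on.
print(export_text(tree, feature_names=predictors))
print("Holdout accuracy:", tree.score(X_test, y_test))
```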
REL 2021074 Steps to Develop a Model to Estimate School- and District-Level Postsecondary Success
This tool is intended to support state and local education agencies in developing a statistical model for estimating student postsecondary success at the school or district level. The tool guides education agency researchers, analysts, and decisionmakers through options to consider when developing their own model. The resulting model generates an indicator of a school's or district's contribution to the postsecondary success of its students after accounting for contextual factors that might be outside the school's or district's control, such as student demographic characteristics and community characteristics. State and local education agencies could use the information generated by the models they develop to help meet federal and state reporting requirements and to inform their own efforts to improve their students' postsecondary success.
3/22/2021
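One common way to build an indicator of this kind, sketched below under assumed data and variable names rather than the tool's prescribed model, is to regress each school's postsecondary success rate on contextual factors and treat the residual as the school's estimated contribution.

```python
# Residual-based sketch of a school "contribution" indicator.
# File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

schools = pd.read_csv("school_outcomes.csv")
context = ["pct_low_income", "pct_english_learners", "median_family_income"]

# Expected success rate given contextual factors outside a school's control.
model = LinearRegression().fit(schools[context], schools["postsec_success_rate"])
expected = model.predict(schools[context])

# Positive residuals suggest success beyond what context alone predicts.
schools["contribution"] = schools["postsec_success_rate"] - expected
print(schools.sort_values("contribution", ascending=False).head())
```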
NCES 2017147 Best Practices for Determining Subgroup Size in Accountability Systems While Protecting Personally Identifiable Student Information
The Every Student Succeeds Act (ESSA) of 2015 (Public Law 114-95) requires each state to create a plan for its statewide accountability system. In particular, ESSA calls for state plans that include strategies for reporting education outcomes by grade for all students and for economically disadvantaged students, students from major racial and ethnic groups, students with disabilities, and English learners. In their plans, states must specify a single value for the minimum number of students needed to provide statistically sound data for all students and for each subgroup, while protecting personally identifiable information (PII) of individual students. This value is often referred to as the "minimum n-size."

Choosing a minimum n-size is complex and involves important and difficult trade-offs. For example, the selection of smaller minimum n-sizes will ensure that more students' outcomes are included in a state's accountability system, but smaller n-sizes can also increase the likelihood of the inadvertent disclosure of PII. Similarly, smaller minimum n-sizes enable more complete data to be reported, but they may also affect the reliability and statistical validity of the data.

To inform this complex decision, Congress required the Institute of Education Sciences (IES) of the U.S. Department of Education to produce and widely disseminate a report on "best practices for determining valid, reliable, and statistically significant minimum numbers of students for each of the subgroups of students" (Every Student Succeeds Act of 2015 (ESSA 2015), Public Law 114-95). Congress also directed that the report describe how such a minimum number "will not reveal personally identifiable information about students." ESSA prohibits IES from recommending any specific minimum number of students in a subgroup (Section 9209).

IES produced this report to assist states as they develop accountability systems that (1) comply with ESSA; (2) incorporate sound statistical practices and protections; and (3) meet the information needs of state accountability reporting, while still protecting the privacy of individual students.

As presented in this report, the minimum n-size refers to the lowest statistically defensible subgroup size that can be reported in a state accountability system. Before getting started, it is important to understand that the minimum n-size a state establishes and the privacy protections it implements will directly determine how much data will be publicly reported in the system.
1/12/2017
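To make the reporting consequence concrete, the sketch below applies a minimum n-size as a suppression rule to a hypothetical results table. The threshold of 30 is purely illustrative; ESSA bars IES from recommending any specific value.

```python
# Illustrative suppression rule: report subgroup outcomes only when the
# subgroup meets the minimum n-size. The threshold is NOT a recommendation.
import pandas as pd

MIN_N = 30  # hypothetical state-chosen minimum n-size

results = pd.DataFrame({
    "subgroup": ["All students", "Econ. disadvantaged", "English learners",
                 "Students with disabilities"],
    "n": [412, 118, 27, 14],
    "pct_proficient": [61.0, 48.5, 44.4, 35.7],
})

# Mask outcomes for subgroups below the minimum n-size to protect PII.
results.loc[results["n"] < MIN_N, "pct_proficient"] = None
print(results)
```

A smaller MIN_N would leave more rows reported but raise disclosure risk, which is exactly the trade-off described above.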
REL 2015077 Comparing Methodologies for Developing an Early Warning System: Classification and Regression Tree Model Versus Logistic Regression
The purpose of this report was to explicate the use of logistic regression and classification and regression tree (CART) analysis in the development of early warning systems. It was motivated by state education leaders' interest in maintaining high classification accuracy while simultaneously improving practitioner understanding of the rules by which students are identified as at-risk or not at-risk readers. Logistic regression and CART were compared using data on a sample of Florida public school students in grades 1 and 2 who participated in both interim assessments and an end-of-year summative assessment during the 2012/13 academic year. Grade-level analyses were conducted, and comparisons between methods were based on traditional measures of diagnostic accuracy, including sensitivity (the proportion of true positives), specificity (the proportion of true negatives), positive and negative predictive power, and overall correct classification. Results indicate that CART is comparable to logistic regression, with both methods yielding negative predictive power greater than the recommended standard of .90. Details of each method are provided to assist analysts interested in developing early warning systems using one of the methods.
2/25/2015
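The accuracy measures used in the comparison can be computed directly from a 2x2 classification table; the counts below are made up for illustration.

```python
# Diagnostic accuracy measures for an early warning classifier.
# Hypothetical counts comparing predicted vs. actual at-risk status.
tp, fp, fn, tn = 85, 20, 10, 185

sensitivity = tp / (tp + fn)  # proportion of at-risk readers correctly flagged
specificity = tn / (tn + fp)  # proportion of not-at-risk readers correctly cleared
ppv = tp / (tp + fp)          # positive predictive power
npv = tn / (tn + fn)          # negative predictive power
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
print(f"PPV={ppv:.2f}  NPV={npv:.2f}  accuracy={accuracy:.2f}")
# The report's benchmark: NPV should exceed the recommended standard of .90.
```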
REL 2015071 How Methodology Decisions Affect the Variability of Schools Identified as Beating the Odds
Schools that show better academic performance than would be expected given the characteristics of the school and its student population are often described as "beating the odds" (BTO). State and local education agencies often attempt to identify such schools as a means of identifying strategies or practices that might be contributing to the schools' relative success. Key decisions about how to identify BTO schools may affect whether schools make the BTO list, and thereby the identification of practices used to beat the odds. The purpose of this study was to examine how a list of BTO schools might change depending on the methodological choices and selection of indicators used in the BTO identification process. The three indicators considered were (1) the type of performance measure used to compare schools, (2) the types of school characteristics used as controls in selecting BTO schools, and (3) the school sample configuration used to pool schools across grade levels. The study applied statistical models involving the different methodologies and indicators and documented how the lists of schools identified as BTO changed across models. Public school and student data from one Midwestern state for the 2007/08 through 2010/11 academic years were used to generate BTO school lists. By performing pairwise comparisons among BTO school lists and computing agreement rates among models, the project team was able to gauge the variation in BTO identification results. Results indicate that even when similar specifications were applied across statistical methods, different sets of BTO schools were identified. In addition, for each statistical method used, the lists of BTO schools identified varied with the choice of indicators. Fewer than half of the schools were identified as BTO in more than one year. The results demonstrate that different technical decisions can lead to different identification results.
2/24/2015
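The pairwise comparison described above reduces to computing an agreement rate between two lists of identified schools. One common formulation, sketched below with hypothetical school IDs (and not necessarily the study's exact measure), is the share of schools on either list that appear on both.

```python
# Agreement rate between BTO lists from two model specifications.
# School IDs are hypothetical.
list_a = {"sch_01", "sch_04", "sch_07", "sch_09", "sch_12"}
list_b = {"sch_01", "sch_05", "sch_07", "sch_12", "sch_15"}

# Schools both models identify, as a share of all schools either identifies.
agreement = len(list_a & list_b) / len(list_a | list_b)
print(f"Agreement rate: {agreement:.2f}")  # 3 of 7 schools -> 0.43
```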
REL 2014064 Reporting What Readers Need to Know about Education Research Measures: A Guide
This brief provides five checklists to help researchers provide complete information describing (1) their study's measures; (2) data collection training and quality; (3) the study's reference population, study sample, and measurement timing; (4) evidence of the reliability and construct validity of the measures; and (5) missing data and descriptive statistics. The brief includes an example of parts of a report's methods and results section illustrating how the checklists can be used to check the completeness of reporting.
9/9/2014
NCES 2014097 NCES Statistical Standards
This publication contains the 2012 revised statistical standards and guidelines for the National Center for Education Statistics (NCES). These standards and guidelines are intended for use by NCES staff and contractors to guide them in their data collection, analysis, and dissemination activities. They are also intended to present a clear statement for data users regarding how data should be collected in NCES surveys, and the limits of acceptable applications and use. Users should be cognizant that the contents of this publication are continually being reviewed for technological and statistical advances.
5/22/2014
NCES 2013009REV Highlights From TIMSS 2011: Mathematics and Science Achievement of U.S. Fourth- and Eighth-Grade Students in an International Context
The Trends in International Mathematics and Science Study (TIMSS) 2011 is the fifth administration of this international comparative study since it was first administered in 1995. TIMSS is used to compare the mathematics and science knowledge and skills of fourth- and eighth-graders over time. TIMSS is designed to align broadly with mathematics and science curricula in the participating countries. The results, therefore, suggest the degree to which students have learned mathematics and science concepts and skills likely to have been taught in school. In 2011, 54 countries and 20 other education systems participated in TIMSS at the fourth-grade level, the eighth-grade level, or both.

The focus of the report is on the performance of U.S. students relative to their peers in other countries in 2011, and on changes in mathematics and science achievement since 2007 and 1995. For a number of participating countries and education systems, changes in achievement can be documented over the 16 years from 1995 to 2011. This report also describes achievement within the United States by sex, race/ethnicity, and enrollment in public schools with different levels of poverty. In addition, it describes achievement in nine states that participated in TIMSS both as part of the U.S. national sample of public and private schools and individually with state-level samples of public schools.

In addition to numerical scale results, TIMSS also includes international benchmarks. The TIMSS international benchmarks provide a way to interpret the scale scores by describing the types of knowledge and skills students demonstrate at different levels along the TIMSS scale.

After the initial release of the NCES TIMSS 2011 national report and supplemental tables, several minor changes were made to the report, Appendix A, and to footnotes in Appendix E. View the errata notice for details.
12/11/2012
NCSER 20133000 Translating the Statistical Representation of the Effects of Education Interventions Into More Readily Interpretable Forms
This new Institute of Education Sciences (IES) report assists with the translation of effect size statistics into more readily interpretable forms for practitioners, policymakers, and researchers. This paper is directed to researchers who conduct and report education intervention studies. Its purpose is to stimulate and guide researchers to go a step beyond reporting the statistics that represent group differences. With what is often very minimal additional effort, those statistical representations can be translated into forms that allow their magnitude and practical significance to be more readily understood by those who are interested in the intervention that was evaluated.
11/28/2012
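One translation of the kind the paper encourages converts a standardized mean difference (Cohen's d) into a percentile gain for the average intervention-group student. The conversion below is a standard one and not necessarily the paper's own; the effect size is made up.

```python
# Translate Cohen's d into a percentile gain, assuming normal outcomes.
from statistics import NormalDist

d = 0.25  # hypothetical standardized mean difference

# Percentile rank of the average treated student within the control group.
percentile = NormalDist().cdf(d) * 100
print(f"d = {d}: average treated student at the {percentile:.0f}th percentile,")
print(f"a gain of {percentile - 50:.0f} percentile points over the control mean")
```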
NCES 2012151 Statistical Methods for Protecting Personally Identifiable Information in the Disclosure of Graduation Rates of First-Time, Full-Time Degree- or Certificate-Seeking Undergraduate Students by 2-Year Degree-Granting Institutions of Higher Education
This Technical Brief provides guidance to Title IV 2-year degree-granting institutions in meeting the statutory disclosure requirement related to graduation rates while minimizing the risk of revealing the graduation status of individual students.
10/25/2011
NCEE 20124016 Estimating the Impacts of Educational Interventions Using State Tests or Study-Administered Tests
State assessments provide a relatively inexpensive and increasingly accessible source of data on student achievement. In the past, rigorous evaluations of educational interventions typically administered standardized tests selected by the researchers ("study-administered tests") to measure student achievement outcomes. Increasingly, researchers are turning to the lower cost option of using state assessments for measures of student achievement.
10/11/2011
NCES 2011609 NCES Handbook of Survey Methods
This publication presents explanations of how each survey program in NCES obtains and prepares the data it publishes. The Handbook aims to provide users of NCES data with the information necessary to evaluate the suitability of the statistics for their needs, with a focus on the methodologies for survey design, data collection, and data processing.
6/28/2011
NCES 2011607 National Institute of Statistical Sciences Configuration and Data Integration Technical Panel: Final Report
NCES asked the National Institute of Statistical Sciences (NISS) to convene a technical panel of survey and policy experts to examine potential strategies for configuration and data integration among successive national longitudinal education surveys. In particular, the technical panel was asked to address two related issues: how NCES could configure the timing of its longitudinal studies (e.g., Early Childhood Longitudinal Study [ECLS], Education Longitudinal Study [ELS], and High School Longitudinal Study [HSLS]) in a maximally efficient and informative manner, with a main, but not sole, focus on the primary and secondary levels; and what NCES could do to support data integration for statistical and policy analyses that cross breakpoints between longitudinal studies. The NISS technical panel delivered its report to NCES in 2009. Its principal recommendations were:

  1. The technical panel recommended that NCES configure K-12 studies as a series of three studies: (i) a K-5 study, followed immediately by (ii) a 6-8 study, followed immediately by (iii) a 9-12 study. One round of such studies, ignoring postsecondary follow-up to the 9-12 study, requires 13 years to complete.
  2. The technical panel also recommended that, budget permitting, NCES initiate a new round of K-12 studies every 10 years. This can be done in a way that minimizes the number of years in which multiple major assessments occur.

The panel found that there is no universal strategy by which NCES can institutionalize data integration across studies. One strategy was examined in detail: continuation of students from one study to the next. Based on experiments conducted by NISS, the technical panel found that:

  3. The case for continuation on the basis that it supports cross-study statistical inference is weak. Use of high-quality retrospective data that are either currently available or are likely to be available in the future can accomplish nearly as much at lower cost.
  4. Continuation is problematic in at least two other senses: first, principled methods for constructing weights may not exist, and second, no matter how much NCES might advise to the contrary, researchers are likely to attempt what is likely to be invalid or uninformative inference on the basis of continuation cases alone.
  5. The technical panel urged that, as an alternative means of addressing specific issues that cross studies, NCES consider the expense and benefit of small studies that target specific components of students' trajectories.
3/28/2011
NCES 2011608 National Institute of Statistical Sciences Data Confidentiality Technical Panel: Final Report
NCES asked the National Institute of Statistical Sciences (NISS) to convene a technical panel of survey and policy experts to examine NCES's current and planned data dissemination strategies for confidential data with respect to: mandates and directives that NCES make data available; current and prospective technologies for protecting and accessing confidential data, as well as for breaking confidentiality; and the various user communities for NCES data and these communities' uses of the data. The principal goals of the technical panel were to review NCES's current and planned data dissemination strategies for confidential data, assessing whether these strategies are appropriate in terms of both disclosure risk and data utility, and then to recommend to NCES any changes that the panel deemed desirable or necessary. The NISS technical panel delivered its report to NCES in 2008. The report included four principal recommendations, the first three of which confirmed existing NCES strategies and practices:

  1. The technical panel recommended that all NCES analyses and publications be based on restricted databases produced by applying data swapping operations to original data as collected and edited.
  2. The technical panel also recommended that access to restricted databases be controlled under license from NCES.
  3. The panel recommended that NCES produce public databases whenever possible (by applying further appropriate statistical disclosure limitation techniques) and provide access to the public databases electronically by means of a data access system (DAS).
  4. Furthermore, the panel recommended that NCES tailor the user interfaces of data access systems to user communities.
3/10/2011
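Recommendation 1 refers to data swapping, a statistical disclosure limitation technique that exchanges attribute values between randomly paired records so that no released record is guaranteed to be unaltered. The sketch below is a minimal illustration with made-up records and an arbitrary swap volume.

```python
# Minimal data-swapping sketch with hypothetical records.
import random

random.seed(0)
records = [
    {"id": 1, "zip": "001", "score": 72},
    {"id": 2, "zip": "002", "score": 65},
    {"id": 3, "zip": "003", "score": 88},
    {"id": 4, "zip": "004", "score": 59},
    {"id": 5, "zip": "005", "score": 91},
    {"id": 6, "zip": "006", "score": 77},
]

# Randomly pair a subset of records and exchange an identifying attribute.
n_pairs = 2  # hypothetical swap volume
chosen = random.sample(records, n_pairs * 2)
for a, b in zip(chosen[::2], chosen[1::2]):
    a["zip"], b["zip"] = b["zip"], a["zip"]

for r in records:
    print(r)
```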