
Publications & Products Search

Search Results: (1-15 of 222 records)

 Pub Number  Title  Date
REL 2014035 Speak Out, Listen Up! Tools for Using Student Perspectives and Local Data for School Improvement
Listening closely to what students say about their school experiences can help educators understand and address school-related topics and problems and rethink policies and practices. The purpose of this toolkit is to give educators a purposeful and systematic way to elicit and listen to student voice to inform school improvement efforts. School improvement is complex work that relies on multiple sources of information to frame challenges and to address and monitor change efforts. Student voice brings an additional, important source of information to these efforts. The toolkit offers three tools:
  • ASK (Analyzing Surveys with Kids) involves students in analyzing and interpreting survey results associated with a school-related topic or problem and then producing suggestions for school improvement.
  • Inside-Outside Fishbowl organizes a special kind of focus group in which students and educators trade roles as speakers and listeners during a facilitated discussion of a school-related topic or problem and jointly develop an action plan.
  • S4 (Students Studying Students’ Stories) guides a digital storytelling process in which students produce and analyze videotaped interviews of other students about a school-related topic or problem and then host forums with educators to suggest improvements.
The toolkit includes detailed information about how each tool works, the questions it addresses, the number and types of participants needed, the amount of time required, space and materials considerations, and directions for use. It also includes a tool template so schools and districts can create new student voice tools for their particular needs and interests.
7/29/2014
REL 2014014 Developing a Coherent Research Agenda: Lessons from the REL Northeast & Islands Research Agenda Workshops
This report describes the approach that REL Northeast and Islands (REL-NEI) used to guide its eight research alliances toward collaboratively identifying a shared research agenda. A key feature of their approach was a two-workshop series, during which alliance members created a set of research questions on a shared topic of education policy and/or practice. This report explains how REL-NEI conceptualized and organized the workshops, planned the logistics, overcame geographic distance among alliance members, developed and used materials (including modifications for different audiences and for a virtual platform), and created a formal research agenda after the workshops. The report includes links to access the materials used for the workshops, including facilitator and participant guides and slide decks.
7/10/2014
REL 2014015 The Effects of Increased Learning Time on Student Academic and Nonacademic Outcomes: Findings from a Meta-Analytic Review
REL Appalachia conducted a systematic review of the research evidence on the effects of increased learning time. After screening more than 7,000 studies, REL Appalachia identified 30 that met the most rigorous standards for research. A review of those 30 studies found that increased learning time does not always produce positive results. However, some forms of instruction tailored to the needs of specific types of students were found to improve outcomes for those students. Specific findings include:
  • Increased learning time promoted student achievement in mathematics and literacy when instruction was led by a certified teacher and when teachers used a traditional instructional style (i.e., the teacher is responsible for the progression of activities and students follow directions to complete tasks).
  • Increased learning time improved literacy outcomes for students performing below standards.
  • Increased learning time improved social-emotional skills of students with attention deficit/hyperactivity disorder.
7/9/2014
REL 2014024 Professional Practice, Student Surveys, and Value-Added: Multiple Measures of Teacher Effectiveness in the Pittsburgh Public Schools
Responding to federal and state prompting, school districts across the country are implementing new teacher evaluation systems that aim to increase the rigor of evaluation ratings, better differentiate effective teaching, and support personnel and staff development initiatives that promote teacher effectiveness and ultimately improve student achievement. Pittsburgh Public Schools (PPS) has been working for the last several years to develop richer and more comprehensive measures of teacher effectiveness in support of a larger effort to promote effective teaching. In partnership with PPS, REL Mid-Atlantic collected data from Pittsburgh on three different types of teacher performance measures: professional practice measures derived from the Danielson Framework for Teaching; Tripod student survey measures; and value-added measures designed to assess each teacher’s contribution to student achievement growth. The study found that each of the three types of measures has the potential to differentiate the performance levels of different teachers. Moreover, the three types of measures are positively but modestly correlated with each other, suggesting that they are valid and complementary measures of teacher effectiveness and that they can be combined to produce a measure that is more comprehensive than any single measure. School-level variation in the ratings on the professional practice measure, however, suggests that different principals may have different standards in assigning ratings, which in turn suggests that the measure might be improved by using more than one rater of professional practice for each teacher.
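Combining standardized measures into a composite, as described above, could look roughly like the sketch below. The column names, weights, and data are illustrative assumptions only, not the measures or weighting actually used in Pittsburgh.

```python
# Minimal sketch: standardize three hypothetical teacher-effectiveness measures
# and average them into a composite score. Not the PPS/REL Mid-Atlantic method.
import pandas as pd

def composite_score(df, weights=None):
    """Z-score each measure, then return a weighted-average composite."""
    measures = ["framework_rating", "tripod_survey", "value_added"]
    weights = weights or {m: 1 / len(measures) for m in measures}
    z = (df[measures] - df[measures].mean()) / df[measures].std(ddof=0)
    return sum(weights[m] * z[m] for m in measures)

# Invented example data for three teachers.
teachers = pd.DataFrame({
    "framework_rating": [2.8, 3.4, 3.1],
    "tripod_survey":    [0.62, 0.74, 0.70],
    "value_added":      [-0.10, 0.25, 0.05],
})
teachers["composite"] = composite_score(teachers)
print(teachers)
```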
7/8/2014
REL 2014051 Going Public: Writing About Research in Everyday Language
This brief describes approaches that writers can use to make impact research more accessible to policy audiences. It emphasizes three techniques: making concepts as simple as possible, focusing on what readers need to know, and reducing possible misinterpretations. An included glossary shows the approaches applied to a range of concepts common to impact research, such as ‘regression models’ and ‘effect sizes.’
6/24/2014
REL 2014033 Disproportionality in school discipline: An assessment of trends in Maryland, 2009–12
This study examines whether disproportionate rates of suspensions and expulsions existed for racial/ethnic minority students and special education students in Maryland during the period 2009/10 to 2011/12. The study found that disproportionalities between Black and White students increased in 2011/12 despite an overall decrease in the number of out-of-school suspensions and expulsions. Moreover, Black students received out-of-school suspensions or expulsions at more than twice the rate of White students. In addition, special education students were removed from school at more than twice the rate of students who are not in special education. This “Stated Briefly” report is a companion piece that summarizes the results of another report of the same name, released on March 5, 2014.
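A "more than twice the rate" finding is a rate ratio: each group's removal rate divided by a reference group's rate. The sketch below illustrates the arithmetic with invented counts, not Maryland data.

```python
# Illustrative rate-ratio arithmetic; all counts are hypothetical.
def rate_ratio(group_events, group_enrollment, ref_events, ref_enrollment):
    """Ratio of a group's removal rate to a reference group's removal rate."""
    return (group_events / group_enrollment) / (ref_events / ref_enrollment)

# Hypothetical example: 5,000 removals among 40,000 students in one group vs.
# 3,000 removals among 60,000 students in the reference group -> ratio of 2.5.
print(rate_ratio(5_000, 40_000, 3_000, 60_000))  # 2.5
```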
6/17/2014
REL 2014027 The English Language Learner Program Survey for Principals
The English Language Learner (ELL) Program Survey for Principals includes survey questions for state education agencies to use to collect data about: 1) school-level policies and practices for educating ELL students; 2) the types of professional development related to ELL education that principals have received and would like to receive; 3) principals’ familiarity with state guidelines and standards for ELL student education; and 4) principals’ beliefs about the education of ELL students.

The survey tool was developed by the Regional Educational Laboratory Northeast and Islands in support of the research agenda of the English Language Learners Research Alliance, which is dedicated to improving state, district, and school collection and use of data about ELLs and to exploring programs and services that best fit the needs of students who are English Learners.
6/17/2014
REL 2014036 Using evidence-based decision trees instead of formulas to identify at-risk readers
The purpose of this study was to examine whether the early identification of students who are at risk for reading comprehension difficulties is improved by using logistic regression or classification and regression tree (CART) analysis. This research question was motivated by state education leaders’ interest in maintaining high classification accuracy while simultaneously improving practitioner understanding of the rules by which students are identified as at-risk or not at-risk readers. Logistic regression and CART were compared using data on a sample of Florida public school students in grades 1 and 2 who participated in both interim assessments and an end-of-year summative assessment during the 2012/13 academic year. Grade-level analyses were conducted, and comparisons between methods were based on traditional measures of diagnostic accuracy, including sensitivity (i.e., proportion of true positives), specificity (proportion of true negatives), positive and negative predictive power, overall correct classification, and the receiver operating characteristic area under the curve. Results indicate that CART is comparable to logistic regression, with both methods yielding negative predictive power greater than the recommended standard of .90. The comparability of results suggests that CART should be preferred because practitioners find its decision rules easier to interpret. In addition, CART holds several technical advantages over logistic regression.
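A comparison of this kind could be sketched as follows, using synthetic data and scikit-learn stand-ins for the two models. The data, model settings, and metrics printed are illustrative assumptions, not the study's actual analysis.

```python
# Minimal sketch: compare logistic regression and a CART-style decision tree
# on synthetic screening data, reporting the diagnostic-accuracy measures
# named in the abstract (sensitivity, specificity, NPV, ROC AUC).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2_000, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("logistic", LogisticRegression(max_iter=1_000)),
                    ("CART", DecisionTreeClassifier(max_depth=3, random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    sensitivity = tp / (tp + fn)   # proportion of true positives identified
    specificity = tn / (tn + fp)   # proportion of true negatives identified
    npv = tn / (tn + fn)           # negative predictive power
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: sens={sensitivity:.2f} spec={specificity:.2f} "
          f"NPV={npv:.2f} AUC={auc:.2f}")
```

A shallow tree (here capped at depth 3) yields a handful of plain if-then rules, which is the interpretability advantage the abstract describes.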
6/17/2014
REL 2014048 Making the Most of Opportunities to Learn What Works: A School District's Guide
This guide for district and school leaders shows how to recognize opportunities to embed randomized controlled trials (RCTs) into planned policies or programs. Opportunistic RCTs can generate strong evidence for informing education decisions—with minimal added cost and disruption. The guide also outlines the key steps to conduct RCTs and responds to common questions and concerns about RCTs. Readers will find a real-life example of how one district took advantage of an opportunity to learn whether a summer reading program worked.
6/2/2014
REL 2014034 Program Monitoring: The Role of Leadership in Planning, Assessment, and Communication
This guide examines three components of program monitoring—planning, assessment, and communication—and identifies key measures of successful leadership for each component. This guide is one piece of a four-part series on program planning and monitoring released by REL Pacific at McREL.
5/27/2014
REL 2014032 Beating the Odds: Finding Schools Exceeding Achievement Expectations with High-Risk Students
State education leaders are often interested in identifying schools that have demonstrated success in improving the literacy of students who are at the highest level of risk for reading difficulties. Identifying these schools that are “beating the odds” is typically accomplished by comparing a school’s observed performance on a particular exam, such as a state achievement exam, with how the school would be expected to perform given its demographic characteristics, including the percentages of students classified as economically disadvantaged, as minority, or as English language learners. This study used longitudinal data from the Florida Department of Education on grade 3 public school students for the academic years 2010/11–2012/13 to determine which schools exceeded student achievement expectations and what demographic similarities exist between those schools and other schools.
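One common way to operationalize the observed-versus-expected comparison described above is to regress school performance on demographic characteristics and flag schools with large positive residuals. The sketch below uses synthetic data and invented column names; it is not the study's actual model.

```python
# Hedged sketch of "beating the odds" identification: fit an expectation model
# from demographics, then rank schools by how far observed scores exceed it.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # synthetic schools
schools = pd.DataFrame({
    "pct_econ_disadvantaged": rng.uniform(10, 95, n),
    "pct_minority":           rng.uniform(5, 95, n),
    "pct_ell":                rng.uniform(0, 50, n),
})
# Synthetic outcome: scores decline with risk factors plus school-level noise.
schools["mean_reading_score"] = (
    240
    - 0.20 * schools["pct_econ_disadvantaged"]
    - 0.05 * schools["pct_ell"]
    + rng.normal(0, 5, n)
)

X = sm.add_constant(schools[["pct_econ_disadvantaged", "pct_minority", "pct_ell"]])
fit = sm.OLS(schools["mean_reading_score"], X).fit()

# Residual = observed minus expected score given demographics; the largest
# positive residuals mark schools performing above expectations.
schools["residual"] = fit.resid
print(schools.nlargest(5, "residual"))
```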
5/6/2014
REL 2014028 Suspension and Expulsion Patterns in Six Oregon School Districts
This Regional Educational Laboratory (REL) Northwest study identifies how frequently students in six selected urban districts received exclusionary discipline during the 2011/12 school year, the most common reasons for such discipline, the percentage of students receiving multiple suspensions, and how many school days students lost to suspensions. The study also examined the application of exclusionary discipline at different grade spans and by student gender, race/ethnicity, and special education status.

Key findings include:
  • During 2011/12, 6.4 percent of students were removed from regular classroom instruction because they were suspended or expelled. The most common reasons were physical and verbal aggression and insubordination/disruption.
  • Nearly 40 percent of students who were suspended received more than one suspension over the school year.
  • The average number of school days suspended among students receiving at least one suspension was 3.3 days.
5/6/2014
REL 2014037 Recognizing and Conducting Opportunistic Experiments in Education: A Guide for Policymakers and Researchers
Opportunistic experiments are a type of randomized controlled trial that studies the effects of a planned intervention or policy change with minimal added disruption and cost. This guide defines opportunistic experiments and provides examples, discusses issues to consider when identifying potential opportunistic experiments, and outlines the critical steps to complete opportunistic experiments. It concludes with a discussion of the potentially low cost of conducting opportunistic experiments and the potentially high cost of not conducting them. Readers will also find a checklist of key questions to consider when conducting opportunistic experiments.
5/6/2014
REL 2014016 Alternative student growth measures for teacher evaluation: Profiles of early‑adopting districts
States and districts are beginning to use student achievement growth — as measured by state assessments (often using statistical techniques known as value-added models or student growth models) — as part of their teacher evaluation systems. But this approach has limited application in most states, because their assessments are typically administered only in grades 3–8 and only in math and reading. In response, some districts have turned to alternative measures of student growth. These alternative measures include alternative assessment-based value-added models (VAMs) that use the results of end-of-course assessments or commercially available tests in statistical models, and student learning objectives (SLOs), which are determined by individual teachers, approved by principals, and used in evaluations that do not involve sophisticated statistical modeling.
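In its simplest form, an assessment-based value-added model like those mentioned above regresses each student's end-of-course score on a prior score plus teacher indicators and reads teacher effects off the fitted coefficients. The sketch below is a deliberately simplified illustration with synthetic data; operational VAMs add many more controls and statistical adjustments such as shrinkage.

```python
# Simplified VAM illustration (synthetic data, not any district's model):
# post-test ~ prior score + teacher indicators; teacher coefficients estimate
# each teacher's contribution to growth relative to the reference teacher.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
true_effect = {"A": 0.0, "B": 2.0, "C": -1.0, "D": 3.0}  # invented "true" effects

rows = []
for t, effect in true_effect.items():
    prior = rng.normal(500, 20, 50)                    # 50 students per teacher
    post = prior + effect + rng.normal(0, 10, 50)      # growth plus noise
    rows.append(pd.DataFrame({"teacher": t, "prior": prior, "post": post}))
students = pd.concat(rows, ignore_index=True)

# Teacher A is the omitted reference category; the remaining coefficients are
# the estimated value-added of teachers B, C, and D relative to A.
fit = smf.ols("post ~ prior + C(teacher)", data=students).fit()
print(fit.params.filter(like="teacher"))
```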

For this report, administrators in eight districts that were early adopters of alternative measures of student growth were interviewed about how they used these measures to evaluate teacher performance. Key findings from the study are:
  • Districts using SLOs chose them as a teacher-guided method of assessing student growth, while those using alternative assessment-based VAMs chose to take advantage of existing assessments.
  • SLOs can be used for teacher evaluation in any grade or subject, but require substantial effort by teachers and principals, and ensuring consistency is challenging.
  • In the four SLO districts, SLOs are required of all teachers across grades K–12, regardless of whether the teachers serve grades or subjects that include district-wide standardized tests.
  • Alternative student assessments used by VAM districts differ by developer, alignment with specific courses, and coverage of grades and subjects.
  • VAMs applied to end-of-course and commercial assessments create consistent districtwide measures but generally require technical support from an outside provider.
4/29/2014
REL 2014019 Early Childhood Educator and Administrator Surveys on the use of assessments and standards in early childhood settings
The Early Childhood Educator Survey and the Early Childhood Administrator Survey allow users to collect consistent data on the use of child assessments and learning standards in early childhood learning settings. Each survey includes modules on educator/administrator background information, assessment use, and learning standards implementation. The surveys and modules can be used either together or individually, and are part of a research agenda to improve early childhood programming and child outcomes through research- and evidence-based practices. Regional Educational Laboratory Northeast & Islands developed these surveys in partnership with its Early Childhood Education Research Alliance.
4/15/2014
National Center for Education Statistics - http://nces.ed.gov
U.S. Department of Education