Search Results: (1-6 of 6 records)

 Pub Number  Title  Date
REL 2021107 Characteristics and Performance of High School Equivalency Exam Takers in New Jersey
Since 2014 the New Jersey Department of Education has offered three high school equivalency (HSE) exams for nongraduates seeking credentials: the GED, the High School Equivalency Test (HiSET), and the Test Assessing Secondary Completion (TASC). This study used data on exam takers who had been grade 8 students in a New Jersey public school between 2008/09 and 2013/14 and who had attempted at least one HSE exam in New Jersey between March 2014 and December 2018. It analyzed how the characteristics of exam takers differ across exams and from the characteristics of non–exam takers, how the performance of exam takers with similar backgrounds varies, and how a recent reduction in the passing threshold for two of the exams affected passing rates.

Among all students who had been grade 8 students in a New Jersey public school during the study years, HSE exam takers completed fewer years of school, were more likely to have been eligible for the national school lunch program in grade 8, and were more likely to identify as Black or Hispanic than non–exam takers. GED takers had received higher grade 8 standardized test scores, were more likely to identify as White, and were less likely to have been eligible for the national school lunch program in grade 8 than HiSET and TASC takers. Under the New Jersey Department of Education's original passing thresholds, exam takers in the study sample were more likely to pass the HiSET and TASC than the GED on the first attempt (after grade 8 standardized test scores were controlled for). However, after the reduction in passing thresholds, the first-attempt passing rate was similar across the three exams. Under the new thresholds, two-thirds of GED takers and more than half of HiSET and TASC takers passed on the first attempt, and, when all exam attempts are included, three-quarters of the takers of each exam eventually passed it.
REL 2014016 Alternative student growth measures for teacher evaluation: Profiles of early-adopting districts
States and districts are beginning to use student achievement growth — as measured by state assessments (often using statistical techniques known as value-added models or student growth models) — as part of their teacher evaluation systems. But this approach has limited application in most states, because their assessments are typically administered only in grades 3–8 and only in math and reading. In response, some districts have turned to alternative measures of student growth. These alternative measures include alternative assessment-based value-added models (VAMs) that use the results of end-of-course assessments or commercially available tests in statistical models, and student learning objectives (SLOs), which are determined by individual teachers, approved by principals, and used in evaluations that do not involve sophisticated statistical modeling.

For this report, administrators in eight districts that were early adopters of alternative measures of student growth were interviewed about how they used these measures to evaluate teacher performance. Key findings from the study are:
  • Districts using SLOs chose them as a teacher-guided method of assessing student growth, while those using alternative assessment-based VAMs chose to take advantage of existing assessments.
  • SLOs can be used for teacher evaluation in any grade or subject, but require substantial effort by teachers and principals, and ensuring consistency is challenging.
  • In the four SLO districts, SLOs are required of all teachers across grades K–12, regardless of whether the teachers serve grades or subjects that include district-wide standardized tests.
  • Alternative student assessments used by VAM districts differ by developer, alignment with specific courses, and coverage of grades and subjects.
  • VAMs applied to end-of-course and commercial assessments create consistent districtwide measures but generally require technical support from an outside provider.
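The alternative assessment-based VAM approach summarized above can be illustrated with a stylized covariate-adjustment model. Everything in this sketch (the simulated scores, the teacher effects, and the two-step residual method) is an illustrative assumption, not the actual model used by any of the profiled districts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3 teachers with 50 students each.
# True teacher effects (in test-score points) are assumed for illustration.
true_effects = np.array([-4.0, 0.0, 4.0])
teacher = np.repeat(np.arange(3), 50)
pretest = rng.normal(50, 10, size=teacher.size)
posttest = 5 + 0.9 * pretest + true_effects[teacher] + rng.normal(0, 4, size=teacher.size)

# Step 1: regress current scores on prior scores (covariate adjustment).
X = np.column_stack([np.ones_like(pretest), pretest])
beta, *_ = np.linalg.lstsq(X, posttest, rcond=None)
residual = posttest - X @ beta

# Step 2: a teacher's value-added estimate is the mean residual of
# that teacher's students: how far they scored above or below what
# their prior achievement predicted.
vam = np.array([residual[teacher == t].mean() for t in range(3)])
print(np.round(vam, 2))
```

With equal class sizes the estimates center on zero by construction, so each value is read as points above or below the district average; real systems add shrinkage and further controls, which this sketch omits.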
NCEE 20114033 Variability in Pretest-Posttest Correlation Coefficients by Student Achievement Level
State assessments are increasingly used as outcome measures for education evaluations. The scaling of state assessments produces variability in measurement error, with the conditional standard error of measurement increasing as average student ability moves toward the tails of the achievement distribution. This report examines the variability in pretest-posttest correlation coefficients of state assessment data for samples of low-performing, average-performing, and proficient students to illustrate how sample characteristics (including the measurement error of observed scores) affect pretest-posttest correlation coefficients. As an application, this report highlights how statistical power can be attenuated when correlation coefficients vary according to sample characteristics. Achievement data from four states and two large districts in both English/Language Arts and Mathematics for three recent years are examined. The results confirm that pretest-posttest correlation coefficients are smaller for samples of low performers, reducing statistical power for impact studies. Substantial variation across state assessments was also found. These findings suggest that it may be useful to assess the pretest-posttest correlation coefficients of state assessments for an intervention’s target population during the planning phase of a study.
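The power attenuation described above can be made concrete with the standard minimum detectable effect size (MDES) approximation for a two-arm study with a pretest covariate, where residual variance scales with (1 - rho^2). The sample sizes, correlations, and the multiplier M are illustrative assumptions, not figures from the report:

```python
import math

def mdes(n, rho, p=0.5, M=2.8):
    """Approximate MDES for a two-arm study of n students, a fraction p
    treated, and pretest-posttest correlation rho. M ~ 2.8 corresponds
    to 80% power at a two-sided alpha of .05 (illustrative constant)."""
    return M * math.sqrt((1 - rho**2) / (p * (1 - p) * n))

# Hypothetical contrast: a proficient sample (rho = 0.8) versus a
# low-performing sample (rho = 0.6), with 400 students in each study.
for rho in (0.8, 0.6):
    print(f"rho={rho}: MDES={mdes(400, rho):.3f}")
```

Holding the sample fixed, dropping the correlation from 0.8 to 0.6 inflates the MDES by sqrt(0.64/0.36), about a third, which is why the report recommends checking these correlations for the target population at the planning stage.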
REL 2011102 How Student and School Characteristics Are Associated with Performance on the Maine High School Assessment
This study used multilevel regression models to examine how student characteristics, student prior achievement measures, and school characteristics are associated with performance on the Maine High School Assessment. It finds statistically significant relationships between several of these variables and assessment scores in reading, writing, math, and science.
REL 2010087 The Relationship Between Changes in the Percentage of Students Passing and in the Percentage Testing Advanced on State Assessment Tests for Kentucky and Virginia
Under the accountability provisions of the No Child Left Behind Act of 2001, states are required to assess students in reading and math and to identify them as below proficient or as proficient or advanced (both considered passing). Because schools are held accountable only for ensuring that students test proficient or better, there have been concerns that a focus on increasing the percentage of students testing proficient might unintentionally lead to fewer students testing at the advanced level. This REL Appalachia report, The Relationship Between Changes in the Percentage of Students Passing and in the Percentage Testing Advanced on State Assessment Tests for Kentucky and Virginia, finds that schools in Kentucky and Virginia with the greatest increases in the percentage testing proficient or better also have the greatest increases in the percentage testing advanced.
NCES 2010456 Mapping State Proficiency Standards Onto NAEP Scales: 2005-2007
This research and development report compares the standards that states use in reporting 4th- and 8th-grade reading and mathematics proficiency using NAEP as a common metric. The state standards used in reporting 2006-07 results were mapped onto the NAEP scales to compare the standards across the states and in relation to the NAEP achievement levels.

The mapping procedure offers an approximate way to assess the relative rigor of the states’ adequate yearly progress (AYP) standards established under the No Child Left Behind Act of 2001. Once mapped, the NAEP scale equivalent scores representing the states' proficiency standards can be compared to indicate the relative rigor of those standards. The term rigor as used here does not imply a judgment about state standards. Rather, it is intended to be descriptive of state-to-state variation in the location of the state standards on a common metric.
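A simplified sketch of the mapping idea: the NAEP scale equivalent of a state standard is the NAEP score at which the share of students scoring at or above it matches the share meeting the state standard. The percentile-matching mechanics and the simulated score distribution below are assumptions for illustration; the report's actual procedure handles sampling and measurement details this sketch ignores:

```python
import numpy as np

def naep_equivalent(naep_scores, pct_meeting_state_standard):
    """NAEP score above which pct_meeting_state_standard percent of
    NAEP scores fall (percentile matching on a common metric)."""
    return np.percentile(naep_scores, 100 - pct_meeting_state_standard)

rng = np.random.default_rng(1)
naep = rng.normal(235, 30, size=10_000)  # hypothetical grade 4 NAEP scores

# A state reporting 70% proficient maps to a lower (less rigorous)
# NAEP equivalent than a state reporting 40% proficient.
print(naep_equivalent(naep, 70) < naep_equivalent(naep, 40))  # True
```

The comparison is monotone by construction: the higher a state's reported proficiency rate, the lower its standard lands on the NAEP scale, which is what the report means by relative rigor on a common metric.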