Trend scoring is used in NAEP to compare the consistency of scoring over time (i.e., cross-year interrater agreement). During trend scoring, the NAEP electronic scoring system allows for the presentation of a pool of scored responses from a prior assessment to current scorers. Comparing current scores to the scores given in the prior assessment makes it possible to generate reports that evaluate scoring consistency over time for a specific NAEP item.
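The cross-year comparison can be sketched as a simple exact-agreement computation over paired scores. This is an illustrative sketch, not NAEP's actual reporting logic; the score values and the function name are assumptions.

```python
def exact_agreement(prior_scores, current_scores):
    """Percentage of trend responses for which the current score
    matches the score assigned in the prior assessment."""
    if len(prior_scores) != len(current_scores):
        raise ValueError("score lists must align response-by-response")
    matches = sum(p == c for p, c in zip(prior_scores, current_scores))
    return 100.0 * matches / len(prior_scores)

# Hypothetical scores for six trend responses (labels are illustrative).
prior = [1, 2, 2, 3, 1, 2]
current = [1, 2, 3, 3, 1, 2]
print(round(exact_agreement(prior, current), 1))  # 83.3
```

A report over a full trend set would aggregate this statistic per item and per scorer group.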
The trend set is an important addition to the traditional paper training sets. Trend responses are drawn at random for each item from the prior assessment's scoring. Each trend set is composed of 600 responses and is available to trainers for review before the project (on compact disc) and during it (in the electronic scoring system).
After thorough preparation using the paper training sets, the trainer reviews a broader range of responses (via compact disc) using the trend responses specific to that item. This occurs during the trainer preparation period (i.e., before the scoring window). It is important to note that the trend scores do not represent validity; the responses may have been scored correctly or incorrectly. Therefore, when trainers review trend responses during the preparation phase, trend scores should be treated as pattern markers: indicators of scoring patterns that may or may not have been evident within the paper training sets.
After being trained on a new item, the scorers are given a trend set of 200 responses to score. If the scorers as a group meet minimum performance criteria, they begin scoring current-year responses. Performance criteria are:
If they do not, the process is repeated until they meet the criteria. Clarifying training, or retraining, is often conducted before the next attempt.
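The qualify-or-retrain cycle described above can be sketched as a loop. The 80% agreement threshold, the attempt limit, and the `retrain` hook are illustrative assumptions only; the actual NAEP performance criteria are not specified here.

```python
def qualify(score_trend_set, retrain, threshold=80.0, max_attempts=3):
    """Repeat the 200-response trend set until the scorer group meets
    the (assumed) agreement threshold, retraining between attempts."""
    for attempt in range(1, max_attempts + 1):
        agreement = score_trend_set()  # percent agreement with prior scores
        if agreement >= threshold:
            return attempt             # group may begin current-year scoring
        retrain()                      # clarify training before the next try
    raise RuntimeError("group did not meet criteria within allotted attempts")

# Simulated attempts: 72% on the first pass, 85% after retraining.
results = iter([72.0, 85.0])
print(qualify(lambda: next(results), retrain=lambda: None))  # 2
```

The design point is simply that trend scoring acts as a gate: the group does not touch current-year responses until the criteria are met.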
Trend responses can serve other functions as well.
During trainer preparation, the trainers may:
During scorer training, trend responses may be:
As a calibration tool, trend responses may be:
During scorer retraining, trend responses may be: