
Database Quality Control: 2013 to 2019


2019 Summary Comparison Tables

2018 Summary Comparison Tables

2017 Summary Comparison Tables

2016 Summary Comparison Tables

2015 Summary Comparison Tables

2014 Summary Comparison Tables

2013 Summary Comparison Tables

Beginning in 2013, a new automated quality control process replaced the earlier manual process. This process compares the item-by-item percentage summary statistics that the NAEP Materials Processing and Scoring Contractor generates from its processed raw data with the same statistics generated from the database of the NAEP Design, Analysis, and Reporting Contractor. For every item, the response percentages for each category are compared between the two systems to ensure the accurate transmission of the NAEP Materials Processing and Scoring Contractor's processed data to the NAEP Design, Analysis, and Reporting Contractor's final database.

The process involves three steps.

  1. The NAEP Materials Processing and Scoring Contractor computes frequency distributions for every student, school, and teacher question and then delivers these statistics, along with its final data file, to the NAEP Design, Analysis, and Reporting Contractor.
  2. The NAEP Design, Analysis, and Reporting Contractor independently produces frequency distributions from their database after processing the data.
  3. Software programs exhaustively compare the two sets of frequency distributions, checking differences in frequencies, percentages, averages, and medians, to ensure reasonable accuracy.
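
For illustration, the following is a minimal sketch in Python of the step-3 comparison, assuming each contractor's file has already been reduced to item-level response-category percentages. The item names, dictionary layout, and half-point tolerance are assumptions made for demonstration; the actual software, file formats, and flagging rules are not described in this documentation.

    # Hypothetical item-level response-category percentages from the
    # two contractors' files (values and item names are illustrative).
    scoring = {
        "M1_Q01": {"A": 24.8, "B": 50.1, "C": 20.0, "Omitted": 5.1},
        "M1_Q02": {"Correct": 61.3, "Incorrect": 33.5, "Omitted": 5.2},
    }
    reporting = {
        "M1_Q01": {"A": 24.9, "B": 50.0, "C": 20.0, "Omitted": 5.1},
        "M1_Q02": {"Correct": 61.3, "Incorrect": 33.6, "Omitted": 5.1},
    }

    TOLERANCE = 0.5  # assumed threshold: flag categories differing by more

    for item in sorted(set(scoring) & set(reporting)):
        for category in sorted(set(scoring[item]) & set(reporting[item])):
            diff = abs(scoring[item][category] - reporting[item][category])
            if diff > TOLERANCE:
                print(f"FLAG {item}/{category}: differs by {diff:.2f} points")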

It is important to note that the NAEP Materials Processing and Scoring Contractor's database contains data for every processed booklet, digital test form, and questionnaire, while the final database is a subset of that data. For example, students may be removed from analysis because they are ineligible or excluded: ineligible students are removed from the final database when the data are merged with sampling weights, whereas excluded students remain in the database so that some information about them can still be examined. Similarly, collected teacher or school questionnaire information may be removed from analysis because the teacher or school had no students who were assessed. For student data, the excluded student rate is approximately two percent. The goal of the exercise is to ensure that there are no unreasonable or unexplainable differences in the summaries of responses; to date, no comparison has raised such a concern.
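
As a small illustration of why the two databases can legitimately differ in size, the sketch below applies the subsetting rule described above to hypothetical student records. The record structure and status flags are assumptions for demonstration only, not the contractors' actual variable names.

    # Hypothetical processed student records (status flags are assumed).
    students = [
        {"booklet_id": 1001, "status": "assessed"},
        {"booklet_id": 1002, "status": "ineligible"},  # dropped when weights are merged
        {"booklet_id": 1003, "status": "excluded"},    # retained so exclusion data can be examined
    ]

    # The final database keeps assessed and excluded students but drops
    # ineligible ones, so item-level counts can legitimately differ.
    final_database = [s for s in students if s["status"] != "ineligible"]
    print(len(final_database))  # 2 of the 3 processed records remain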

The summary comparison tables summarize the differences in percentages for the NAEP assessments. For each subject/grade/instrument combination, the statistics summarize the percentage differences across all of the questions in that group. The reported statistics are the minimum, maximum, average, and median difference, each computed from the absolute values of the response-category-level differences. Summary comparison tables from the 2013 through 2019 NAEP assessments are available via the links listed above.
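
As an illustration of how one row of such a table could be derived, the sketch below collapses a group's absolute response-category-level differences into the four reported statistics. The input values are invented for demonstration; the actual table-generation software is not described in this documentation.

    from statistics import mean, median

    # Hypothetical absolute response-category-level differences for all
    # questions in one subject/grade/instrument group (values invented).
    abs_diffs = [0.0, 0.1, 0.0, 0.1, 0.0, 0.1]

    # One row of a summary comparison table.
    row = {
        "minimum": min(abs_diffs),
        "maximum": max(abs_diffs),
        "average": mean(abs_diffs),
        "median": median(abs_diffs),
    }
    print(row)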


Last updated 23 May 2023 (SK)