NAEP provides information about the treatment of specific items in NAEP scales and about evidence of differential item functioning (DIF) for specific items. Using current psychometric methodology, NAEP examines and controls the assumption of conditional independence and the assumption that the data fit the Item Response Theory (IRT) models in several ways. The assumptions are examined by considering the results of DIF analyses, item fit statistics, and plots of empirical and theoretical item response functions. They are controlled by treating missing and "not reached" responses in reasonable ways, maintaining the context and administration conditions of items across assessments, collapsing categories of polytomous items when appropriate, combining two or more items into a single item, or deciding on the basis of the data whether to include an item in a scale or delete it.

The identification and amelioration of violations of IRT assumptions are an area of ongoing research in educational measurement. For example, recent studies have investigated local item dependence (Yen 1993; Habing and Donoghue in press), assessment of the fit of the item response function (Orlando and Thissen 2000; Donoghue and Hombo 1999; Hombo, Donoghue, and Thayer 2000), item parameter drift (Donoghue and Isham 1998), and the detection and description of multidimensionality (e.g., Roussos, Stout, and Marden 1998; Zhang and Stout 1999).
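The comparison of empirical and theoretical item response functions mentioned above can be illustrated with a minimal sketch. The code below assumes a three-parameter logistic (3PL) item response model; the binning scheme, parameter values, and simulated data are purely illustrative and are not NAEP's operational procedure. Examinees are grouped into ability bins, and the observed proportion correct in each bin is compared with the model-predicted probability at the bin's mean ability.

```python
import math
import random

def irf_3pl(theta, a, b, c):
    # 3PL item response function:
    # P(correct | theta) = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def empirical_vs_theoretical(thetas, responses, a, b, c, n_bins=10):
    # Sort examinees by ability, split into equal-size bins, and for each
    # bin record (mean ability, observed proportion correct, model prediction).
    pairs = sorted(zip(thetas, responses))
    size = len(pairs) // n_bins
    rows = []
    for i in range(n_bins):
        chunk = pairs[i * size:(i + 1) * size]
        mean_theta = sum(t for t, _ in chunk) / len(chunk)
        observed = sum(r for _, r in chunk) / len(chunk)
        rows.append((mean_theta, observed, irf_3pl(mean_theta, a, b, c)))
    return rows

# Illustrative parameters and simulated data (not operational NAEP values).
random.seed(0)
a, b, c = 1.2, 0.0, 0.2
thetas = [random.gauss(0.0, 1.0) for _ in range(5000)]
responses = [1 if random.random() < irf_3pl(t, a, b, c) else 0 for t in thetas]

table = empirical_vs_theoretical(thetas, responses, a, b, c)
# When the data are generated from the model itself, the observed and
# predicted proportions should track each other closely across bins.
max_gap = max(abs(obs - pred) for _, obs, pred in table)
```

In operational use the interest is in the reverse situation: a large, systematic gap between the empirical proportions and the fitted curve in some ability range is evidence of item misfit, and such items are candidates for the treatments described above (collapsing categories, combining items, or deletion from the scale).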