Crime & Safety Surveys (CSS)



5. DATA QUALITY AND COMPARABILITY

Sampling Error


Standard errors of percentages and population counts were calculated with the Taylor series approximation method, using the PSU and stratum variables available in the data set, and with the generalized variance function (GVF) constant parameters. The GVF represents a curve fitted to the individual standard errors calculated using the jackknife repeated replication technique. For more detailed information, see the National Crime Victimization Survey documentation.
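
As a rough illustration, the sketch below applies the generalized variance function in its standard forms, where the standard error of a count x is sqrt(a*x^2 + b*x) and the standard error of a percentage p with base y is sqrt((b/y)*p*(100 - p)). The parameter values, function names, and example figures are hypothetical placeholders, not the published NCVS constants, which vary by survey year and crime type.

```python
import math

# Hypothetical GVF parameters for illustration only; the actual constants
# are published by year and crime type in the NCVS technical documentation.
A = -0.0000116
B = 3725.0

def se_count(x, a=A, b=B):
    """GVF standard error of an estimated population count x."""
    return math.sqrt(a * x**2 + b * x)

def se_percent(p, base, b=B):
    """GVF standard error of a percentage p (0-100) with denominator `base`."""
    return math.sqrt((b / base) * p * (100.0 - p))

# Example: a 25 percent estimate with a hypothetical base of 1,000,000 students
print(round(se_percent(25.0, 1_000_000), 2))  # ~2.64
```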


Nonsampling Error

The key sources of nonsampling error in the SCS are described below.

Coverage error. Coverage error in the NCVS (and therefore in the SCS) would result from coverage error in the census and in the supplemental procedures, and it is addressed at that level. For more detailed information, see the National Crime Victimization Survey documentation.

Unit nonresponse. Because interviews with students can only be completed after households have responded to the NCVS, the unit completion rate for the SCS reflects both the household interview completion rate and the student interview completion rate (see table SCS-1). Thus, the overall unweighted SCS response rate is calculated by multiplying the household completion rate by the student completion rate.
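
For example, applying this calculation to the published 2017 rates in table SCS-1 gives 0.769 x 0.525 = 0.404, or about 40 percent. The minimal sketch below carries out the same arithmetic using the table values; note that products of the rounded component rates can differ from the published overall rates by about 0.1 percentage point.

```python
# Published unweighted completion rates from table SCS-1 (percent).
rates = {2015: (82.5, 57.8), 2017: (76.9, 52.5)}  # (household, student)

for year, (household, student) in rates.items():
    # The overall rate is the product of the two stage-level rates.
    overall = household * student / 100
    print(year, round(overall, 1))
# 2015 -> 47.7; 2017 -> 40.4 (table SCS-1 shows 40.3: the published
# component rates are rounded, so the product can differ by ~0.1 point)
```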

NCES Statistical Standard 4-4-1 requires that any survey stage of data collection with a unit or item response rate of less than 85 percent be evaluated for potential nonresponse bias. The Census Bureau completed a unit nonresponse bias analysis to determine the extent to which bias might be present in estimates produced using SCS data. The analysis, which included respondents to both versions of the survey and accounted for nonresponse on both the NCVS and the SCS, found evidence of potential bias for both portions of the interview. For the 2017 SCS interview, the Census Bureau's analysis found that response rates differed significantly across race/ethnicity and census region subgroups, while respondent and nonrespondent distributions differed significantly only for the race/ethnicity subgroups. However, after using weights adjusted for person nonresponse, there is no evidence that these response differences introduced nonresponse bias into the final victimization estimates.
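
As a rough sketch of the kind of distributional comparison such an analysis involves, the example below contrasts subgroup shares among respondents with shares in the full eligible sample, using entirely hypothetical counts; a substantial gap for a subgroup would flag potential nonresponse bias that the person-level weighting adjustments must then correct.

```python
# Hypothetical counts for illustration; the actual analysis is conducted by
# the Census Bureau on the full eligible NCVS/SCS sample.
eligible = {"urban": 4200, "suburban": 5100, "rural": 2700}
respondents = {"urban": 2200, "suburban": 3000, "rural": 1900}

n_eligible = sum(eligible.values())
n_resp = sum(respondents.values())

for group in eligible:
    share_elig = eligible[group] / n_eligible
    share_resp = respondents[group] / n_resp
    # A large gap between the two shares signals potential nonresponse bias.
    print(f"{group}: eligible {share_elig:.1%}, respondents {share_resp:.1%}, "
          f"gap {share_resp - share_elig:+.1%}")
```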

For the 2015 NCVS interview, the Census Bureau found evidence of unit nonresponse bias within the Hispanic origin, urbanicity, region, and age subgroups. Within the SCS portion of the interview, the race, urbanicity, region, and age subgroups showed significant unit nonresponse bias. Further analysis indicated that respondents who were age 14 or lived in rural areas had significantly higher nonresponse bias estimates than other age and urbanicity subgroups, while respondents who were Asian or lived in the Northeast had significantly lower nonresponse bias estimates than other race and region subgroups. Based on this analysis, the Census Bureau concluded that there are significant nonresponse biases in the 2015 SCS data, and readers should use caution when comparing responses among subgroups in the SCS.

Due to the low student response rates in 2005, 2007, and 2009, unit nonresponse bias analyses were commissioned for those years. In 2009, the analysis found evidence of potential bias for the race/ethnicity and urbanicity variables: White students and students of other races/ethnicities had higher response rates than Black and Hispanic students, and respondents from households located in rural areas had higher response rates than those from households located in urban areas. However, when responding students were compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

In 2007, the analysis of unit nonresponse bias found evidence of bias for the race, household income, and urbanicity variables. Hispanic respondents had lower response rates than respondents of other races/ethnicities; respondents from households with an income of $25,000 or more had higher response rates than those from households with incomes of less than $7,500; and respondents living in urban areas had lower response rates than those living in rural areas. However, when responding students were compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

The analysis of unit nonresponse bias in 2005 also found evidence of bias for the race, household income, and urbanicity variables. White, non-Hispanic and other, non-Hispanic respondents had higher response rates than Black, non-Hispanic and Hispanic respondents. Respondents from households with incomes of $35,000-$49,999 or $50,000 or more had higher response rates than those from households with incomes of less than $7,500, $7,500-$14,999, $15,000-$24,999, or $25,000-$34,999. Respondents living in urban areas had lower response rates than those living in rural or suburban areas.


Item nonresponse. Item response rates for the SCS have been high. In all administrations, most items were answered by over 95 percent of all eligible respondents, with a few exceptions. One notable exception was the household income question, which was answered by about 80 percent of all households in 2007; about 74 percent of all households in 2005; and about 78, 80, 86, 90, and 90 percent of all households in 2003, 2001, 1999, 1995, and 1989, respectively. Due to their sensitive nature, income and income-related questions typically have relatively lower response rates than other items.

Beginning with the 2009 SCS, detail on the reasons for nonresponse was collected. Where missing data were once coded collectively as residue, using 8's or a combination of 8's and 9's, data categories are now available to indicate specific types of missing data. Potential responses to the SCS include valid values; explicit "don't know"; blind "don't know"; blind refusals; residue; and out of universe/off path. Users should note that this level of detail is available only for the SCS supplement, not for the main NCVS.
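
As an illustration of how these categories might be handled in analysis, the sketch below maps response codes to the categories listed above. The numeric codes and the classification logic are hypothetical; users should consult the SCS codebook for the codes actually assigned to each item.

```python
# Hypothetical numeric codes for illustration; actual codes vary by item and
# are documented in the SCS codebook. Before 2009, the non-valid categories
# below were coded collectively as residue (8's, or 8's and 9's).
CATEGORY_BY_CODE = {
    1: "valid value",
    2: "explicit don't know",
    3: "blind don't know",
    4: "blind refusal",
    8: "residue",
    9: "out of universe / off path",
}

def item_status(code):
    """Classify a coded response for analysis purposes."""
    category = CATEGORY_BY_CODE.get(code, "residue")
    if category == "valid value":
        return "analyzable"
    if category == "out of universe / off path":
        return "excluded"        # the item did not apply to this respondent
    return "item nonresponse"    # all remaining categories are missing data
```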

Measurement error. Measurement error can result from respondents' different understandings of what constitutes a crime, memory lapses, and reluctance or refusal to report incidents of victimization. A change in the screener procedure between 1989 and 1995 was designed to result in the reporting of more incidents of victimization, more detail on the types of crime, and presumably more accurate data in 1995 than in 1989. (See “Data Comparability” below for further explanation.) Differences in the questions asked in the NCVS and SCS, as well as the sequencing of questions (SCS after NCVS), might have also led to better recall in the SCS in 1995.

Data Comparability

The SCS questionnaire has been modified in several ways since its inception, as has the larger NCVS. Users making comparisons of data across years should be aware of the changes detailed below and their impact on data comparability. In 1989 and 1995, respondents to the SCS were asked two separate sets of questions regarding personal victimization. The first set of questions was part of the main NCVS, and the second set was part of the SCS. When examining data from either 1989 or 1995, the following have an impact on the comparability of data on victimization: (1) differences between years in the wording of victimization items in the NCVS as well as the SCS questionnaires; and (2) differences between SCS and NCVS items collecting similar data.

NCVS design changes. The NCVS was redesigned in 1992; the changes made to the screening procedure at that time make comparisons between 1989 data and data from later years difficult.

Due to the redesign, the victimization screening procedure used in 1995 and later years was meant to elicit a more complete tally of victimization incidents than the one used in 1989. For instance, it specifically asked whether respondents had been raped or otherwise sexually assaulted, whereas the 1989 screener did not. See Effects of the Redesign on Victimization Estimates (Kindermann, Lynch, and Cantor 1997) for more details.

In 2003, in accordance with changes to the Office of Management and Budget's standards for the classification of federal data on race and ethnicity, the NCVS item on race/ethnicity was modified. A question on Hispanic origin is now followed by a question on race. The new race question allows the respondent to choose more than one race and delineates Asian as a separate category from Native Hawaiian or Other Pacific Islander. An analysis conducted by the Demographic Surveys Division at the U.S. Census Bureau showed that the new race question had very little impact on the aggregate racial distribution of NCVS respondents, with one exception: there was a 2-percentage-point decrease in the percentage of respondents who reported themselves as White. Due to changes in race/ethnicity categories, comparisons of race/ethnicity across years should be made with caution.

In 2007, three changes were made to the NCVS for budgetary reasons. First, the sample was reduced by 14 percent beginning in July 2007. Second, to offset the impact of the sample reduction, first-time interviews, which traditionally are not used in the production of NCVS estimates, were included. Because respondents tend to report more victimization during first-time interviews than in subsequent interviews (in part because new respondents tend to recall events as having taken place more recently than they actually occurred), weighting adjustments were used to counteract a possible upward bias in the survey estimates. Using first-time interviews helped to ensure that the overall sample size would remain consistent with that in previous years. Lastly, in July 2007, the use of computer-assisted telephone interviewing (CATI) was discontinued, and interviewing was conducted using only computer-assisted personal interviewing (CAPI).

SCS design changes. The SCS questionnaire wording has been modified in several ways since its inception. Modifications have included changes in the series of questions pertaining to "fear" and "avoidance" in every survey year beginning with 1995; changes in the definition of "at school" in 2001; changes in the introduction to, definition of, and placement of the item about "gangs" in 2001; and expansion of the single "bullying" question into a series of questions in 2005, with the addition of cyber-bullying in 2007. For more details, see Student Victimization in U.S. Schools: Results From the 2005 School Crime Supplement to the National Crime Victimization Survey (Bauer et al. 2008) and Indicators of School Crime and Safety: 2008 (Dinkes, Kemp, and Baum 2009).

In addition, the reference time period for the 2007 SCS was revised from "the last 6 months" to "this school year." The change in reference period resulted in a change in eligibility criteria for participation in the 2007 SCS to include household members between ages 12 and 18 who had attended school at any time during the school year instead of during the 6 months preceding the interview, as in earlier surveys.

Comparisons with related surveys. NCVS/SCS data have been analyzed and reported in conjunction with several other surveys on crime, safety, and risk behaviors. (See Indicators of School Crime and Safety publications.) These include both NCES and non-NCES surveys. The four NCES surveys are the School Safety and Discipline Questionnaire of the 1993 National Household Education Survey; the Teacher Questionnaire (specifically, the teacher victimization items) of the 1993-94, 1999-2000, 2003-04, 2007-08, and 2011-12 Schools and Staffing Survey; the Fast Response Survey System's Principal/School Disciplinarian Survey, conducted periodically; and the School Survey on Crime and Safety (SSOCS), conducted in 1999-2000, 2003-04, 2005-06, 2007-08, 2009-10, 2015-16, and 2017-18.

The non-NCES surveys and studies include the Youth Risk Behavior Surveillance System (YRBSS), a national and state-level epidemiological surveillance system developed by the Centers for Disease Control and Prevention (CDC) to monitor the prevalence of youth behaviors that most influence health; the School Associated Violent Death Study (SAVD), a study developed by the CDC (in conjunction with the U.S. Departments of Education and Justice) to describe the epidemiology of school-associated violent death in the United States and identify potential risk factors for these deaths; the Supplementary Homicide Reports (SHR), a part of the Uniform Crime Reporting (UCR) program conducted by the Federal Bureau of Investigation to provide incident-level information on criminal homicides; and the Web-based Injury Statistics Query and Reporting System Fatal (WISQARS Fatal), which provides data on injury-related mortality collected by the CDC.

Readers should exercise caution when doing cross-survey analyses using these data. While some of the data were collected from universe surveys, most were collected from sample surveys. Also, some questions may appear the same across surveys when, in fact, they were asked of different populations of students, in different years, at different locations, and about experiences that occurred within different periods of time. Because of these variations in collection procedures, timing, phrasing of questions, and so forth, the results from the different sources are not strictly comparable.

Table SCS-1. Unweighted household, student, and overall unit response rates (percent) for the School Crime Supplement: 2001–17

Year Household response rate Student response rate Overall response rate
2001 93.1 77.0 71.7
2003 91.9 69.6 64.0
2005 90.6 61.7 56.0
2007 90.4 58.3 52.7
2009 91.7 55.9 51.3
2011 90.7 63.3 57.4
2013 85.5 59.9 51.2
2015 82.5 57.8 47.7
2017 76.9 52.5 40.3
SOURCE: United States Department of Justice, Office of Justice Programs, Bureau of Justice Statistics, National Crime Victimization Survey, School Crime Supplement.
