Students' Reports of School Crime: 1989 and 1995

Methodology

I. Background of the School Crime Supplement

Purpose and Sponsorship of the Survey

Criminal activity at school poses an obvious threat to the safety of students and can act as a significant barrier to the education process. To study the relationship between victimization at school and the school environment, and to monitor changes in student experiences with victimization, accurate information regarding its incidence must be collected. The School Crime Supplement (SCS), jointly designed by the Department of Education's National Center for Education Statistics and the Department of Justice's Bureau of Justice Statistics, was developed to address this data need.

Sample Design and Data Collection

Created as an occasional supplement to the annual National Crime Victimization Survey (NCVS), the SCS was fielded in 1989 and 1995. The NCVS collects data on the incidence of criminal activity against households and household members from a nationally representative sample of households (47,000 households in 1989 and 49,000 households in 1995). In both 1989 and 1995, households were sampled using a stratified multistage cluster design.4

NCVS interviews were conducted with each household member who was 12 years old or older. Once all NCVS interviews were completed, household members between the ages of 12 and 19 were given an SCS interview. Only those 12- to 19-year-olds who were in primary or secondary education programs leading to a high school diploma, and who had been enrolled at some time during the 6 months prior to the interview, were administered the SCS questionnaire. Students who were home schooled were not included.

The SCS questionnaire was designed to record the incidence of crime and criminal activity occurring inside a school, on school grounds, or on a school bus during the 6 months preceding the interview. There were 10,449 SCS interviews completed in 1989 and 9,954 in 1995.

Data were collected by the Department of Commerce's Bureau of the Census. In both 1989 and 1995, SCS surveys were conducted between January and June, with one-sixth of the sample being covered each month. Interviews were conducted with the subject student over the telephone or in person. In both years, efforts were made to assure that interviews about student experiences at school were conducted with the students themselves. However, under certain circumstances, interviews with proxy respondents were accepted. These circumstances included interviews scheduled with a child between the ages of 12 and 13 where the parents refused to allow an interview with the child, interviews where the subject child was unavailable during the period of data collection, and interviews where the child was physically or emotionally unable to answer for him or herself.

Telephone interviews accounted for 7,418 of the 9,954 interviews in 1995, and 7,407 of the 10,449 interviews in 1989. Proxy interviews accounted for 363 of the 9,954 interviews in 1995, and 252 of the 10,449 interviews in 1989.

Responses to both the NCVS and the SCS are confidential by law. Interviewers are instructed to conduct interviews in privacy unless respondents specifically agree to permit others to be present. Most interviews for the NCVS and SCS are conducted by telephone, and most questions require "yes" or "no" answers, thereby affording respondents a further measure of privacy. By law, identifiable information about respondents may not be disclosed or released to others for any purpose.

Unit and Item Response Rates

Unit response rates indicate how many sampled units completed interviews. Because interviews with students could only be completed after households had responded to the NCVS, the unit completion rate for the SCS reflects both the household interview completion rate and the student interview completion rate. In the 1989 and 1995 SCS, the household completion rates were 96.5 percent and 95.1 percent, respectively, and the student completion rates were 86.5 percent and 77.5 percent.5 Multiplying the household completion rate by the student completion rate produces an overall SCS response rate of 83.5 percent in 1989 and 73.7 percent in 1995.
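
In terms of the rates above: 0.965 × 0.865 ≈ 0.835 for 1989, and 0.951 × 0.775 ≈ 0.737 for 1995.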

The rate at which respondents provide a valid response to a given item is referred to as its item response rate. Item response rates for items used in this report were high; most items were answered by over 95 percent of all eligible respondents. The only exception was the household income question, which was answered by approximately 90 percent of all households in both years. Income and income-related questions typically have relatively low response rates due to their sensitive nature.

II. Notes Regarding Items Used in the Report

Differences between the 1989 and 1995 NCVS Victimization Items

Respondents to the SCS were asked two separate sets of questions regarding personal victimization. The first set was asked as part of the ongoing NCVS and collected data on up to six separate incidents of victimization reported by each respondent. These questions covered several dimensions of victimization, including the nature of each incident, where it occurred, and what losses resulted. Earlier research on student victimization at school has relied on the NCVS items to develop incident rates.6 However, changes to the basic NCVS between 1989 and 1995 make cross-year comparisons using these items difficult. The 1995 NCVS used a different screening procedure to uncover victimizations than did the 1989 NCVS.

The new screening procedure was meant to elicit a more complete tally of victimization incidents than the one used in the 1989 NCVS. For instance, the 1995 screener specifically asked whether respondents had been raped or otherwise sexually assaulted, whereas the 1989 screener did not. Cross-year changes in reported victimization rates based on the NCVS items, or the lack thereof, may therefore be the result of changes in how questions were asked rather than of actual changes in the incidence of victimization. For more details on this issue, refer to the BJS report, "Effects of the Redesign on Victimization Estimates".7

Because NCVS questionnaires were completed before students were given the SCS questionnaires, changes to the NCVS victimization screening procedures likely affected responses to the 1989 and 1995 SCS victimization items differently. While this assumption cannot be tested, it is nonetheless reasonable to expect that the more detailed victimization screening instrument used in the 1995 NCVS gave 1995 SCS respondents better victimization recall than their 1989 counterparts.

Differences Between NCVS and SCS Items

A less detailed set of victimization questions, which was not modified between 1989 and 1995, was asked in the SCS. These items are more generally comparable across the two years and form the basis of the victimization section of this report. Readers should be aware that these items indicate a higher rate of victimization at school than do the six items included in the NCVS. For instance, using the NCVS items, BJS estimated that 9 percent of students experienced some form of victimization at school during the period covered by the 1989 SCS.8 The 1989 SCS items, asked of the same students, indicate that 14.5 percent of them had experienced some sort of victimization at school.

One contributing factor to the difference may be the sequencing of the NCVS and SCS. Respondents were first asked the NCVS items and then the SCS items. Prompted by the NCVS to think about incidents of victimization, respondents may have recalled more by the time the SCS victimization questions were asked. A second contributing factor may be differences between the victimization questions asked in the two instruments. In the NCVS, respondents were asked about an incident and where it occurred in separate questions; the SCS items asked about victimization and whether it occurred at school in a single question. This may have prompted respondents to report incidents at school that had been forgotten during the NCVS set of questions. Because of these differences in how the items were asked, it is recommended that rates developed from the SCS items not be compared to rates developed from the NCVS items.

Derived Variables

Several variables used in this report were derived by combining information from two or more questions. For the most part, the derived variables and the items that went into them were the same in both the 1989 and 1995 SCS.

The variable violent victimization was derived by combining two questions dealing with incidents at school. The first asked whether the respondent had had anything taken directly by force (question 20a in the 1995 questionnaire and question 26b in the 1989 questionnaire). The second asked whether the respondent had been physically attacked at school, not counting incidents in which something had been taken by force (questions 22a and 28a in 1995 and 1989, respectively). If the respondent said yes to either, he or she was counted as having experienced some form of violent victimization.

Any victimization was derived from the violent victimization item and a question asking whether the respondent had had anything stolen at school (question 21a in 1995 and 27a in 1989). The question about having something stolen excluded incidents in which something had been taken by force. If the respondent said something had been stolen, or had experienced some form of violent victimization, he or she was counted as a victim on the any victimization item. All victimization items were dichotomous: either the respondent had experienced a given form of victimization or had not.
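
As a concrete illustration, the coding rules for both victimization variables can be sketched in a few lines of Python (the argument names here are hypothetical and do not correspond to actual variable names on the SCS files):

    def derive_victimization(taken_by_force, physically_attacked, something_stolen):
        """Apply the coding rules for the two dichotomous victimization items.

        Each argument is True if the respondent answered yes to the
        corresponding SCS question described above.
        """
        # Violent victimization: anything taken by force OR physically attacked.
        violent_victimization = taken_by_force or physically_attacked
        # Any victimization: violent victimization OR something stolen.
        any_victimization = violent_victimization or something_stolen
        return violent_victimization, any_victimization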

The items drug availability: 1995 definition and drug availability: 1989 definition were also derived. In 1995, respondents were asked about the difficulty of obtaining marijuana, crack, cocaine, uppers/downers, LSD, PCP, heroin, or other illegal drugs at school (questions 18b through 18i in the 1995 questionnaire). If students reported that any of these drugs were easy or hard to obtain, they were counted as believing drugs to be available in the drug availability: 1995 definition variable.

The same process went into constructing the drug availability: 1989 definition item. However, because the 1989 questionnaire (questions 22b through 22e) did not ask about the availability of LSD, PCP, or heroin, only the availability of marijuana, crack, cocaine, and uppers/downers was considered. This variable allowed perceptions of drug availability to be compared across the two SCS administrations. For both derived drug availability variables, respondents had to say that all of the drugs covered were impossible to obtain to be counted as believing no drugs to be available.

A large number of respondents indicated that they were not sure whether one or more of the listed drugs were available, or were not sure what one or more of the drugs were. These cases account for the difference in the tables between the number of students believing drugs to be available, the number believing no drugs to be available, and the student population totals. The drug variables were thus trichotomous in form: respondents were coded as believing drugs to be available, not available, or other.
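
The trichotomous coding can likewise be sketched in Python (the response labels are hypothetical stand-ins for the questionnaire's answer categories):

    def drug_availability(responses):
        """Classify a respondent on the trichotomous drug availability item.

        responses holds one answer per drug asked about, each one of
        "easy", "hard", "impossible", "not sure if available", or
        "not sure what it is".
        """
        if any(r in ("easy", "hard") for r in responses):
            return "available"        # any drug easy or hard to obtain
        if all(r == "impossible" for r in responses):
            return "not available"    # every drug impossible to obtain
        return "other"                # unsure about one or more drugs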

The final derived variable, student's race/ethnicity, was a combination of two variables (both from the NCVS but included on the SCS files). The first question asked the race of the student and the second asked whether the student was of Hispanic origin. Respondents who answered yes to the second question were counted as Hispanic. Students who said they were white or black, but not of Hispanic origin, were counted as white/non-Hispanic or black/non-Hispanic. Those of other races who were not Hispanic were counted as other/non-Hispanic.

III. Weighting and Statistical Analysis Procedures

Weighting

The purpose of the SCS data is to make inferences about the 12- to 19-year-old student population (see above for a more complete description of the population). Before such inferences can be drawn, it is important to adjust, or weight, the sample of students to assure that it is similar to the entire population of such students. The weights used in this report are a combination of household-level and person-level adjustment factors. In the NCVS, adjustments were made to account for both household and person non-interviews. Additional factors were then applied to reduce the variance of the estimates by correcting for differences between the sample distributions of age, race, and sex and the known population distributions of these characteristics. The resulting weights were assigned to all interviewed households and persons on the file.

A special weighting adjustment was then performed for the SCS respondents. Non-interview adjustment factors were computed to adjust for SCS interview non-response. This non-interview factor was then applied to the NCVS person level weight for each SCS respondent.
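
In symbols (notation introduced here for illustration, not taken from the source files), the final weight for SCS respondent i is

    $$ w_i^{SCS} = f_{NI} \cdot w_i^{NCVS} $$

where w_i^{NCVS} is the adjusted NCVS person-level weight and f_{NI} is the SCS non-interview adjustment factor.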

Standard Errors

The sample of students selected for each SCS is just one of many possible samples that could have been selected. Estimates from a given SCS student sample may differ from estimates that would have been produced from other student samples. This type of variability, which arises from using a sample of students rather than all students, is called sampling error, and its magnitude is measured by the standard error.

The standard error is a measure of the variability of a parameter estimate. It indicates how much variation there is in the population of possible estimates of a parameter for a given sample size. The probability that a complete census count would differ from the sample estimate by less than 1 standard error is about 68 percent. The chance that the difference would be less than 1.65 standard errors is about 90 percent, and that the difference would be less than 1.96 standard errors, about 95 percent. Standard errors for the percentage estimates are presented in the appendix tables.

Standard errors are typically developed assuming that a sample is drawn purely at random. The sample for the SCS was not a simple random sample, however. To adjust the standard errors to account for the sample design, the Census Bureau developed three generalized variance function (gvf) constant parameters. The gvf represents the curve fitted to the individual standard errors calculated using the Jackknife Repeated Replication technique.9 The three constant parameters (a, b, and c) derived from the curve-fitting process were:

Year              a             b             c
1989              0.00001559    3,108         0.000
1995              0.00006269    2,278         1.804

To adjust the standard errors associated with percentages, the following formula is used:
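
    $$ se(p) = \sqrt{ b \cdot p (1 - p) / y } $$

(a reconstruction assuming the classic Census Bureau generalized variance form for proportions, with b taken from the table above)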

where p is the percentage of interest expressed as a proportion and y is the size of the population to which the percentage applies. Once the standard error of the proportion is estimated, multiply it by 100 to make it applicable to the percentage.

To calculate the adjusted standard errors associated with population counts, the following applies:
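
    $$ se(x) = \sqrt{ a x^{2} + b x + c x^{3/2} } $$

(a reconstruction in which the form of the c term is an assumption; with c = 0, as in 1989, this reduces to the standard two-parameter Census form)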

where x is the estimated number of students who experienced a given event (e.g., violent victimization).
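
Both adjustments can be sketched together in Python (the example values are hypothetical, and the c term retains the assumed form noted above):

    import math

    # gvf constant parameters (a, b, c) from the table above
    GVF_PARAMS = {1989: (0.00001559, 3108, 0.000),
                  1995: (0.00006269, 2278, 1.804)}

    def se_percentage(p, y, year):
        """Adjusted standard error of a percentage.

        p is the percentage of interest expressed as a proportion and
        y is the size of the population to which the percentage applies.
        Multiply the result by 100 to put it on the percentage scale.
        """
        _, b, _ = GVF_PARAMS[year]
        return math.sqrt(b * p * (1 - p) / y)

    def se_count(x, year):
        """Adjusted standard error of an estimated count x."""
        a, b, c = GVF_PARAMS[year]
        return math.sqrt(a * x ** 2 + b * x + c * x ** 1.5)

    # e.g., the standard error, in percentage points, of a 14.5 percent
    # estimate applying to a hypothetical population of 20 million students
    print(100 * se_percentage(0.145, 20_000_000, 1989))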

Statistical Tests

For the most part, statistical tests done for this report rely on Student's t tests, which are designed to determine whether estimates are statistically different from one another. The only exception occurred when student characteristic variables had more than two categories and all of the categories could be rank ordered. These variables were student's age, grade, and household income. When comparing these items to indicators of crime, a different set of tests was used. First, to determine whether a relationship existed between these demographic indicators and the crime indicators, adjusted chi-square tests were employed. If a statistically significant relationship was found, trend tests (weighted logistic regressions) were used to estimate its strength and direction.

Differences discussed in this report are significant at the 95 percent confidence level or higher. Where a lack of difference is noted, the difference failed to reach this threshold. Differences between pairs of estimated percentages were tested using the Student's t statistic, which tests the likelihood that the difference between two estimates is larger than would be expected simply due to sampling error.

To compare the difference between two independent percentage estimates, Student's t is calculated as:
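
    $$ t = (p_1 - p_2) / \sqrt{ se_1^2 + se_2^2 } $$

(the standard two-sample form, consistent with the definitions below)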

where p1 and p2 are the estimated percentages to be compared and se1 and se2 are their corresponding adjusted standard errors.

As the number of comparisons on the same set of data increases, the likelihood that the t value for one or more of the comparisons will exceed 1.96 simply due to sampling error also increases. For a single comparison, there is a 5 percent chance that the t value will exceed 1.96 due to sampling error. For five tests, the risk of getting at least one t value over 1.96 due to sampling error increases to 23 percent. To compensate for this problem when making multiple comparisons on the same set of data, Bonferroni adjustments were made.
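
The five-test figure follows from the complement rule: $1 - (1 - 0.05)^5 = 1 - 0.95^5 \approx 0.23$.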

Bonferroni adjustments essentially deflate the alpha value needed to attain a given confidence level. Bonferroni adjustment factors are determined by establishing the number of comparisons that could be made for a given set of data. The alpha value for a given level of confidence is then divided by the number of possible comparisons, and the resulting alpha value is compared to the table of t statistics to find the associated critical t value.
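
A minimal Python sketch of the adjustment, using a large-sample normal critical value in place of the t table:

    from scipy.stats import norm

    def bonferroni_critical_value(alpha=0.05, n_comparisons=1):
        """Two-tailed critical value after a Bonferroni adjustment.

        The overall alpha is divided by the number of possible
        comparisons before looking up the critical value.
        """
        adjusted_alpha = alpha / n_comparisons
        return norm.ppf(1 - adjusted_alpha / 2)

    print(bonferroni_critical_value(0.05, 1))  # ~1.96
    print(bonferroni_critical_value(0.05, 5))  # ~2.58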

The effect of adjusting comparisons of estimates to account for standard errors and Bonferroni corrections is to occasionally render apparent differences statistically not significant. This helps explain why differences of roughly the same magnitude are statistically significant in some instances but not in others.

Because of the computational complexity associated with weighted logistic regressions (used as trend tests in this report), chi-square tests were used to determine if a relationship existed between student's age, grade in school, or household income and indicators of crime at school. If a chi-square test indicated a significant relationship, a follow-up test was conducted using a weighted logistic regression.

Fellegi adjustments were applied to the chi-square tests to account for the effects of the sample design on the standard errors of the estimates.10 A Fellegi adjustment is typically developed in two stages. The first stage adjusts the variances associated with an estimated cell percentage as follows:
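
    $$ d_1 = var_1 / ( p_1 (1 - p_1) / N ) $$

(a reconstruction: d_1 denotes the resulting cell-level design effect, the design-adjusted variance of the cell percentage divided by the variance expected under simple random sampling)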

where p1 is the estimated weighted percent of cases in a given cell and var1 is the variance of this estimate. N denotes the unweighted population total. Before Fellegi adjustments were made, the cell variances were modified to account for the sample design using the gvf parameters.

Once the variances are adjusted, they are summed across all cells and the resulting sum is then divided by the number of cells. The chi-square estimate based on the weighted cell percentages is then divided by this quotient before determining if it is significant. The equation for the adjustment is:
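
    $$ X^2_{adj} = X^2 \Big/ \left( \frac{1}{I} \sum_{i=1}^{I} d_i \right) $$

(a reconstruction of the average design-effect correction; the exact placement of the unweighted sample size n in the published equation follows Fellegi (1980), cited below)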

where I is the number of cells in the cross tabulation and n is the unweighted sample size.
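
A Python sketch of this correction (function and argument names are hypothetical):

    import numpy as np
    from scipy.stats import chi2

    def fellegi_adjusted_chi_square(stat, cell_props, cell_vars, n_total, df):
        """Divide a chi-square statistic by the mean cell design effect.

        cell_props are the weighted cell proportions, cell_vars their
        design-adjusted variances (already modified with the gvf
        parameters), and n_total the unweighted total used for the
        simple-random-sampling benchmark.
        """
        p = np.asarray(cell_props, dtype=float)
        v = np.asarray(cell_vars, dtype=float)
        deff = v / (p * (1 - p) / n_total)   # cell-level design effects
        adjusted = stat / deff.mean()        # average design-effect correction
        return adjusted, chi2.sf(adjusted, df)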

Weighted logistic regressions used in this report were also developed in several stages. The crime report indicators were dichotomized such that students who gave an affirmative response to the indicator being tested (e.g., responding yes to knowing another student who had brought a gun to school) were coded as ones, and all other students were coded as zeros.11

The resulting logistic regression models took the following form:
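
    $$ Y_i = b_1 + b_2 X_i + u_i $$

(a reconstruction consistent with the terms defined below; the slope coefficient, written here as b_2, is not named in the text)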

where Y is the dependent variable and X is the independent variable (b1 is the intercept term and ui is the residual term). To assure that particular categories of the independent variable were not given undue weight, the entire equation was weighted by the inverse of the estimated variance of the independent variable in the model as follows:
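
    $$ w_i = 1 / \hat{\sigma}_i^2 $$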

where $\hat{\sigma}_i^2$ represents the estimated variance term.
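
For illustration, a weighted regression of this general kind can be fit with statsmodels (a sketch with hypothetical data; the report's actual weights were the inverse estimated variances described above):

    import numpy as np
    import statsmodels.api as sm

    # hypothetical inputs: a dichotomized crime indicator and an ordered
    # student characteristic such as grade level
    y = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 1])
    grade = np.array([7, 7, 8, 8, 9, 9, 10, 10, 11, 11])
    w = np.ones_like(y, dtype=float)   # inverse-variance weights would go here

    X = sm.add_constant(grade)         # intercept plus the independent variable
    model = sm.GLM(y, X, family=sm.families.Binomial(), var_weights=w)
    print(model.fit().summary())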


FOOTNOTES:

[4] For more information regarding the sampling approach used in the National Crime Victimization Survey, refer to U.S. Department of Justice, Bureau of Justice Statistics, "Criminal Victimization in the United States, 1994", NCJ-162126 (Washington, DC: 1997).

[5] It is assumed that the response rate for households with students between the ages of 12 and 19 is the same as that of all households. The reported unit response rates are unweighted.

[6] L. Bastian and B. Taylor, School Crime: A National Crime Victimization Survey Report.

[7] C. Kindermann, J. Lynch, and D. Cantor. Effects of the Redesign on Victimization Estimates.

[8] L. Bastian and B. Taylor, School Crime: A National Crime Victimization Survey Report.

[9] A more detailed description of the generalized variance function constant parameters developed for the NCVS and SCS can be found in the previously cited report, "Criminal Victimization in the United States, 1994".

[10] Fellegi, I.P. "Approximate Tests of Independence and Goodness of Fit Based on Stratified Multistage Samples." Journal of the American Statistical Association, 1980, pp. 273-279.

[11] Note that the crime indicators in the chi-square tests were dichotomized in the same manner.

