
Statistical Standards Program
Table of Contents
- Introduction
- 1. Development of Concepts and Methods
- 2. Planning and Design of Surveys
- 3. Collection of Data
- 4. Processing and Editing of Data
- 5. Analysis of Data / Production of Estimates or Projections
  - 5-1 Statistical Analysis, Inference, and Comparisons
  - 5-2 Variance Estimation
  - 5-3 Rounding
  - 5-4 Tabular and Graphic Presentations of Data
- 6. Establishment of Review Procedures
- 7. Dissemination of Data
- Glossary
- Appendix A
- Appendix B
- Appendix C
- Appendix D
- Publication information
ANALYSIS OF DATA / PRODUCTION OF ESTIMATES OR PROJECTIONS

In correlation analysis, Cohen's (1988) conventions for interpreting effect sizes may be used.
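As a sketch of how Cohen's (1988) conventions might be applied in practice: for the correlation coefficient r, Cohen suggested roughly .10 as a small effect, .30 as medium, and .50 as large. The helper below is illustrative only (the function name and the "negligible" label below the small cutoff are assumptions, not part of the standard).

```python
# Hypothetical helper applying Cohen's (1988) benchmarks for the
# correlation coefficient r: small ~ .10, medium ~ .30, large ~ .50.
def classify_r(r: float) -> str:
    r = abs(r)  # sign of the correlation does not affect effect size
    if r >= 0.50:
        return "large"
    if r >= 0.30:
        return "medium"
    if r >= 0.10:
        return "small"
    return "negligible"  # label below the small cutoff is our assumption
```

Such benchmarks are rules of thumb; the substantive context of the survey should guide interpretation.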
When a t test does not yield a statistically significant result, one of the following approaches may be taken:
- Do not report on this test.
- Report that statistically significant differences or effects were not detected.
- If the significance level is between .05 and .10, and the observed differences
are believed to be real, based on research or other evidence, but are not
significant at the .05 level, possibly associated with small sample sizes and/or
large standard errors, this may be noted.
- If the estimate is "unreliable," the reader may be informed that
the standard error is so high that the observed large differences are not statistically
significant.
- If a statistically significant difference for a total group under study is
observed, but similar subgroup differences of the same magnitude are associated
with smaller sample sizes and/or larger standard errors and are not statistically
significant, this may be noted.
- If there are large apparent differences that are not significant, possibly
associated with small sample sizes and/or larger standard errors, this may be
noted.
- Use a 95 percent confidence interval to describe the magnitude of the possible difference or effect.
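The last option above can be sketched numerically: with two independent estimates and their standard errors, a 95 percent confidence interval for the difference is the difference plus or minus 1.96 times the standard error of the difference. The function name and argument names below are illustrative assumptions; independence of the two estimates is also assumed (for dependent samples the covariance term would be needed).

```python
import math

# Sketch: compare two estimates given their standard errors, assuming the
# estimates are independent. Names and interface are illustrative only.
def compare_estimates(est1, se1, est2, se2, z=1.96):
    diff = est1 - est2
    # standard error of a difference of independent estimates
    se_diff = math.sqrt(se1**2 + se2**2)
    # 95 percent confidence interval for the difference (z = 1.96)
    ci = (diff - z * se_diff, diff + z * se_diff)
    # two-sided test at the .05 level: significant if the CI excludes zero
    significant = abs(diff) > z * se_diff
    return diff, ci, significant
```

Reporting the interval itself, not just the significance verdict, conveys the magnitude of the possible difference as the standard recommends.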
References

Agresti, A. (2002).
Benjamini, Y. and Hochberg, Y. (1995). "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing."
Binder, D.A., Gratton, M., Hidiroglou, M.A., Kumar, S., and Rao, J.N.K. (1984). "Analysis of Categorical Data from Surveys with Complex Designs: Some Canadian Experiences."
Cohen, B.H. (2001).
Cohen, J. (1988).
Cohen, J. and Cohen, P. (1983).
Draper, N.R. and Smith, H. (1998).
Hays, W.L. (1994).
Hochberg, Y. and Tamhane, A.C. (1987).
Hoenig, J.M. and Heisey, D.M. (2001). "The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis."
Holt, D., Smith, T.M.F., and Winter, P.D. (1980). "Regression Analysis from Complex Surveys."
Jones, L.V., Lewis, C., and Tukey, J.W. (2001). "Hypothesis Tests, Multiplicity of." In N.J. Smelser and P.B. Baltes (Eds.), International Encyclopedia of the Social and Behavioral Sciences. London: Elsevier Science, Ltd., pp. 7127-7133.
Kish, L. and Frankel, M.R. (1974). "Inferences from Complex Samples."
Kleinbaum, D.G., Kupper, L.L., Muller, K.E., and Nizam, A. (1998).
Lehtonen, R. and Pahkinen, E.J. (1995).
Moore, D.S. (2000).
Neter, J., Kutner, M., Nachtsheim, C., and Wasserman, W. (1996).
Skinner, C.J., Holt, D., and Smith, T.M.F. (Eds.) (1989).