Research article. First published online September 1988.
A Computer Program for Determining the Reliability of Dimensionally Scaled Data when the Numbers and Specific Sets of Examiners May Vary at Each Assessment
Using a variant of the intraclass correlation coefficient (ICC), this program computes the reliability of dimensionally scaled variables when both the number and specific set of judges vary from one assessment to the next.
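The reliability model the abstract describes, in which each target may be scored by a different number (and set) of judges, is naturally handled by a one-way random-effects ICC, where rater identity is absorbed into within-target error. As a minimal sketch (an assumption about the ICC variant involved, not the program's published algorithm), the coefficient can be computed from the between-targets and within-target mean squares, with the average group size corrected for unequal numbers of judges:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC for dimensionally scaled ratings.

    ratings: list of lists; ratings[i] holds the scores assigned to
    target i by its (possibly unique) set of judges. Group sizes may
    differ from target to target.
    """
    groups = [g for g in ratings if len(g) >= 1]
    n = len(groups)                          # number of targets
    N = sum(len(g) for g in groups)          # total number of ratings
    grand = sum(sum(g) for g in groups) / N  # grand mean

    # Between-targets and within-target sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (N - n)

    # Effective group size, corrected for unequal numbers of judges
    k0 = (N - sum(len(g) ** 2 for g in groups) / N) / (n - 1)

    return (ms_between - ms_within) / (ms_between + (k0 - 1) * ms_within)
```

For example, `icc_oneway([[1, 1, 2], [4, 5], [8, 8, 9, 8]])` computes a reliability near 1, reflecting close agreement among each target's judges despite the varying group sizes.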