A FORTRAN program is described that calculates the probability of an observed difference between agreement measures obtained from two independent sets of raters.
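The source FORTRAN code is not reproduced here, but the computation the abstract describes can be sketched in Python. The example below computes Cohen's kappa for each of two independent rater pairs from their contingency tables and tests the difference with a normal approximation; the variance formula used is a simplified large-sample approximation, not the full Fleiss, Cohen, and Everitt (1969) expression, and the function names are illustrative, not taken from the program described.

```python
import math

def cohens_kappa(table):
    """Cohen's kappa and an approximate large-sample variance for a
    square contingency table (rows: rater A, columns: rater B)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n           # observed agreement
    row = [sum(table[i]) / n for i in range(k)]           # row marginals
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    pe = sum(row[i] * col[i] for i in range(k))           # chance agreement
    kappa = (po - pe) / (1 - pe)
    # Simplified large-sample variance (an assumption for illustration,
    # not the full Fleiss-Cohen-Everitt formula):
    var = po * (1 - po) / (n * (1 - pe) ** 2)
    return kappa, var

def kappa_difference_test(table1, table2):
    """z statistic and two-sided p-value for the difference between two
    kappas derived from independent sets of raters."""
    k1, v1 = cohens_kappa(table1)
    k2, v2 = cohens_kappa(table2)
    z = (k1 - k2) / math.sqrt(v1 + v2)
    p = math.erfc(abs(z) / math.sqrt(2))   # two-tailed p under N(0, 1)
    return z, p

# Hypothetical 2x2 tables for two independent pairs of raters
z, p = kappa_difference_test([[20, 5], [10, 15]], [[30, 2], [3, 15]])
```

Because the two rater sets are independent, the variances of the two kappa estimates simply add in the denominator of the z statistic.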