The observed degree of agreement between judges is commonly summarized using Cohen's (1960) kappa. Previous research has related values of kappa to the marginal distributions of the agreement matrix. This manuscript provides an approach for calculating maximum values of kappa as a function of the observed agreement proportions between judges. Solutions are provided separately for matrices of size 2 x 2, 3 x 3, 4 x 4, and general k x k; plots are provided for the 2 x 2, 3 x 3, and 4 x 4 cases.
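As background for the quantities discussed above, the following sketch computes Cohen's (1960) kappa for a k x k agreement table, along with the familiar marginal-constrained maximum of kappa (the largest kappa attainable when the judges' marginal distributions are held fixed). The function name and NumPy implementation are illustrative, not taken from the manuscript, and this is the classic marginal-based maximum rather than the manuscript's solutions in terms of observed agreement proportions.

```python
import numpy as np

def kappa_and_max(table):
    """Cohen's kappa and its marginal-constrained maximum.

    `table` is a k x k contingency table of counts, rows indexing one
    judge's categories and columns the other's.

    kappa     = (p_o - p_e) / (1 - p_e), where p_o is the observed
                agreement (diagonal mass) and p_e the chance agreement
                implied by the marginals.
    kappa_max = (p_max - p_e) / (1 - p_e), where p_max = sum_i
                min(row_i, col_i) is the largest diagonal mass any
                table with these marginals can place on the diagonal.
    """
    p = np.asarray(table, dtype=float)
    p = p / p.sum()                       # convert counts to proportions
    rows, cols = p.sum(axis=1), p.sum(axis=0)
    p_o = np.trace(p)                     # observed agreement
    p_e = rows @ cols                     # chance-expected agreement
    p_max = np.minimum(rows, cols).sum()  # max achievable agreement
    return (p_o - p_e) / (1 - p_e), (p_max - p_e) / (1 - p_e)
```

For example, the 2 x 2 table [[40, 10], [5, 45]] has p_o = .85 and p_e = .50, giving kappa = .70, while its marginals permit at most p_max = .95, so kappa_max = .90.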
References
1. Brennan, R. L., & Prediger, D. J. (1981). Coefficient kappa: Some uses, misuses, and alternatives. Educational and Psychological Measurement, 41, 687-699.
2. Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.
3. Collis, G. M. (1985). Kappa, measures of marginal symmetry and intraclass correlations. Educational and Psychological Measurement, 45, 55-62.
4. Hanley, J. A. (1987). Standard error of the kappa statistic. Psychological Bulletin, 102, 315-321.
5. Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.
6. Social Scisearch: File 7 [Machine-readable data file] (1988). Philadelphia, PA: Institute for Scientific Information (Producer); Palo Alto, CA: Dialog Information Services, Inc. (Distributor).