This paper is concerned with the measurement of agreement between two observers classifying items into nominal categories, with one of the observers being viewed as the "standard". An asymmetric version of Cohen's Kappa is proposed as an appropriate measure. Properties of this measure are outlined, and a numerical example is given.
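As background, the standard symmetric Kappa of Cohen (1960) compares observed agreement $p_o$ with chance agreement $p_e$ via $\kappa = (p_o - p_e)/(1 - p_e)$. The sketch below computes this standard coefficient from a confusion table; it is illustrative only and does not implement the asymmetric variant proposed in the paper, and the example table is invented for demonstration.

```python
def cohens_kappa(confusion):
    """Standard (symmetric) Cohen's kappa from a square confusion table.

    confusion[i][j] is the count of items the 'standard' observer places
    in category i and the other observer places in category j.
    This is Cohen's original coefficient, not the asymmetric measure
    proposed in this paper.
    """
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    # Observed proportion of agreement: the diagonal of the table.
    p_o = sum(confusion[i][i] for i in range(k)) / n
    # Marginal totals for each observer.
    row_totals = [sum(confusion[i]) for i in range(k)]
    col_totals = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    # Chance agreement: product of the two observers' marginal proportions.
    p_e = sum(row_totals[i] * col_totals[i] for i in range(k)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 3-category table: rows = standard, columns = judge.
table = [[20, 5, 0],
         [3, 15, 2],
         [1, 4, 10]]
print(cohens_kappa(table))  # p_o = 0.75, p_e = 0.35, kappa ≈ 0.615
```

Kappa equals 1 under perfect agreement and 0 when agreement is no better than chance; the asymmetric version discussed in the paper modifies how the two observers' roles enter this calculation.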
References
1. Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.
2. Fleiss, J. L. (1981). Statistical methods for rates and proportions. New York: Wiley.
3. Hollenbeck, A. R. (1978). Problems of reliability in observational research. In G. P. Sackett (Ed.), Observing behavior: Data collection and analysis methods, vol. II. Baltimore, MD: University Park Press, pp. 79-98.
4. Light, R. J. (1971). Measures of response agreement for qualitative data: Some generalizations and alternatives. Psychological Bulletin, 76, 365-377.
5. Meister, D. (1985). Behavioral analysis & measurement methods. New York: Wiley.
6. Reynolds, H. T. (1977). The analysis of cross-classifications. New York: Free Press.
7. Reynolds, H. T. (1984). Analysis of nominal data (2nd ed.). Sage University Paper series on Quantitative Applications in the Social Sciences, 07-001. Newbury Park, CA: Sage Publications.
8. Wackerly, D. D., McClave, J. T., and Rao, P. V. (1978). Measuring nominal scale agreement between a judge and a known standard. Psychometrika, 43, 213-223.
9. Williams, G. W. (1976). Comparing the joint agreement of several raters with another rater. Biometrics, 32, 619-627.