Cohen's kappa, which measures agreement between two observers on a discrete nominal scale, is extended to measure agreement over time on continuous nominal scales. The continuous kappa coefficient avoids the problems created by arbitrarily dividing real-time durations into presence/absence frequencies within discrete intervals. The extension is simple, but issues of independence and of the number of observations complicate significance testing.
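The continuous-time idea can be illustrated with a short sketch. Here each observer's record is a list of `(start, end, category)` segments spanning a total observation time `T`; observed agreement is the fraction of time both observers record the same category, and chance agreement is computed from each observer's marginal time-in-category proportions, by analogy with the discrete-kappa formula. This is an illustrative reconstruction under those assumptions, not the paper's exact formulation; the function and variable names are hypothetical.

```python
def durations_by_category(segments):
    """Total time an observer spends in each category."""
    d = {}
    for start, end, cat in segments:
        d[cat] = d.get(cat, 0.0) + (end - start)
    return d

def matching_duration(a, b):
    """Total duration during which both observers record the same category."""
    total = 0.0
    for s1, e1, c1 in a:
        for s2, e2, c2 in b:
            if c1 == c2:
                # length of the overlap between the two segments (0 if disjoint)
                total += max(0.0, min(e1, e2) - max(s1, s2))
    return total

def continuous_kappa(a, b, T):
    """Kappa from continuous records: (p_o - p_e) / (1 - p_e),
    with time proportions replacing discrete-interval frequencies."""
    p_o = matching_duration(a, b) / T
    da, db = durations_by_category(a), durations_by_category(b)
    p_e = sum(da.get(c, 0.0) * db.get(c, 0.0)
              for c in set(da) | set(db)) / (T * T)
    return (p_o - p_e) / (1.0 - p_e)
```

For example, two identical records give kappa = 1, while a record compared against an observer who codes a single category for the whole session gives kappa = 0, mirroring the behavior of discrete kappa under perfect and chance-level agreement.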