A kappa coefficient of agreement on a single rating category, K_SC, with an associated test of statistical significance, is proposed as a measure of interrater agreement when a single item or object is rated by multiple raters. A goodness-of-fit approach is used to test the statistical significance of all K_SC values beyond chance agreement.
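To make the idea concrete, here is a minimal sketch of how such a single-category, chance-corrected agreement statistic might be computed for one item rated by several raters, together with a chi-square goodness-of-fit test of the rating distribution against a chance model. The function name, the sample data, and the specific pairwise form of the chance correction are assumptions for illustration; the article's exact K_SC formula and its significance test may differ.

```python
from math import comb
from collections import Counter
from scipy.stats import chisquare

def kappa_single_category(ratings, category, chance_p=None):
    """Chance-corrected agreement on one category for a single item
    rated by m raters (illustrative pairwise form, not necessarily
    the exact K_SC definition proposed in the article).

    ratings  : list of category labels, one per rater (m >= 2)
    category : the single rating category of interest
    chance_p : assumed chance probability of this category;
               defaults to 1/k for k observed categories
    """
    m = len(ratings)
    k = len(set(ratings))
    p_c = chance_p if chance_p is not None else 1.0 / k
    n_c = sum(1 for r in ratings if r == category)
    p_obs = comb(n_c, 2) / comb(m, 2)  # rater pairs agreeing on this category
    p_exp = p_c ** 2                   # chance that both raters pick it
    return (p_obs - p_exp) / (1 - p_exp)

# Goodness-of-fit test of the full rating distribution against a
# uniform chance model, analogous in spirit to testing all K_SC
# values jointly for agreement beyond chance.
ratings = ["A", "A", "A", "A", "B", "C"]  # hypothetical data: 6 raters, 1 item
counts = Counter(ratings)
stat, p_value = chisquare(list(counts.values()))
print(kappa_single_category(ratings, "A"), stat, p_value)
```

In this sketch, a value near zero or below indicates agreement at or under what the chance model predicts, and a significant chi-square statistic suggests the observed ratings depart from chance overall.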