Abstract
In this paper we consider the problem of assessing agreement between two raters whose ratings are given separately on a 2-point nominal scale, and we critically examine some features of Cohen's kappa statistic (K_C), which is widely used in this context. We point out some undesirable features of K_C and, in the process, propose two modified kappa statistics. The properties and features of these statistics are explained with illustrative examples.
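As background for the statistic under examination, the following is a minimal Python sketch of Cohen's kappa computed from a 2x2 contingency table of two raters' dichotomous ratings; the table layout, function name, and example counts are illustrative assumptions, and the paper's two modified kappa statistics are not reproduced here.

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 contingency table.

    table[i][j] = number of subjects assigned category i by rater 1
    and category j by rater 2 (categories 0 and 1).
    """
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of subjects on the diagonal.
    p_o = (table[0][0] + table[1][1]) / n
    # Chance-expected agreement from the marginal totals.
    row_totals = [sum(r) for r in table]
    col_totals = [table[0][j] + table[1][j] for j in range(2)]
    p_e = (row_totals[0] * col_totals[0] + row_totals[1] * col_totals[1]) / n**2
    # Kappa: agreement in excess of chance, rescaled to at most 1.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: 40 subjects, moderate agreement.
print(cohens_kappa([[20, 5], [4, 11]]))  # ~0.526
```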
