Abstract
Because nominal-scale judgments cannot be directly aggregated into meaningful composites, adding a second rater is usually motivated by a desire to estimate the quality of a single rater's classifications rather than to improve reliability. When raters agree, the aggregation problem does not arise. Nevertheless, a proportion of this agreement is likely to have occurred by chance, so reliability issues remain even when attention is restricted to nominal judgments on which the raters concur. In this article, the reliability of such agreement cases is addressed within the framework of a latent class model that includes a systematic agreement parameter. It is shown that, when attention is limited to agreement cases, the value of this parameter increases as expected. Circumstances under which agreement-case reliability is important are discussed.
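The chance-agreement point can be illustrated with Cohen's kappa, a standard chance-corrected agreement index (used here only as an illustration; it is not the latent class model of the article). The sketch below uses a hypothetical two-rater confusion matrix over three nominal categories, with chance agreement estimated from the raters' marginal distributions:

```python
import numpy as np

# Hypothetical 2-rater confusion matrix over 3 nominal categories:
# rows = rater A's category, columns = rater B's category.
counts = np.array([[40,  5,  5],
                   [ 4, 20,  6],
                   [ 6,  4, 10]])

n = counts.sum()
p_o = np.trace(counts) / n       # observed agreement (diagonal proportion)
p_a = counts.sum(axis=1) / n     # rater A's marginal category proportions
p_b = counts.sum(axis=0) / n     # rater B's marginal category proportions
p_e = (p_a * p_b).sum()          # agreement expected by chance alone
kappa = (p_o - p_e) / (1 - p_e)  # agreement beyond chance, rescaled to [.., 1]

print(p_o, p_e, kappa)
```

For this illustrative matrix, observed agreement is 0.70, yet roughly 0.38 of it would be expected by chance from the marginals, so the chance-corrected agreement is only about 0.52; the cases on the diagonal are thus not all reliably classified, which is the concern the abstract raises.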
