Abstract
A meta-analysis of studies examining the interrater reliability of the standard practice of peer assessments of quality of care was conducted. Using the Medline, Health Planning and Administration, and SCISEARCH databases, the English-language literature from 1966 through 1991 was searched for studies of chance-corrected agreement among peer reviewers. The weighted mean kappa of 21 independent findings from 13 studies was .31. Comparison of this result with widely used standards suggests that the interrater reliability of peer assessment is quite limited and needs improvement. Research needs to be directed at modifying the peer review process to improve its reliability, or at identifying indexes of quality with sufficient validity and reliability that they can be employed without subsequent peer review.
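The chance-corrected agreement statistic referred to here is Cohen's kappa, defined as (p_o − p_e)/(1 − p_e), where p_o is observed agreement and p_e is the agreement expected by chance from the raters' marginal frequencies. A minimal sketch of the computation follows; the contingency counts and study weights are hypothetical illustrations, not data from the studies reviewed.

```python
def cohen_kappa(table):
    """Cohen's kappa for a square agreement table.

    table[i][j] counts cases rated category i by reviewer A
    and category j by reviewer B.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed agreement: proportion of cases on the diagonal.
    p_o = sum(table[i][i] for i in range(k)) / n
    # Chance agreement: product of the two reviewers' marginal proportions.
    row_marg = [sum(row) / n for row in table]
    col_marg = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_e = sum(r * c for r, c in zip(row_marg, col_marg))
    return (p_o - p_e) / (1 - p_e)

def weighted_mean_kappa(kappas, weights):
    """Weighted mean of per-study kappas (e.g., weighted by sample size)."""
    return sum(k * w for k, w in zip(kappas, weights)) / sum(weights)

# Hypothetical 2x2 table: two reviewers judging care acceptable/unacceptable.
table = [[20, 5],
         [10, 15]]
kappa = cohen_kappa(table)  # p_o = 0.7, p_e = 0.5, kappa = 0.4
print(round(kappa, 2))

# Hypothetical pooling of two study-level kappas by sample size.
print(weighted_mean_kappa([0.4, 0.25], [50, 150]))
```

By common benchmarks (e.g., Landis and Koch), a kappa of .31, as found in this meta-analysis, falls in the "fair" range of agreement beyond chance.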