Abstract
Plan quality evaluation researchers typically evaluate plans according to whether they contain certain desirable features. Best practice dictates that plans be evaluated by at least two readers and that researchers report a measure of the extent to which the readers agree on whether the plans contain the desirable features. Established practice for assessing this agreement has been subject to criticism. We summarize this criticism, discuss an alternative approach to assessing agreement, and offer recommendations that plan quality evaluation researchers can follow to improve the quality of their data and the way they assess and report that quality.