Abstract
Performance data need a context to be meaningfully interpreted. One method of providing context for an individual unit's performance is to compare it with other, similar units. This study compares three methods for selecting similar units: cluster groupings, index groups, and benchmark groups. Each method is evaluated on several criteria, primarily the minimization of within-group variance. Benchmark groups best reduce the variation within the selected groups, and they resist attempts to "label" the groupings. Cluster groups are a close second to benchmark groups in minimizing within-group variability and are considerably easier to compute and administer. However, clustering permits labeling that could stigmatize the groups and introduces threshold effects that might influence judgments about performance. Index groups, while simple, perform poorly on all of the other criteria.
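To make the comparison criterion concrete, the sketch below illustrates (with synthetic data and a hedged, minimal implementation, not the study's actual procedure) two of the grouping methods and the pooled within-group variance used to evaluate them: index groups formed by quantile bins on a single index variable, and cluster groups formed by a simple k-means over several unit characteristics. All variable names and parameters here are illustrative assumptions.

```python
import numpy as np

def within_group_variance(metric, labels):
    """Pooled within-group variance of a performance metric,
    weighting each group by its size (law of total variance)."""
    total = 0.0
    for g in np.unique(labels):
        vals = metric[labels == g]
        total += len(vals) * vals.var()
    return total / len(metric)

def index_groups(index_var, n_groups=3):
    """Index groups: units binned by quantiles of one index variable."""
    cuts = np.quantile(index_var, np.linspace(0, 1, n_groups + 1)[1:-1])
    return np.digitize(index_var, cuts)

def cluster_groups(features, k=3, iters=50, seed=0):
    """Cluster groups: a minimal k-means over unit characteristics."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each unit to its nearest center, then update centers.
        dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Synthetic illustration: 300 units, two descriptive features,
# and a performance metric driven by both features plus noise.
rng = np.random.default_rng(42)
features = rng.normal(size=(300, 2))
metric = features @ np.array([1.0, 0.5]) + rng.normal(scale=0.3, size=300)

for name, labels in [("index", index_groups(features[:, 0])),
                     ("cluster", cluster_groups(features))]:
    print(name, round(within_group_variance(metric, labels), 3))
```

By the variance decomposition, either grouping's pooled within-group variance is at most the overall variance of the metric; a grouping method is better, on this criterion, the further it pushes that number down.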
