Abstract
Many rank-order tasks, developed as experiential exercises for learning situations, use a dispersion scoring method based on the absolute difference between a respondent's answer and a solution key. This article shows that when the respondent is a team, it is not appropriate to compare the team's score directly to the average of its members' individual scores, that is, a simple statistical aggregation of averages. A bias is present that is an effect of team size. A quick and accurate scoring algorithm is presented to adjust for this bias.
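The dispersion scoring method the abstract refers to can be sketched as follows. This is a minimal illustration of the generic absolute-difference score, not the article's bias-adjustment algorithm (which is not reproduced here); the function name and example rankings are illustrative assumptions.

```python
def dispersion_score(response, key):
    """Sum of absolute differences between a respondent's rank for each
    item and the solution key's rank for that item (lower is better).

    Note: this is the standard dispersion score only; it does not apply
    the team-size bias adjustment described in the article.
    """
    if len(response) != len(key):
        raise ValueError("response and key must rank the same items")
    return sum(abs(r - k) for r, k in zip(response, key))

key = [1, 2, 3, 4, 5]                     # hypothetical solution key
print(dispersion_score([2, 1, 3, 5, 4], key))  # 4: four items off by one
print(dispersion_score(key, key))              # 0: perfect agreement
```

A team's answer scored this way cannot, per the abstract, be compared directly to the mean of such scores over the team's individual members without correcting for team size.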
