Abstract
During the early stages of designing user interfaces within the software industry, it is common for several alternatives to emerge before arriving at a final design. Although formative testing can be used to assess user preference among the alternatives, resource constraints common in industry can easily render such testing impractical. This paper introduces comparative differential rating (CDR) as a simple, efficient formative testing method for assessing user preference among design alternatives. Users assign preference ratings to each alternative by making paired comparisons of the alternatives on a differential scale. Alternatives rated lower than the most highly rated alternative are discarded, while those remaining are candidates for the final design. Unlike more complex paired comparison methods that use multiple criteria to obtain ratings, such as the analytic hierarchy process (AHP), CDR is intended for simple, frequent cases early in the design process where multiple alternatives emerge and user preference is the only criterion of interest in determining the final design. In further contrast to AHP, CDR typically requires fewer comparisons, is measured on an interval scale, and is analyzed using inferential statistics.
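To make the paired-comparison idea concrete, the following is a minimal sketch of how CDR-style ratings might be computed. It assumes a symmetric differential scale from -3 (strongly prefer the second alternative) to +3 (strongly prefer the first) and simple mean-difference scoring; the scale, scoring rule, and function names here are illustrative assumptions, not the paper's exact procedure, and the inferential step (e.g., testing whether lower-rated alternatives differ significantly from the top-rated one) is omitted.

```python
# Hypothetical sketch of CDR-style scoring; not the paper's exact method.
# Each user rates every pair (i, j) on a differential scale from
# -3 (strongly prefer j) to +3 (strongly prefer i).
from statistics import mean

def cdr_ratings(alternatives, comparisons):
    """comparisons: dict mapping (i, j) pairs to a list of per-user
    differential ratings; positive values favor alternative i."""
    scores = {a: [] for a in alternatives}
    for (i, j), diffs in comparisons.items():
        for d in diffs:
            scores[i].append(d)    # a point for i is a point against j
            scores[j].append(-d)
    return {a: mean(vals) for a, vals in scores.items()}

# Example: three design alternatives rated by three users.
alts = ["A", "B", "C"]
data = {
    ("A", "B"): [2, 1, 3],    # all users prefer A over B
    ("A", "C"): [1, 0, 2],
    ("B", "C"): [-1, -2, 0],  # C mostly preferred over B
}
print(cdr_ratings(alts, data))  # highest-rated alternative(s) advance
```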
