Abstract
Comparing heuristic evaluation methods is important when developing new heuristic sets, to ensure their effectiveness and utility. However, comparing different sets of heuristics requires a common baseline against which the comparison can be made, usually a set of usability problems drawn from a particular interface. This baseline is often established by having evaluators evaluate the system and produce a set of usability problems for each method in question. A problem arises because different methods produce different sets of problems, introducing validity concerns and ambiguity when reconciling the disparate problem sets. We address this problem by illustrating a new comparison technique in which predetermined usability issues are presented to the evaluators up front, followed by assessment of the thoroughness, reliability, and cost of the target methods. This simplifies the comparison of method effectiveness and ameliorates the validity concerns.
