Abstract
Providing decision support to operators in command and control contexts requires careful assessment of its impacts on task performance. Here we describe a human-in-the-loop experiment using a naval air defense testbed to compare three conditions: 1) a baseline interface (DSSBASE); 2) one that displays the temporal proximity of radar aircraft (DSSTEMP); and 3) one that adds a change history panel (DSSCHEX). Threat-evaluation accuracy and cognitive models of participants’ judgments did not significantly differ across conditions. Eye-tracking data showed that DSSTEMP led to reduced verification of track attributes. Confusion matrices also differed across conditions: DSSTEMP led to more “two-categories-away” errors when hostile aircraft were erroneously classified as non-hostile or uncertain. We conclude that, in addition to examining standard metrics of task accuracy and response times, critical insights can be obtained by assessing how different decision-support designs may alter strategies and cognitive processes.