Abstract
Many of today's complex environments require operators to interact with sophisticated systems that are multi-modal and automated. Despite considerable advancements in systems design, performance is not always optimal. Research suggests that one reason for suboptimal performance may be inaccuracies in operators' understanding of how these complex systems work. These inaccuracies may lead to: (a) a bias towards use of automation, (b) over-trusting the automation, or (c) diffusing responsibility to the automation. Unfortunately, few theoretically derived strategies exist to guide training design that builds an appropriate understanding of how the system works in addition to how to work the system. In this paper we take a training perspective to describe how and why automation may or may not be used to support decision making. Specifically, the paper will briefly: (1) discuss the training problems with human-computer teams, (2) describe a number of theoretical underpinnings for the problem, (3) provide an overview of challenges for addressing the problem, (4) propose two potential research areas that may mediate the challenges, and (5) describe expected payoffs for research in these areas.