Abstract
Automation has become increasingly prevalent in modern society. With this progress, operators have increasingly shifted from active controllers (directly involved with the system) to supervisory controllers (indirectly managing a system). Accompanying this evolution of the operator's role, there is a need to explore the factors that influence effective cooperation between operators and semi-autonomous agents. Two key factors moderating this relationship are operator trust in the agent and the complexity of the task itself (i.e., the number of agents an operator monitors). This work examines trust and automation theory as it applies to an operator monitoring a complex, two-agent, simulated search-and-rescue task. The effect of the source characteristics of the two automated systems will be evaluated across reliability conditions for their impact on reliance and perceived trust of automation. The purpose of this research is to extend knowledge of the theory of human-agent trust interaction and to offer potential applied benefits by identifying the aspects of system design that optimize human-agent interaction in a complex and possibly imperfect system.