Abstract
The trust that human operators place in automated diagnostic aids is among the most critical psychological factors influencing reliance on decision support systems. Studies of human interaction with automation have shown that users tend to apply the norms of human-human interpersonal interaction to their dealings with ‘intelligent machines’. Nevertheless, subtle differences exist in how humans perceive and react to automated aids compared with human teammates. The present review compares trust development in human-automation teams with that in human-human partnerships, focusing on dyads consisting of a primary decision maker and either a human ‘advisor’ or an intelligent automated decision support system. A conceptual framework that synthesizes and contrasts trust development in humans versus automation is proposed. Implications of this research include the improved design of decision support systems by incorporating into automated aids features that elicit operator responses mirroring those found in human-human interpersonal interaction.
