Abstract
Autonomous systems are rapidly gaining the capacity to recognize their own errors and to employ social strategies that mitigate the trust deficit those errors create. While previous research has catalogued the effects of trust repair attempts in human-human relationships, much remains unknown about the consequences of similar strategies when administered by autonomous systems such as self-driving vehicles. Although people tend to treat computers as social actors, autonomous systems may span a wider spectrum of perceived human-likeness and may be subject to differing interpretations of the purposefulness of their errors. This paper examines how these factors affect the effectiveness of trust repair attempts administered by self-driving cars, and the results highlight the importance of considering human-likeness and purposefulness in the design of autonomous systems.
