Abstract
This study investigated how people respond to driving errors committed by human drivers versus algorithm-controlled autonomous driving systems, focusing on how the driving agent and error severity shape error tolerance and trust. Drawing on Social Identity Theory and the Perfect Automation Schema, we proposed that algorithm-operated autonomous driving systems are perceived as out-group entities held to extremely high performance standards; consequently, even minor errors committed by these systems may elicit disproportionately negative reactions. We conducted an online experiment employing a 2 (driving agent: algorithm vs. human) × 2 (error severity: fatal vs. minor) between-subjects design. Participants (N = 800) were randomly assigned to read one of four driving error scenarios and then evaluated the error as well as the driving agent’s trustworthiness. Results revealed a significant interaction between driving agent and error severity. When the error was fatal, both driving agents received highly negative evaluations on both error tolerance and trust. By contrast, when the error was minor, participants were more tolerant of the human error and expressed greater trust in the human driver than in the algorithm. These findings suggest that even minor, noncritical malfunctions in autonomous driving systems can undermine users’ confidence and amplify negative evaluations. Overall, this study highlights social and cognitive biases in how people perceive and judge autonomous driving systems, deepening our understanding of the algorithm aversion phenomenon.
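To make the reported design concrete, the following is a minimal sketch of how a 2 × 2 between-subjects interaction on a trust measure could be tested, here in Python with statsmodels. The data, cell means, variable names, and effect pattern are hypothetical illustrations of the design the abstract describes, not the study’s materials or results.

```python
# Illustrative only: simulated data and a two-way ANOVA of the kind the
# abstract describes (2 driving agents x 2 error severities, between-subjects).
# All names, cell means, and values here are hypothetical, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_cell = 200  # 4 cells x 200 = N of 800, matching the reported sample size

rows = []
for agent in ("algorithm", "human"):
    for severity in ("fatal", "minor"):
        # Hypothetical pattern mirroring the reported result: low trust after
        # fatal errors for both agents; for minor errors, higher trust in the
        # human driver than in the algorithm.
        if severity == "fatal":
            mean = 2.0
        else:
            mean = 4.5 if agent == "human" else 3.5
        trust = rng.normal(loc=mean, scale=1.0, size=n_per_cell)
        rows.append(pd.DataFrame(
            {"agent": agent, "severity": severity, "trust": trust}))

df = pd.concat(rows, ignore_index=True)

# Two-way between-subjects ANOVA: main effects plus the agent x severity
# interaction term that the abstract reports as significant.
model = smf.ols("trust ~ C(agent) * C(severity)", data=df).fit()
print(anova_lm(model, typ=2))
```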