Abstract
Artificial intelligence (AI) is increasingly integrated into high-risk domains where the consequences of error vary. Understanding how these consequences affect trust in AI is important for safe human-AI collaboration. In an AI-assisted experiment, 35 participants completed trials under low- and high-consequence conditions while eye-tracking, trust-rating, and performance data were collected. Under high-consequence conditions, participants spent less time fixating on the AI's correct suggestions and more time fixating on manual verification areas when the AI was incorrect, indicating reduced reliance on the AI. Notably, participants avoided initial errors under high-consequence conditions, suggesting increased vigilance. These findings show that error consequences significantly influence trust in and decision-making with AI. Future research should examine how consequence levels shape trust dynamics, particularly after initial human errors, by analyzing specific patterns of human-AI interaction. Understanding these dynamics can inform the design of reliable AI systems that support safer decision-making in high-risk industries.
