Abstract
System transparency plays a critical role in user trust and perceived reliability in human-AI decision-making scenarios. However, there is no clear consensus on the level of transparency needed to achieve well-calibrated trust in a system, which is shaped by the user's perceived reliability of the AI, confidence in the AI, and ease of understanding of the AI's output. Participants (n = 216) completed a decision-making task across four AI-assisted decision-making scenarios, with each participant randomly assigned to one of three transparency levels: low, medium, or high. Results show that higher transparency levels improved trust (β = .667, p < .001), perceived reliability (β = .595, p < .001), confidence in the AI's accuracy (β = .553, p < .001), and ease of understanding (β = 1.161, p < .001). These findings indicate that increasing the amount of information presented to the user increases understanding, trust, perceived reliability, and confidence in the AI.