Abstract
AI is set to take over some tasks within the decision space that have traditionally been reserved for humans. As a result, human decision-makers who interact with AI systems must rationalize AI outputs, and they may have difficulty forming trust around such AI-generated information. Although a variety of analytical methods have provided some insight into human trust in AI, a more comprehensive understanding of trust may be gained from generative theories that capture its temporal evolution. An open-system modeling approach, which represents trust as a function of time with a single probability distribution, can therefore potentially improve the modeling of human trust in an AI system. The results of this study could improve machine behaviors that help steer a human's preferences toward a more Bayesian-optimal rationality, which is useful in stressful decision-making scenarios.
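As an illustration only (not the authors' model, which is behind the paywall), one minimal way to represent trust as a single probability distribution that evolves over time is a Beta belief about the AI's reliability, updated in a Bayesian way after each interaction; all names and the interaction history below are hypothetical.

```python
# Illustrative sketch: trust in an AI system as a Beta(alpha, beta) belief
# about the probability that the AI's output is reliable, updated after each
# observed outcome (conjugate Bayesian update). Not the paper's actual model.
from dataclasses import dataclass

@dataclass
class BetaTrust:
    alpha: float = 1.0  # pseudo-count of successful AI outputs
    beta: float = 1.0   # pseudo-count of failed AI outputs

    def update(self, ai_was_correct: bool) -> None:
        # Conjugate update: each outcome increments one pseudo-count.
        if ai_was_correct:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean_trust(self) -> float:
        # Posterior mean: expected probability that the AI output is reliable.
        return self.alpha / (self.alpha + self.beta)

trust = BetaTrust()
for outcome in [True, True, False, True]:  # hypothetical interaction history
    trust.update(outcome)
print(round(trust.mean_trust(), 2))  # 0.67 after 3 successes, 1 failure
```

Under this sketch, trust at any time is summarized by one distribution whose mean and variance change as evidence accumulates, which is the kind of temporal evolution the abstract attributes to generative, open-system approaches.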
