Abstract
Understanding how trust spreads in Human-Human-AI (HHA) teams is critical to designing adaptive AI teammates. While prior research demonstrates that one human’s trust can influence another’s—a phenomenon known as trust contagion—the mechanisms underlying this process in human-AI teams remain unclear. We examined how trust cues affect decision-making in triadic teams consisting of a participant, a confederate, and an AI agent. The confederate’s expressed trust in the AI (high, low, or neutral) was experimentally manipulated, and team interactions were analyzed using Grounded Theory (GT) and Structural Topic Modeling (STM). GT captured rich, contextual themes, while STM identified semantic patterns at scale. Results showed that high-trust teams reached rapid consensus with minimal discussion, whereas low-trust teams engaged in deliberative trust calibration. We found conceptual convergence between GT and STM, demonstrating the value of integrating human-driven and computational methods to understand trust contagion and informing the design of AI teammates that adapt to trust dynamics.
