Abstract
Trust research in human-AI interaction (HAI) over the past decades has identified various factors that influence trust dynamics in dyadic relationships between a single human and an AI agent. The current study addresses the limited exploration of non-dyadic HAI scenarios by examining trust dynamics across two referents: the AI and other humans. Using a custom-developed simulated mass-evacuation testbed, we focus on a multi-operator-single-AI (MOSA) scenario, in which multiple individuals must evacuate to a safe area with the assistance of an AI guide. Participants can also report roadblocks to help others, at a personal cost. We investigate trust dynamics toward both the AI and other humans, specifically examining how trust changes after each waypoint is passed. Our goal is to understand the effects of information transparency and of individual compliance and reporting behaviors (at time t) on trust dynamics (trust_{t+1} − trust_t). The study highlights that trust dynamics vary significantly depending on the referent.