Abstract
This study investigates how trust contagion in human-AI teams is reflected in linguistic alignment. In a resource allocation game, 42 participants collaborated with an AI and a confederate teammate trained to express high, neutral, or low trust in the AI. We analyzed lexical and structural alignment in participants' responses. Lexical alignment was significantly higher when confederates expressed low trust, suggesting that participants mirrored more of their teammate's words in response to uncertainty; structural alignment did not vary across conditions. These findings indicate that lexical alignment serves as a social adaptation to low-trust environments, potentially as a compensatory or affiliative response. Real-time tracking of lexical alignment could inform adaptive AI interfaces that detect and mitigate negative trust contagion. Future work should investigate non-verbal alignment and longer interactions to capture broader trust dynamics.