Abstract
Conversational AI chatbots based on large language models offer promising applications, but their effective use requires that they be accepted and appropriately trusted. Many chatbots provide users with citations as a form of explanation and social proof, but the effects of citations on user trust when response content contradicts a user’s prior beliefs are unclear. A 3 × 2 within-subjects survey analyzed 66 users’ trust in conservative, moderate, and liberal chatbots’ responses to political questions, with and without citations. Responses were categorized as confirming or contradicting existing beliefs based on participants’ self-reported political lean. A linear mixed-effects model showed that users place significantly higher trust in responses that are moderate or that confirm their beliefs, but that citations do not significantly affect trust. These results suggest that citations may not affect trust on politically controversial topics, and that balanced or moderate responses are seen as trustworthy by users with a wide range of prior beliefs.
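The analysis described above can be sketched as a linear mixed-effects model with a random intercept per participant and fixed effects for response lean, citation presence, and belief confirmation. The following is a minimal illustrative sketch using simulated data; all variable names, effect magnitudes, and the data itself are assumptions for demonstration, not the study's actual dataset or model specification.

```python
# Hypothetical sketch of a linear mixed-effects analysis of trust ratings,
# assuming a random intercept per participant (the within-subjects grouping).
# All numbers below are simulated and purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

n_participants = 66
leans = ["conservative", "moderate", "liberal"]

rows = []
for pid in range(n_participants):
    subject_offset = rng.normal(0, 0.5)  # per-participant random intercept
    for lean in leans:
        for cited in (0, 1):  # 3 x 2 within-subjects conditions
            # In the study, confirmation was derived from self-reported
            # political lean; here it is simulated for illustration.
            confirms = int(rng.random() < 0.5)
            trust = (
                4.0
                + 0.6 * (lean == "moderate")  # assumed boost for moderate responses
                + 0.5 * confirms              # assumed boost for belief-confirming responses
                + 0.0 * cited                 # citations assumed to have no effect
                + subject_offset
                + rng.normal(0, 0.8)
            )
            rows.append(dict(pid=pid, lean=lean, cited=cited,
                             confirms=confirms, trust=trust))

df = pd.DataFrame(rows)

# Random intercept grouped by participant; fixed effects as in the abstract.
model = smf.mixedlm("trust ~ C(lean) + cited + confirms", df, groups=df["pid"])
result = model.fit()
print(result.params)
```

In such a model, the random intercept absorbs stable between-participant differences in baseline trust, so the fixed-effect coefficients reflect within-participant differences across conditions.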
