Abstract
This study investigated how users’ “belief in a just world” (BJW) affects the persuasiveness of recommendations from artificial intelligence (AI) agents. We experimentally tested the prediction that users’ preferences for human versus AI agents vary with their BJW levels. The results revealed that individuals with high BJW rated human agents’ recommendations more favorably than AI agents’ recommendations, whereas individuals with low BJW preferred those of AI agents. This interaction was mediated by perceptions of the agents’ benevolence and selfishness, which varied with BJW level and agent type: high-BJW individuals perceived human agents as more benevolent and less selfish, whereas low-BJW individuals showed the opposite pattern. In contrast, perceptions of AI agents’ benevolence and selfishness were unaffected by BJW levels. This study offers theoretical insight by identifying BJW as a key factor in the persuasive effects of AI agents and by suggesting perceived benevolence and selfishness as the underlying psychological mechanisms. The findings also offer practical guidance for designing AI agent strategies tailored to consumers’ BJW levels.