Abstract
This study examines how artificial intelligence (AI) anthropomorphism influences user trust and decision logic in digital environments. Drawing on theories of social cognition, trust formation, and dual-process decision-making, a moderated mediation model is developed to explain how anthropomorphic design features (e.g., human-like voice, avatars, and expressions) affect users’ cognitive processing. A controlled experimental design, complemented by simulation-based validation, provides robust evidence for the proposed relationships. The findings reveal that anthropomorphic cues significantly enhance user trust, which in turn mediates their impact on decision logic. However, this effect is contingent on contextual factors: user experience strengthens the anthropomorphism–trust pathway, while task criticality attenuates the influence of trust on decision-making. These results underscore that anthropomorphism functions as a social heuristic, effective in routine or low-risk tasks but less influential in high-stakes contexts where analytic reasoning dominates. The study contributes to the fields of human–computer interaction and information systems by clarifying when and how anthropomorphism fosters trust and shapes cognition. Practically, it offers design guidelines for adaptive AI systems that dynamically adjust anthropomorphic cues to balance engagement, trust calibration, and decision quality.
