Abstract
Artificial intelligence (AI) agency plays an important role in shaping humans’ perceptions and evaluations of AI. This study conceptually differentiates AI agency from human agency and examines how AI agency, manifested in source and language dimensions, may be associated with humans’ perceptions of AI. A 2 (AI source autonomy: autonomous vs. human-assisted) × 2 (AI language subjectivity: subjective vs. objective) × 2 (topic: traveling vs. reading) factorial design was adopted (N = 376). The results showed that autonomous AI was rated as more trustworthy, and that AI using subjective language was rated as more trustworthy and likable. Autonomous AI using subjective language was rated as the most trustworthy and likable and as delivering the highest chat quality. Participants’ AI literacy moderated the interaction effect of source autonomy and language subjectivity on trust and chat quality evaluations. The results are discussed in terms of human–AI communication theories and the design and development of AI chatbots.
