Abstract
The use of Artificial Intelligence (AI) has grown rapidly in the service industry and AI’s emotional capabilities have become an important feature for interacting with customers. The current research examines personal disclosures that occur during consumer interactions with AI and humans in service settings. We found that consumers’ lay beliefs about AI (i.e., a perceived lack of social judgment capability) lead to enhanced disclosure of sensitive personal information to AI (vs. humans). We identify boundaries for this effect such that consumers prefer disclosure to humans over AI in (i) contexts where social support (rather than social judgment) is expected and (ii) contexts where sensitive information will be curated by the agent for social dissemination. In addition, we reveal underlying psychological processes such that the motivation to avoid negative social judgment favors disclosing to AI whereas seeking emotional support favors disclosing to humans. Moreover, we reveal that adding humanlike factors to AI can increase consumer fear of social judgment (reducing disclosure in contexts of social risk) while simultaneously increasing perceived AI capacity for empathy (increasing disclosure in contexts of social support). Taken together, these findings provide theoretical and practical insights into tradeoffs between utilizing AI versus human agents in service contexts.