Abstract
Large language models (LLMs) can engage in natural-sounding conversations with humans, showcasing unprecedented capabilities for information retrieval and automated decision support. They have disrupted human–technology interaction and the way businesses operate. However, technologies based on generative artificial intelligence are known to hallucinate, misinform, and display biases introduced by the massive datasets on which they are trained. Existing research indicates that humans may unconsciously internalize these biases, which can persist even after users stop interacting with the programs. In this study, the authors explore the cultural self-perception of LLMs by prompting ChatGPT (OpenAI) and Bard (Google) with value questions derived from the GLOBE (Global Leadership and Organizational Behavior Effectiveness) project. The findings reveal that the LLMs' cultural self-perception aligns most closely with the values of English-speaking countries and of countries characterized by economic competitiveness. It is crucial for all members of society to understand how LLMs function and to recognize their potential biases. If left unchecked, the "black-box" nature of AI could reinforce human biases, leading to the inadvertent creation and training of even more biased models.