Abstract
Large Language Models (LLMs) have emerged as unprecedented drivers of cultural homogenization, operating at scales and speeds that exceed those of all previous technologies. This paper examines how LLMs reduce cultural diversity across three cultural domains. Drawing on recent empirical studies, we demonstrate that LLMs disproportionately reflect a narrow demographic (primarily Western, liberal, high-income, highly educated, male populations from English-speaking nations) while marginalizing not only non-Western cultures but also diverse groups within Western societies, including older adults, religious communities, and minority populations. Unlike earlier technologies that primarily transmitted cultural content, LLMs actively shape communication styles and knowledge systems, creating a feedback loop in which AI-generated content becomes training material for future systems, progressively standardizing human expression with each generation. These homogenizing effects extend beyond representation to behavioral influence, reshaping how users communicate and make decisions. We propose targeted policy interventions across the LLM development pipeline and emphasize the critical need for standardized benchmarks to evaluate how well LLMs understand and represent diverse cultures at every stage. These interventions require coordinated action among AI developers, policymakers, social scientists, and diverse cultural communities to ensure that cultural diversity becomes a non-negotiable requirement rather than an optional enhancement. Without such efforts, AI risks eroding humanity's cultural plurality, replacing diverse traditions with homogenized norms shaped by a narrow subset of the global population.