Abstract
Artificial Intelligence (AI) is a social structure cloaked in technical blackboxes and polarizing narratives. Sociology is the study of society from a structural perspective, delineating the interplays between individuals, organizations, institutions, and cultures. Sociology’s disciplinary lens is necessary to understand AI’s infrastructural force, reflecting and shaping politics, economies, knowledge, and interpersonal spheres. Yet to date, the field of AI research has been dominated by computer science and engineering, underutilizing the explanatory power of sociological theories and methods honed by the discipline over more than a century. Building on a collection of papers that illustrate AI as both a product and driver of social patterns and processes, we demonstrate that the diffusion of AI necessitates sociological analyses now more than ever, positioning sociology at the forefront of AI studies’ next era.
Artificial intelligence (AI) has evolved from a collection of loosely connected ideas, narratives, and technical innovations into a social infrastructure—mediating social relationships, economic structures, politics, and the production of knowledge (Sloane, 2026). This infrastructural turn imbues AI with the power to organize social life, for good and for ill, making it a key force of societal production, an object of contestation, and a mechanism for social change (Davis & Williams, 2026).
The social construction of “AI” as a fluid and shifting concept, AI’s material effects on individuals and institutions, its cultural effects at local and global scales, its reliance on human labor and human data, its infusion with politics and values, and its entwinement with status and power together make AI fundamentally sociological. These themes are illustrated across the articles that comprise this special issue, prompted by the question: what is sociological about AI?
Our purpose here is to build from AI’s evident sociological foundations to make a clear and compelling case for the rise of sociology at the forefront of AI research.
Why AI Research Needs Sociology and Sociologists
Historically, AI has cycled through periods of intensive development and stagnation, fragmented research agendas, disciplinary boundaries, and fluctuating public attention. Over the past decade, however, AI has reached a social and technical peak. Related advances in neural networks and transformer models have expanded AI techniques and applications. Such developments have spurred massive investment in the AI industry, centralizing power in corporate research labs and invigorating longstanding geopolitical tensions as part of an “AI arms race” (Hao, 2025). Led by computer science and engineering, these developments have also animated hype-fueled speculations about AI as a sure gateway to the 4th Industrial Revolution, economic prosperity, and political and military dominance, alongside panicked assertions about the end of humanity and assurances of doom (Acemoglu & Johnson, 2024; Bender & Hanna, 2025; Lindgren, 2024; Vallor, 2024).
These extreme competing visions—of AI as the road to prosperity and progress or the cause of humanity’s decline—are at once reductive and overdetermined. They exaggerate AI’s capabilities, oversimplify the human condition, and underestimate the power of people to shape their social worlds. The time to mobilize this latter power is now, as AI’s peak coincides with, and is implicated in, the production and amplification of structural inequities (Davis et al., 2021; Joyce & Cruz, 2024; Joyce et al., 2021; Sloane & Moss, 2019), democratic erosion (Coeckelbergh, 2024; Kreps & Kriner, 2023), geopolitical conflict (Deeks, 2025; Erskine & Miller, 2024; Suchman, 2024), monopolistic corporate capture (Acemoglu & Johnson, 2024; Hao, 2025), and environmental crisis (Climate Action Against Disinformation, Check My Ads, Friends of the Earth, Global Action Plan, Greenpeace, & Kairo, 2024; Luccioni et al., 2025).
Harnessing the potential for deliberate intervention and co-determining futures requires careful observation and analysis. Such skills and methods exceed the scope of purely technical expertise. Understanding how macrophenomena are entangled with new technologies and their rapid societal integration is, instead, the primary concern of sociology. Given the global state of AI, it seems obvious that we can no longer ignore what sociology, and sociologists, have to offer the AI discourse.
The field of AI studies now calls for sociology as a cornerstone and bedrock, elevating sociological theory and methods as not only part of AI knowledge production, but the field’s guidepost and leading edge.
Sociology is the study of how social systems inform and inflect each other, spanning micro, meso, and macro-level analyses. As a discipline, it deploys myriad methods—qualitative, quantitative, experimental, and interpretive. The job of sociology and of the sociologist is to unearth the patterns that create a social order, making that order observable, analyzable, and thus malleable rather than inevitable.
As applied to a specific field site or domain, sociologists work to demonstrate the processes, practices, and relations that constitute a dynamic whole, parsing how the elements and the whole manifest individually, interpersonally, organizationally, and institutionally. Sociologists can decipher AI’s hidden processes and relations, just as they have done with complex social forces since the discipline’s inception—from Durkheim’s structural theory of suicide, to Weber’s link between religion and economy, to W.E.B. Du Bois’ writings on the psychological experience of Blackness in a White society (Du Bois, 1903; Durkheim, 1897; Weber & Parsons, 1930). Revealing social facts where they are otherwise nonobvious and exposing social patterns to understand their causes and effects is sociology’s forte. “Blackboxes” are not new to sociologists, but rather the starting place from which theories and methods stem.
As AI becomes an infrastructural and organizing force (Sloane, 2026), sociology must move to the center of scholarly and policy conversations about how this infrastructure is constructed, maintained, contested, and transformed. Infrastructures operate in the background, typically becoming visible only upon breakage—an experience that is distributed unevenly across society and warrants a reorientation toward repair (Davis & Williams, 2026; Sloane, 2026). Sociological methods allow researchers to elucidate AI infrastructures prior to their disruption, and to explain breakages once they have occurred. In turn, sociological theories and concepts give these observations depth and nuance, departing from the politicized extremes that too often dominate AI discourse (Bender & Hanna, 2025; Sloane et al., 2024).
Sociology’s clarity of insight and rich theoretical traditions are exemplified in the collection of articles that make up this special issue. On their own, each piece advances the field of AI research through empirical study of specific AI systems and sites of engagement, conceptual explications, or agenda setting statements. Together, these articles build the case for sociology as a focal lens on AI in society, rendering AI infrastructures knowable and subject to change.
Issue Overview
We open this special issue in New York City’s tech scene with Bohner and Vertesi’s (2025) interrogation of AI-hype. Rooted in microeconomic sociology, the authors use grounded, qualitative methods to show how East Coast AI practitioners mobilize AI-hype as a boundary object, defining themselves against the “hype-beasts” of San Francisco. Yet, this distinction is largely symbolic, as New York’s tech elites reinforce and profit from the same extravagant imaginaries of AI’s transformative character and interminable rise. Moving between lenses of culture and economy, the authors show how people and processes create fiscal realities and monetary flows. Bohner and Vertesi at once illuminate a local scene, unravel microeconomic phenomena, and move towards a theoretical program for the sociological study of hype (Bohner & Vertesi, 2025).
The issue goes next to Latin America, with López et al.’s (2025) study of publicly funded AI projects in Chile. The authors identify five dominant narratives across these projects: AI for productivity gains, as a transformative force, as a literacy need, for smart surveillance, and as situated and creative inquiry. These narrative frames set economic, intellectual, and policy agendas, crafting social and technical priorities through resource distribution. López et al. (2025) show how public grants are not just instrumental, but actively performative in ways that shape innovation and the future of technology development.
Continuing the theme of performativity, Moradi et al. (2025) explore three cases of human-technology encounters in frontline work, analyzing automated self-checkout, commercial vehicle inspection with electronic driving logs, and prescription drug monitoring programs (PDMPs) in pharmacy professions. They apply Erving Goffman’s canonical concepts of “face-work” and “cooling” to reveal how individuals, organizations, institutions, and material technologies constitute a burgeoning interaction sphere, illuminating the processes by which AI technologies change the nature of frontline work. These authors apply a central sociological theory of interaction in everyday life to novel technological forms, while drawing generalizable insights across multiple sites of study—demonstrating the expansive utility of sociological inquiry.
Baert et al. (2025) take a reflexive turn, with the authors recognizing themselves as culture producers in the narrative construction of AI. In doing so, they actively juxtapose their approach, which presents a dialogue between six authors on related but distinct issues—intellectual property, intimacy, evidentiary power, race, reproduction, and work—against the smooth and homogenizing character of generative AI. The deliberate style of the paper is as much a part of its contribution as the analyses presented therein. These analyses spread across multiple empirical questions and sociological theories, addressing agency, authorship, identity, visibility, inequality, and hype as they constitute, and are (re)created by, an infusion of large language models (LLMs) and other generative AI systems across personal spheres and social institutions.
Extending this focus on reflexivity, Harvey (2025) positions social science as a mediating influence that shapes AI culture and practice. This is at once a story about how sociology molds that which it studies, and a meaningful contribution to the sociology of knowledge. This piece prompts those who study AI and other technologies to critically reflect on the way research not only reveals but also constructs the social and material conditions that constitute the social world. Though this paper focuses on studies of AI in creative industries specifically, the point translates across domains, inducing critical self-awareness as an integral element of sociological methods.
Alvarado (2025) brings us back in history to the work of computer scientist Alan Turing in the 1930s. The paper draws attention to Turing’s cultural and linguistic focus when defining machine intelligence, showing how this has shaped the way LLMs have been developed and deployed. Bridging the past to the present, Alvarado shows how LLMs’ imitative character is a core feature of their technical design, underpinned by assumptions about human intelligence as inherently social, because it is mimetic.
Furthering the exploration of LLMs, Lepp and Alvero (2025) clarify how these systems shape social relations. Drawing on a structural framework of technological affordances, they present linguistic affordances as a conceptual tool for both understanding the social effects of LLMs and considering how LLMs are—and can be—used in research. They illustrate this concept and its manifold functions through case examples of LLMs in university admissions and scientific publishing, offering theoretical advances that can underpin and augment emergent methods by answering questions about how LLM design shapes both opportunity structures and the production of legitimate knowledge.
Underneath AI technologies are vast datasets. Our imagination of AI often involves data abundance, whereby all data are swept up and processed into programs. But Orr (2025) draws on the rich sociological tradition of studying absences and omissions to illuminate the social role of what is missing in data, and to what effect. The work makes a vital methodological point about observing the invisible, and a sociological point about the human choices that become reified and erased. Paying attention to these “ghosts in the data,” as Orr describes them, is necessary to understand how and why AI systems operate as they do.
Expanding the body of work addressing opacity in AI models, Hardcastle et al. (2025) refute the now axiomatic assumption that AI systems are blackboxed and impenetrable. The trouble, they argue, is the field’s focus on technical methods of observation, rather than attending to the social assemblages that produce AI outputs. In earlier work, a subset of these authors showed how AI systems can be observed through sociomaterial conditions, like organizational norms and interpersonal procedures as they intersect with machine learning models (Gutierrez Lopez & Halford, 2024). Here, they extend attention to sociotemporalities, using the case of targeted advertising to demonstrate the explanatory power of time.
Zanger-Tishler and Zhang (2025) bring attention to prediction as AI is increasingly enrolled in myriad high-stakes decision tasks. The authors trace the sociological processes that drive the formulation of the prediction problems inherent in AI, the design of predictive models, and the effects of decision outcomes when humans consult with (or rely on) machines. Their work highlights the potential of sociological analysis for the study of AI while challenging sociologists to expand their remit, considering the dynamic and multifaceted technical choices that create AI systems, the programmatic and policy choices embedded therein, and the links between AI development and its downstream social effects.
The issue closes with a short piece by Smith and Southerton (2025), meditating on “AI slop” and the nature of machine-mediated aesthetics. The authors argue that mass-produced AI art is fundamentally alienating and extractive, undermining the human element of creativity while generating profit from human-created artistic works. Analyzing examples of AI artwork, the authors provoke novel questions about the interplay of technology, creativity, material culture, emotion and economy in contemporary societies.
These works collectively demonstrate the sociological nature of AI and the related need for sociological analyses. Technological developments, no matter how advanced or innovative, cannot pierce the barrier of AI infrastructures as social phenomena. But sociology can, exposing AI’s multifaceted structural underpinnings to enable systemic intervention and change (Davis & Williams, 2026; Sloane, 2026). Computer science and engineering have thus far been at the helm of AI studies. Now, as social context comes to the fore and technical matters recede, we find sociology at its moment of ascendance.
Footnotes
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
