Abstract
Recent years have witnessed a surge of interest in artificial intelligence (AI). To address AI’s growing visibility and importance, the authors edited a special collection that showcases cutting-edge sociological research on AI in areas such as health and medicine, work and labor, new research methodologies, and policy. The articles collectively position AI as a sociotechnical system, recognizing that AI integrates heterogeneous elements that intertwine the social and the technical. Each article brings different aspects of sociotechnical systems to light. The authors highlight three contributions sociologists are poised to make to the study of AI: (1) critical analysis of AI hype, promotion, and adoption; (2) empirical study of AI’s co-constitution with processes of social life; and (3) identification of avenues for structural change in creating equitable AI futures. The authors call for the development of novel research methods to study distributed AI platforms and for future research on inequalities, power, and data justice.
Recent years have witnessed a surge of interest in artificial intelligence (AI). Media headlines emphasize how AI technologies will transform the way we live, learn, and work, and numerous audiences approach advances in AI and machine learning (ML) with a mixture of optimism and caution. Investments in AI have already generated a stunning array of software applications ranging from clinical algorithms and autonomous vehicles to generative AI (GAI) tools such as ChatGPT. AI has also attracted significant attention from policymakers, advocacy groups, and activists, reflected in growing calls for greater public involvement in AI development and deployment. AI is no longer simply a technology of the future, nor is it limited to the confines of the lab or Silicon Valley: AI is here, embedded in our daily lives and social institutions in the United States and around the globe.
This special collection takes stock of these recent advances by outlining an emerging sociology of AI. We build on our earlier call for sociological research into AI and inequalities to feature scholarly work in this important area of our discipline. In 2021, we highlighted early scholarship on the politics of data, algorithms, and code and the social shaping of AI in practice (Joyce et al. 2021). We also called for deeper sociological engagement with AI, recognizing the unique contributions sociological theories and methods could bring to our understanding of AI. At that time, scholars in anthropology, philosophy, and communication dominated the conversation on data, algorithms, and AI, but where was sociology? What does the discipline of sociology have to offer to the critical study of AI? How might sociologists play a role in imagining and shaping AI futures? This special collection takes up these questions.
The publication of this special collection is particularly timely given the rise of GAI platforms such as OpenAI’s ChatGPT. When we issued the initial call for proposals in February 2023, GAI tools and large language models had yet to fully burst onto the national scene. Released to the public in fall 2022, ChatGPT and other forms of GAI would soon dominate media headlines. Reports of ChatGPT use in the classroom and workplace stirred spirited discussions about how we learn and work (e.g., Christian 2023; Tong 2023). It was also clear that sociological analyses of these developments would be needed, given what we know about inequalities built into search engines (Benjamin 2019; Noble 2018), face recognition software (Buolamwini and Gebru 2018; Grother, Ngan, and Hanaoka 2019; Kantayya 2020), and predictive analytics (Abram et al. 2021; Obermeyer et al. 2019; Vyas, Eisenstein, and Jones 2020). Our special collection strives to balance attention to the recent turn to GAI with sociological contributions to the critical study of AI more broadly.
What Is AI?
The term “artificial intelligence” has no single, settled definition.
The vagueness and ambiguity surrounding AI have social consequences.
For the special collection, we worked with a broad sociological understanding of AI as a sociotechnical system, one that integrates heterogeneous elements and intertwines the social and the technical.
Why Sociology?
Although sociology shares many theories and methods with other social sciences, sociology offers three important approaches that will advance the investigation of AI. First, sociology as a discipline has a long history of critically analyzing hype and propaganda. Investigating the day-to-day experience of people working with or using technoscientific innovations provides an on-the-ground perspective that challenges the uncritical promotion and adoption of new technologies. This critical perspective opens the space to ask, “What values are embedded in the design of a particular AI platform? Whose interests are served by its use? Who is empowered by its use? Who is negatively affected by its use? How do people use AI platforms in everyday life and at work?” AI practitioners and boosters are adept at making moral claims to shore up their work’s value. Such moral claims might assert that the new technologies will save lives, improve accuracy and efficiency in decision making, reduce costs, and/or promote the social good. Sociologists are trained to critically examine these claims, drawing on empirical methods to bring the lived experiences of AI in practice to light.
Second, sociological theories and methods identify how AI is coproduced with processes of social life in arenas ranging from medicine and criminal justice to work, sexuality, and public policy. As a sociotechnical system, AI (its code, design, and use) is always constituted by both the social and the technical. AI does not and cannot exist outside of social contexts, values, and priorities, and sociological perspectives help bring those contexts, values, and priorities to the fore. Sociologists study and explain, for example, how sociotechnical systems produce new social norms, moral codes, experts, and professions. Sociological perspectives also show how technological use is deeply intertwined with one’s sense of self and how people experience everyday life. By recognizing that data about humans are always also data about social inequalities, sociologists are well trained to evaluate how AI platforms relate to inequalities. Sociological theories and methods are poised to examine how AI sociotechnical systems may challenge or recreate systems of power, thereby creating new forms of social life or maintaining existing ones.
Finally, our disciplinary focus on social structure and its relation to inequalities brings important insights into calls for change. Rejecting the effectiveness of change primarily targeted at the individual level (i.e., we should make better choices), a sociological perspective recognizes that social change must include structural change. AI practitioners are well aware of important harms caused by AI and the need to build “more equitable” systems. For example, scholars and researchers located in corporations (e.g., Microsoft, Google) and nonprofits (Data & Society, the AI Now Institute) have both drawn attention to bias in AI. The language of bias emphasizes how ML systems may cause disproportionate harm to particular groups, stemming from unfair representation in training data and the decisions of individual programmers (Barocas et al. 2017; Crawford, Miltner, and Gray 2014). Existing efforts to mitigate bias include improved training of AI practitioners (e.g., Google’s fairness module in its ML Crash Course; Dean 2019) and the mathematical inclusion of fairness metrics within AI systems (Caton and Haas 2024). These efforts, however, rely primarily on intervening upon individual technologists or models in lieu of policy and institutional change. Sociologists, in contrast, recognize how social inequalities are problems of social structure, and that structural change is needed to build equitable and just futures.
Special Collection Overview
The Sociology of Artificial Intelligence special collection builds on the U.S. Office of Science and Technology Policy’s (2022) “Blueprint for an AI Bill of Rights,” an effort led by sociologist Alondra Nelson that emphasizes issues of access, transparency, and public engagement, as well as on our original call for sociological research on AI (Hoffman et al. 2022; Joyce et al. 2021). It creates space for new scholarship that addresses the sociological dimensions of AI sociotechnical systems, and it is the first special collection to highlight sociological perspectives on the design, use, and impact of AI. The articles cover a wide range of topics, from sex technologies to the emergence of new occupations such as data annotation to the role of sociology in shaping AI policy. Collectively, the articles demonstrate the power of classical sociological methods such as ethnography, content analysis, and surveys to investigate AI, and they offer suggestions for how to use GAI in sociology.
Health and Medicine
Three articles take up attempts to introduce AI into the health and medicine arena. Mira Vale’s article “Moral Entrepreneurship and the Ethics of Artificial Intelligence in Digital Psychiatry” follows research psychologists and psychiatrists as they navigate the new terrain of digital mental health. Referring to the state of digital psychiatry as the “Wild West,” these researchers position themselves as ethical experts who aim to define the moral boundaries of issues such as the ethical use of patient training data and clinician liability. Under the banner of ethics, these clinicians create an in-group that defines the parameters of appropriate digital psychiatry use, which both enables and constrains which topics can be considered moral ones. Through her analysis, Vale demonstrates how research clinicians become moral entrepreneurs.
Shira Zilberstein’s article “Ethical Dilemmas and Collaborative Resolutions in Machine Learning Research for Health Care” analyzes how academic AI researchers navigate the societal impact of ML platforms in the making. Dividing the world into a hypothesized technical world of ML and a “real” world of clinical practice, Zilberstein shows how ethical considerations are a collaborative, interactive affair, one in which AI practitioners rely on clinicians to define the benefits and harms of particular “real world” medical projects. As in Vale’s article, identifying ethical impact is treated as expert knowledge, here decided by clinicians who are entrenched in the medical model of illness. Because patient outcomes and clinical data are prioritized, other factors such as upstream interventions and the social determinants of health are rendered invisible. This way of viewing AI positions clinical medicine as the “social,” protecting what is understood as “technical” from scrutiny.
Moving away from academic labs, Katrin Lehner, Vera Gallistl, and Roger von Laufenberg focus on a commercial lab developing a deep learning system that aims to predict falls among older adults. Their article “Vulnerability Assemblages—Situating Vulnerability in the Political Economy of AI” examines how AI practitioners are savvy actors who make moral claims about supporting vulnerable populations—in this case, older adults in long-term care facilities—to justify their work. Standing on the moral high ground of protecting a vulnerable group, the AI practitioners generated synthetic data to fill in missing training data instead of using data from actual elders. The synthetic data were created with automated software and by AI practitioners imitating falls while wearing motion capture suits. The use of synthetic data generated many false alarms, yet it was perceived as desirable because it required less interaction with long-term care facilities and cost less than collecting data in real-world settings.
In all, these three articles demonstrate how the social construction of morality is central to AI projects and, as such, should be central to sociological analyses. Although we often do not think of AI practitioners as creators of moral economies, they are. All three articles also point to the importance of participatory design. In all three cases, patients and elders—the people who are the objects of the training data and will be affected by the system’s output—were excluded from decision making in the research process.
Work and Labor
Two articles in the collection address issues related to labor and AI. Drawing on a comparative case study of four landmark image datasets, Zhuofan Li’s article “When Being a Data Annotator Was Not Yet a Job” documents the creation of a new scientific profession—data annotators—showing how the position went from an in-laboratory, expert task to an outsourced, decentralized position in global capitalism. Li introduces the concept “repertoires of control” to call attention to how scientists choose between, make use of, and recombine available technological, organizational, and cultural models to do their work. Through analysis of scientists’ repertoires, Li shows how scientists helped drive the changes in the social organization of data annotators.
Eric Dahlin’s article “Who Says Artificial Intelligence Is Stealing Our Jobs?” introduces the concept “AI exceptionalism” to describe the belief that AI will replace white-collar workers (e.g., journalists, copyeditors, computer scientists, clinicians). If this occurs, it will be a marked change—exceptional, Dahlin argues—from previous patterns of automation, which tended to displace blue-collar workers. Developing a typology of weak or lite AI exceptionalism (i.e., AI platforms are adopted and will affect white-collar workplaces) and strong or strict AI exceptionalism (i.e., AI will replace white-collar employees, leading to unemployment) to distinguish between possible views and outcomes, Dahlin uses survey data to better understand how people perceive the threat of AI to their jobs. He finds support for weak or lite exceptionalism among respondents, as white-collar workers were more likely to report job loss from the adoption of AI in their workplaces. Although most participants expressed concern about job loss, the most vulnerable social groups (e.g., people with lower incomes, people of color, and people aged 18 to 24) reported the greatest concern, even when they did not hold the positions predicted to be affected by AI. Both articles draw on sociological theories and methods to show that the relationship between AI and labor is far more complex than simple narratives of labor replacement suggest.
Research Methods
Two articles examine the opportunities and challenges of incorporating AI within social science research methods. Thomas Davidson’s “Start Generating: Harnessing Generative Artificial Intelligence for Sociological Research” outlines several potential applications of GAI to the sociological study of text and images. Large language models draw on probabilistic representations of language (e.g., autocomplete in search engines), and social scientists may harness these capabilities for the coding of large volumes of textual data such as documents or social media posts. In addition to textual analysis, GAI may also be used to study and produce visual data. Researchers may generate descriptions of images or even create synthetic images of people or scenarios with prespecified attributes. At the same time, Davidson cautions sociologists to be aware of recognized concerns over interpretability, transparency, reproducibility, and bias. One avenue for sociologists to explore in the future is the development and use of open-source GAI over closed-source commercial models. Overall, Davidson calls on sociologists to embrace GAI and its powerful capabilities for sociological research, recognizing AI technologies will not replace but augment existing researcher capabilities.
Crystal Peoples, Paige Knudsen, and Melany Fuentes’s article, “The Use of Facial Recognition in Sociological Research: A Comparison of ClarifAI and Kairos Classifications to Hand-Coded Images,” offers a critical assessment of the use of ML algorithms within sociological research. The authors compared the performance of two popular facial recognition programs with the hand coding of race and gender information by a team of sociologists. The three coding approaches yielded strikingly different results. Hand coding of an image dataset resulted in a 62.5 percent Black and 37.7 percent Latinx sample, whereas ClarifAI classified the same dataset as only 19.2 percent Black and 0.6 percent Latinx (47 percent unable to classify), and Kairos identified the sample as 42.5 percent Black and 14.4 percent Latinx (6 percent unable to classify). These divergent results raise important questions about the risks of relying on commercial facial recognition programs and ML algorithms for sociological research. The authors argue for greater research into the social implications of generating knowledge using AI tools, particularly as these pertain to race and gender, representation, and social inequality.
Tech and Identity
AI is central to how individuals construct their sense of self and their sexualities. Kenneth Hanson and Hannah Bolthouse’s article, “‘Replika Removing Erotic Role-Play Is Like Grand Theft Auto Removing Guns or Cars,’” examines Reddit posts on AI chatbots and the sex tech sector. Replika is a conversational text-based application that allows users to specify the desired personality and appearance of custom avatars. The platform originally featured an erotic role-play (ERP) option until its abrupt removal following a legal ban in Italy. Hanson and Bolthouse analyze online discourse about the removal of sexuality from the AI platform, showing that chatbot users attributed such changes to macrolevel social forces (e.g., sex negativity, third-party interests). Their analysis demonstrates that sex tech, like any other tech, must navigate international laws and practices. Although Reddit posters overwhelmingly supported the inclusion of ERP as part of AI companionship, the company’s desire to compete in the global chatbot market led it to replace sexually explicit content with nonexplicit topics. Some users, moreover, questioned the morality of a private company profiting from vulnerable customers (including marginalized groups such as sexual minorities and people with disabilities). Emphasizing that there is no single construction of AI and sexuality, Hanson and Bolthouse show how the merging of sex with technology may generate controversy among AI users and supporters across global contexts.
Policy Implications
Tina Law and Leslie McCall’s article “Artificial Intelligence Policymaking: An Agenda for Sociological Research” rounds out the special collection by calling for sociologists’ participation in shaping equitable governance of AI. Policy consideration of AI is a growing domain following high-profile efforts from the U.S. Office of Science and Technology Policy and the National Institute of Standards and Technology. Law and McCall show that U.S. AI policies highlight either risks and safety or matters of equity. Corporate political power from tech companies influences much of the current debate on AI, often pushing policy toward a narrow focus on risks and safety. Tech companies draw political power from the brokerage of data, deployment of tactics such as strategic obfuscation and aligned action, and reconfigured political relations among investors, workers, and consumers. Sociologists can counteract these forces by reframing AI as a matter of equity and public interest, asking questions such as the following: How does AI interact with social structure? How do different social groups define and advocate for inclusion and equity in AI? What constitutes public information versus proprietary data? How can democracy be practiced and affirmed through AI governance? The authors call upon sociologists working in the academy, the private sector, and the public sector to coordinate their efforts in centering equity within AI governance and policy.
Building AI Futures
Moving forward, we encourage sociologists to advance the sociology of AI in two important ways. First, we highlight the need to develop new ways of studying AI in practice. Because of the distributed nature of AI, in which many programmers may contribute to a system at once or over time, our current methodological toolkit will need to expand to imagine novel ways of studying AI as well as to consider new sources of data (e.g., the records AI practitioners produce when documenting model-building decisions). Much AI work also happens inside corporations, making access for outside researchers difficult to achieve. Several articles in the special collection demonstrate creative ways to work around these barriers by examining sources such as social media posts and historical documentation. Others synthesize qualitative data gathered from multiple sources (e.g., interviews with practitioners and users, multisited fieldwork). These offer promising directions for studying AI as a complex sociotechnical system; we encourage future work to continue expanding on these methods to address the challenges of studying distributed, corporate systems. We also invite greater engagement between sociologists who approach AI as an object of study and those who treat it as a methodological device, recognizing that the full implications of AI for sociology reside at the intersection of these two approaches.
Finally, we continue our call for sociologists to keep issues of power, inequality, and social justice front and center in the design of future research projects. Sociologists are well attuned to the power dynamics created by social structures and institutions, and they recognize that technical advances in data, algorithms, and AI cannot alone undo durable systems of entrenched inequality. As sociologists, we have the expertise needed to critique existing AI sociotechnical systems. Our theories and methods enable us to examine how sociotechnical systems may produce inequalities as well as how new technologies may transform traditional relations of social power. Moving beyond critique, we also have the tools needed to help imagine more just, equitable AI futures. How can we broaden who benefits from AI? Which systems should we create for human growth and transformation? How can we prioritize the most marginalized in the policy and practice of AI, despite the social and technical challenges of doing so? We hope this special collection on the sociology of AI moves us beyond headlines and hype to spark innovative sociological work toward equity and justice.
Acknowledgements
We would like to thank the authors, reviewers, and editors who made this special collection possible.
