Abstract
Massive investments in artificial intelligence (AI) have sparked a renewed debate over its impact on how we live, learn, and work. The last few years have also seen a burst of critical sociology about AI, pushing the conversation toward a deeper understanding of structural and intersectional inequalities. Here, we offer five big ideas that highlight what is distinctive about the emerging sociology of AI.
Recent headlines highlight the use of artificial intelligence (AI) where we live, learn, and work. In early 2021, for example, the New York Times ran several feature articles on AI, including “The Robots Are Coming for Phil in Accounting,” “A.I. Is Everywhere, and Evolving,” and “Who Is Making Sure the A.I. Machines Aren’t Racist?” Like these headlines, most media coverage assumes that AI systems are transformational technologies poised either to liberate humanity or to threaten its very existence.
While AI is an important and consequential technology in numerous ways—shaping news consumption, credit scores, warfare, and much more—hyperbolic framings like these tend to fuel publicity and, in turn, justify huge investments. In June 2020, for example, the United States President’s Council of Advisors on Science and Technology recommended that federal investment in AI grow by a factor of ten over the next ten years. Big Tech companies like Amazon, Google, and Microsoft have joined the frenzy by investing heavily in AI within their firms and in partnerships with federal agencies. The result? Computational systems are assembling vast amounts of data used to reshape an array of social worlds and professions, from university teaching to policing. And while it is possible that a robot could come for Phil in Accounting, the far more pressing problems have to do with the ways that AI systems are exacerbating a host of structural inequalities.
Starting in the early 2000s, scholars in philosophy, psychology, anthropology, communications, and science and technology studies pioneered humanistic inquiry into AI and related algorithms. The main focus has been identifying errors and bias in system outputs, particularly by race or gender. Critical sociologists of AI are now entering this area, focusing not only on errors and bias but also on how group disparities can result from entrenched structural and intersectional inequalities.

The coronavirus pandemic, for example, has been highly profitable for technology firms selling exam proctoring software. Several universities rushed to implement biometric deterrent systems that commandeer the personal camera and microphone on a student’s computer and use matching algorithms to predict academic misconduct. These applications raise privacy concerns for students, and the eye tracking and body movement techniques that drive inferences about student behavior tend to be discriminatory. AI algorithms are “trained” to monitor faces and body motion with databases that primarily contain images and videos of white men. These programs have a track record of working poorly on darker skin tones or on women. They also assume exams are taken in an isolated room by students whose time is uninterrupted. Consequently, these systems tend to punish non-traditional students who violate these assumptions, like single parents who take their eyes off the screen to tend to a young child or students working in cramped quarters with their siblings during lockdown. Evidence is mounting that these applications disproportionately misidentify people of color, women, and the poor as cheaters. Some even require identification checks that “out” trans people and undocumented immigrants. Similarly, sociologists are documenting how and where AI technologies reinforce or amplify pre-existing structural inequities within schools, workplaces, hospitals, transportation systems, real estate, and police work.
As AI developments accelerate in the United States and globally, sociologists can and should bring their insights on power, inequality, and social structure into these public debates. Therefore, we present five big ideas highlighting sociology’s hallmark contributions to current and future conversations around AI.
#1: Interrogate the Meanings of Human Data
AI broadly refers to using computational infrastructure and programming code to create socio-technical systems that mimic, augment, or replace humans. AI designers draw upon an enormous diversity of techniques, including cybernetics, logic and rule-based systems, statistical and probabilistic AI, pattern recognition, adaptive behavioral models, and symbolic knowledge representation such as expert systems or case-based reasoning. Each of these subareas has unique histories and divergent standards of success and failure. This diversity is often collapsed and erased in the use of the term AI.
Over the last several years, interest and development in machine learning (ML) have surged. ML refers to computer techniques that use algorithms and probabilities to generate inferences from patterns in large data sets, rather than relying on explicit instructions or rules. ML techniques have produced several impressive results across a variety of fields. A DeepMind Technologies system, AlphaGo, drew extensive news coverage when it beat several top-ranked Go tournament players in 2016 and 2017. Arguably more impressive is that ML systems have made much headway toward solving decades-old computational challenges in areas like robotic vision and image recognition, learning by imitation, and natural language processing. Recent ML applications stand out in the natural and medical sciences too. For example, another DeepMind system, AlphaFold, far exceeded expectations by rapidly producing accurate estimates of a protein’s 3-D folded shape from its sequence of amino acids. This is considered a “grand challenge” within the field of biology, and the use of ML is expected to prove instrumental in producing new pharmaceuticals, biofuels, and other applications.
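The distinction between explicit rules and learned patterns can be made concrete with a minimal sketch in Python (our illustration; the task, thresholds, and data are hypothetical and not drawn from any system discussed in this article). A rule-based system encodes its decision logic by hand, while an ML system infers that logic from whatever patterns exist in past data:

```python
# Minimal illustrative sketch: rule-based vs. machine-learned decisions.
# The task (flagging "risky" loan applicants) and all data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rule-based system: a human writes the decision logic explicitly.
def rule_based_flag(income, debt):
    return debt / income > 0.5  # hand-coded threshold

# ML system: the decision logic is inferred from patterns in past data.
rng = np.random.default_rng(0)
X = rng.uniform(20, 200, size=(500, 2))                # [income, debt] in $1000s
y = X[:, 1] / X[:, 0] + rng.normal(0, 0.1, 500) > 0.5  # noisy past labels

model = LogisticRegression().fit(X, y)       # "learns" a rule from the data
print(rule_based_flag(80, 50))               # decision from an explicit rule
print(model.predict([[80, 50]])[0])          # decision from learned patterns
```

The point is not the particular model but where the decision logic comes from: in the first case, a programmer’s explicit rule; in the second, patterns in whatever data the system happened to be given, which is precisely why the provenance of that data matters so much.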
When AI systems use human data to make recommendations about policing, education, urban planning, and health care, problems arise. Many ML applications rely on massive amounts of digital data available through the healthcare, education, and policing sectors, along with “digital self” data collected from user interactions with smartphones, social media, online browsing, exercise apps, and security cameras. The gathering of so-called “big data” has become seamlessly interwoven into the conduct of daily life. This can be a problem when ML system designers, or their users, operate from a naive view of human data. Practitioners and users typically assume that the human data collected by computer systems are neutral and representative in their correspondence to the social world they came from.
In contrast, sociologists have long recognized that what counts as data is socialized, politicized, and imbued with inequality. We know, for example, that arrest records reflect racialized inequalities that run far deeper than the raw data points themselves. Yet they are used to train risk-assessment algorithms that police use to direct resources and criminal surveillance tactics. Sociologists who examine crime prediction and risk assessment systems, such as Sarah Brayne, have pointed out that training data come from past criminal records, themselves the outcome of racially discriminatory police practices, along with a widening array of databases collected in non-criminal justice settings. Past discrimination thus gets built into policy and decision-making about the future. The underlying problem occurs when system designers and their users ignore or miss that data purportedly about something neutral (e.g., criminal records, ZIP codes, the location of highways) are also data about socially significant inequalities (e.g., class, gender, and race inequalities, segregation, racist policing). Similarly, sociologist Taylor Cruz has shown that the electronic health records that AI systems use to assist in medical diagnosis do not merely provide information about patients’ past conditions, treatments, and outcomes—they also encode socioeconomic status and the particular ways that clinicians classified their cases to secure insurance reimbursement.
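Brayne’s point about training data can be made concrete with a toy simulation (ours, not drawn from her study; all numbers are hypothetical). Two neighborhoods with identical underlying behavior generate very different arrest records once patrol intensity differs, so a model fit to those records learns the patrol pattern rather than the behavior:

```python
# Toy simulation (hypothetical numbers): how arrest records can encode
# policing intensity rather than underlying behavior.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# Two neighborhoods with the SAME underlying offense rate (5%)...
offense = rng.random(n) < 0.05
neighborhood = rng.integers(0, 2, n)   # 0 = lightly patrolled, 1 = heavily
# ...but offenses are far more likely to be *recorded* where patrols are dense.
detection = np.where(neighborhood == 1, 0.9, 0.2)
arrested = offense & (rng.random(n) < detection)

for nbhd in (0, 1):
    print(f"neighborhood {nbhd}: arrest rate "
          f"{arrested[neighborhood == nbhd].mean():.3f}")
# Neighborhood 1 shows roughly 4.5x the arrest rate despite identical
# offending, so a risk model fit to these records would direct still more
# surveillance there: past discrimination built into future decisions.
```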
Scholars of racialized algorithms, like Ruha Benjamin and Safiya Umoja Noble, have shown that the uncritical use of human data within AI systems tends not only to reproduce, but in many cases to amplify and widen, preexisting social inequalities across race, gender, and class. This is often because AI systems treat correlations between vulnerable groups and their life chances as causal and then use that assumed causation to make decisions about future interventions. This move, of course, goes against social science’s deep recognition that correlation does not automatically equal causation. Nevertheless, these data-driven systems get used to subject communities of color and poor communities to heightened surveillance and diminished access to health care, and to justify additional police violence.
#2: Uncover How AI Myths Are Used
The neo-institutionalist approach to the sociology of organizations has long focused on how organizational actors seek legitimacy by adopting the latest technologies and managerial trends of their aspirational peers. We suggest that the current push to quickly fund, build, and implement AI systems operates much the same. Sociologists are well-positioned to uncover how myths about the future possibilities of AI get used to generate hype and investment, as well as the social problems that can result from the frenzy.
Research and development in AI has long provoked outsized predictions and alternating cycles of attention and neglect. The field is in the midst of another cycle of attention—hyped, debated, and feared. A cottage industry of futurists and economists argues that ubiquitous AI systems have ushered in a “Fourth Industrial Revolution” that will fundamentally reorganize fields as diverse as health care, transportation, investment banking, education, and political governance. Sociologists examining workplace automation processes—Ross Boyd, Allison Pugh, and Laurel Smith-Doerr, to name a few—question these extravagant claims. They paint a far more complex picture of uneven implementation and contradictory effects, especially as it concerns racial and gender equity within and across professions. Nevertheless, whatever one makes of its empirical accuracy, the mythology itself adds fuel to the AI arms race occurring among the states, universities, and firms that believe it.
The central myth of AI is that it accomplishes human-level tasks without human intervention. This myth appeals to the belief that technologies are objective and neutral, in contrast to humans, who are positioned as subjective and biased. Moreover, it promotes what science and technology scholars call “technological determinism,” a way of understanding social change wherein machines, rather than humans, create the future. Sociologists of technology tend to be skeptical about such deterministic predictions because they make it harder to see the social actors shaping how, where, and why a technology is used in actual practice. The emerging critical sociology of AI conceives the myth of transformational AI, first and foremost, as a marketing ploy to push product and promote services. Sociologist of education Rebecca Eynon has documented how educational industry leaders use the ideological promise of AI as a selling point to draw in potential customers. Steve G. Hoffman’s study of an industry-oriented academic AI lab shows how professors and graduate students coyly played up cultural fears rooted in sci-fi dystopias to generate industry interest and investments in their commercial incubator projects. Lilly Irani suggests that the belief in an impending AI transformation can boost the valuation of tech firms “when investors perceive them as technology companies rather than labor companies” (728). The societal positioning of technology as good—and better than humans—enhances these companies’ value. Irani elaborates this point in the context of crowdsourcing platforms like Amazon’s Mechanical Turk, which allow software companies to promote their ventures as tech companies while simultaneously relying on gig laborers to get the work done.
As noted by sociologist Lee Vinsel in an essay for Medium, the mythology of transformational AI raises a tricky dilemma for the sociologists who study it. Can social scientists document the social problems associated with AI without playing into a deterministic conception of technological change or managerial fads? To be clear, our point is not to suggest that new technologies are irrelevant to social change or that recent innovations in AI are nothing more than an empty fad. Rather, sociologists who study AI in actual social settings often show that introducing new technologies has complex, uneven, and sometimes contradictory effects that shape and get shaped by the all-too-human institutions they interact with. We will elaborate on this point in the following two sections. The important point here is that the myth of transformational AI, although a twice-told tale, shapes reality. Sociologists provide an important service when they critically interrogate the myths associated with AI and their relation to society, rooting out the array of symbolic politics used to generate interest and investment within and across educational institutions, firms, and governments, and to obscure relations of inequality. Investigating the political effects of such myths helps identify the unequal, intersectional distribution of benefits and harms.
#3: Expose Interlocking Systems of Durable Inequality and Structural Discrimination
Feminist sociologists, in particular, have set forth a rich understanding of interlocking systems of durable inequality across social positions such as race, gender, and class. Initially developed by Black feminists like Kimberlé Crenshaw, bell hooks, and Patricia Hill Collins, this approach shows how intersectional inequalities and structural discrimination are produced at the interactional, organizational, and institutional levels of society.
When applied to AI and computing technologies more generally, intersectionality enables critical analysis to move beyond both utopian and dystopian visions of technology, which view AI either as a catalyst for linear economic gains and social betterment or as a force of similarly unidimensional disintegration and social decay. Instead, an intersectional approach demonstrates that AI can create uneven harm across age, race, ethnicity, gender, sexuality, and class.
Anthropologists, philosophers, and communications scholars have often worked in teams with computer scientists and led the call for greater fairness, transparency, and accountability in AI applications. Much of this work has emphasized attempts to root out bias through sensitivity training of computational scientists. Though laudable, this individualistic emphasis on bias has limits. As we argue in the journal Socius, “Sociologists, along with critical social scientists…can advance the conversation on bias, fairness, transparency, and accountability by transforming it into one of inequality, hierarchy, and structural social change” (6). Further, by interrogating the ideology of objective, neutral science, feminist scholars offer epistemologies that allow for a critique of AI’s historical foundations and its racist, classist, and sexist systems of scientific categorization.
At its core, intersectionality describes how power operates through the mobilization and construction of difference at all levels of society (micro, meso, macro). As noted above, AI systems are often built on data that categorize and enumerate those same differences. Political scientist Virginia Eubanks, for example, has shown that structural harms against poor and working-class people get supercharged when automated computing systems are used to decide who gets housing, food stamps, and cash benefits. Investigative journalists from ProPublica examined a Broward County, Florida, database on the deployment of recidivism prediction software to determine “risk scores” in criminal court sentencing. The software resulted in Black defendants being wrongly flagged as future criminals at twice the rate of white defendants. In several cases, Black women were assigned higher “risk scores” and harsher subsequent sentences than were white men with much more extensive and violent criminal records.
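The disparity at issue here is, at bottom, a difference in false positive rates: the share of defendants who did not reoffend but were nonetheless flagged as high risk. A short sketch with hypothetical counts (illustrative only, not ProPublica’s actual Broward County figures) shows how that metric is computed and compared across groups:

```python
# Illustrative false-positive-rate comparison. The counts below are
# hypothetical stand-ins, not ProPublica's actual figures.
def false_positive_rate(flagged: int, total: int) -> float:
    """Share of non-reoffending defendants wrongly flagged as high risk."""
    return flagged / total

# Hypothetical counts among defendants who did NOT go on to reoffend:
fpr_black = false_positive_rate(flagged=450, total=1000)
fpr_white = false_positive_rate(flagged=230, total=1000)

print(f"Black defendants wrongly flagged: {fpr_black:.0%}")  # 45%
print(f"White defendants wrongly flagged: {fpr_white:.0%}")  # 23%
print(f"Disparity ratio: {fpr_black / fpr_white:.1f}x")      # 2.0x
```

Aggregate accuracy figures can look respectable even when the errors fall overwhelmingly on one group, which is why sociologists push the analysis down to who bears the mistakes.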
More broadly, critics of surveillance technology, like Shoshana Zuboff and David Lyon, theorize the role of AI in the emergence of “surveillance capitalism,” where the design and use of AI systems are linked to anti-democratic social norms, the erosion of online privacy, and exploitive labor conditions. This work points to problems related to ideological coercion and the targeting of political misinformation. However, it can also play into deterministic myths when it accepts that AI techniques have the capacity on their own to transform economic and political relations. An intersectional framework can deepen these critiques by carefully documenting the intertwined legacies of enslavement, settler colonialism, patriarchy, and racialized industrial capitalism, taking seriously how these foundational historical processes are reanimated in the design and use of contemporary high-tech systems. That is, intersectional analyses can show which human actors and values drive design and use while also identifying how benefits and harms are experienced based on one’s intersectional position. An intersectional approach can also help us imagine new futures in which the benefits and harms are distributed more equally.
#4: Reveal Hidden Forms of Exploitation Within Global Tech Labor Markets
Although AI is conceptualized, designed, and used by people, much of this all-too-human work is hardly noticed because myths about AI cast the technology, not humans, as the agent. We know, however, that automation technologies do not eliminate human labor but rather hide it more effectively from public view. Sociologists of work and organizations reveal the hidden forms of exploitation and expropriation that have arisen as firms use AI to reconfigure global labor markets.
Sociologists are well positioned to understand how demographic diversity among tech workers ameliorates or exacerbates existing inequalities in the systems they build and deploy. The tech industry’s limited gender diversity is well known, as the many jokes on television shows like The Big Bang Theory and Silicon Valley attest. Using an intersectional lens, Sharla Alegria’s research affirms that the high-tech industry remains a gendered space but shows that its reward structures, promotion processes, and division of labor are also deeply inflected by race and class. Technical work is divided among workers with different specializations, and people of color are less likely to be included in leadership and decision-making positions for critical design and engineering tasks.
These issues are further complicated by the rise of platform work and global labor markets. Platform companies like Uber, DoorDash, and TaskRabbit—private firms that control a platform for the on-demand exchange of services between users and service providers—lean heavily on an underemployed and highly racialized global labor force. Tressie McMillan Cottom points out that these firms also rely on the secrecy of their matching algorithms and the data they collect to shield them from democratic oversight. Benjamin Shestakofsky and Shreeharsh Kelkar draw attention to the essential role of “relationship labor” within platform-based arrangements, performed by human agents who interact with users to smooth out customer interactions with automated systems. Firms outsource or offshore much of this work to largely invisible armies of low-wage, globally dispersed, and often short-term workers who perform a range of small tasks that help ensure automated systems’ accuracy and efficiency. Mary Gray and Siddharth Suri call this ghost work because most people do not see or understand the human labor needed to make AI systems function. In short, some of the most seemingly futuristic AI capabilities are only possible when designers keep problem-solving humans in the loop, held at arm’s length through technical secrecy and labor contracts, and hidden behind platform interfaces.
Workers who perform tasks in the gig economy answer to algorithmic managers and gamified systems insensitive to individual workers’ particular circumstances. Due to laws and practices that position gig workers as individual contractors (not company employees), workers have little collective bargaining power and face significant barriers to organizing. Critical sociologists of AI have joined labor scholars and advocates to question the policies and twenty-first-century management strategies that deepen the exploitation of workers.
#5: Imagine and Create Just AI Futures
As we have discussed, sociologists of AI demonstrate that the impacts of large-scale social change are never entirely predictable. Nothing is inevitable about the kinds of AI technologies that will be developed or the uses to which they will be put. Comparative studies of AI in everyday use, like Angèle Christin’s work on how internet journalists and legal professionals use algorithm-driven analytic and assessment tools, demonstrate that people conceptualize and use AI systems differently across organizational contexts. Initial designs may constrain future use, but they do not determine it. Rather than anticipating either a luminous or a frightful future, sociologists of AI are asking a more grounded question: In what social contexts can emancipatory and just AI systems emerge? And importantly, what might just AI systems look like?
Although critique is fundamental to what sociologists bring to current debates about AI, the prominence of these technologies also presents us with an opportunity to rethink how these systems might help to forge a more just world. Algorithmic models can be designed to optimize efficiency and profit margins for a firm or the fairness and satisfaction of employees and users. Instead of creating systems to identify renters most likely to default on rent, what if we created systems that could identify landlords who are less likely to repair units or return the last month’s security deposit? Or, instead of creating systems that allow police to track residents in designated crime “hot spots,” what if we made systems that told residents where racist interventions by police were more likely to occur? While digital platforms are often run by profit-seeking corporations that exploit a precarious labor force, they can also be cooperatively run and governed in a way that empowers workers and community members. Regulatory systems are central to technological futures. Public policy profoundly shapes applications of AI and their impact on communities, workplaces, and groups. Sociological imagining of more just AI futures allows our discipline to contribute informed research that can spark inspiration and new possibilities.
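The design choice in this paragraph can be put in concrete terms. In the schematic sketch below (the fields, data, and variable names are all hypothetical), the modeling machinery is identical in both designs; what changes is whose behavior becomes the prediction target:

```python
# Schematic sketch: the same pipeline pointed at different targets.
# All fields and data below are hypothetical toy examples.
from sklearn.ensemble import RandomForestClassifier

def train_risk_model(X, y):
    """Fit a generic classifier; the politics live in what y measures."""
    return RandomForestClassifier(random_state=0).fit(X, y)

# Conventional design: features describe tenants, target is tenant default.
tenant_X = [[30, 1], [90, 0], [45, 2], [70, 0]]  # income ($1000s), prior evictions
tenant_y = [1, 0, 1, 0]                          # defaulted on rent?
tenant_model = train_risk_model(tenant_X, tenant_y)

# Flipped design: features describe landlords, target is landlord neglect.
landlord_X = [[5, 3], [0, 0], [7, 4], [1, 0]]  # code violations, deposit disputes
landlord_y = [1, 0, 1, 0]                      # withheld the security deposit?
landlord_model = train_risk_model(landlord_X, landlord_y)
```

Nothing in the algorithm prefers the first design over the second; that choice belongs to the people and institutions commissioning the system.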
National science agencies, firms, and various public-private initiatives promote AI development, and sociologists should become central players on funded research teams and grant boards. We can play a key role in limiting the harms and better sharing the benefits of tomorrow’s AI, and there is ample opportunity to do so. For example, the Social Science Research Council, led by sociologist Alondra Nelson, recently launched a Just Tech initiative to engender more equitable and inclusive technological futures. While the rise of AI invites us to look toward an uncertain future, we can make visible the decisions and assumptions embedded in AI sociotechnical systems and help develop technologies that challenge existing inequalities rather than replicate them. As institutions adopt AI systems, sociologists are poised to critically analyze their effects on social worlds.
Conclusion
Sociological expertise is needed to draw out a deeper understanding of AI’s social implications and promote more emancipatory practices. We have highlighted five big ideas for a sociology of AI. Connected and overlapping, these five ideas demonstrate that sociologists have much to offer to public debates and the burgeoning arena of AI research. This list of five big ideas is not comprehensive. This is a fast-growing area of scholarship, and we look forward to new big ideas generated by sociologists in the upcoming years.
AI systems reinforce existing inequalities and will continue to amplify them as automated processes increase in scale. We need sociologists to keep bringing their theories, methods, and imagination to the design and diffusion of AI. Social change is never set in motion by the introduction of new technologies alone; to have impact, technologies must be mobilized and take root at a society’s structural and institutional levels. Sociologists are poised to identify and challenge the structural inequalities being integrated into AI systems and to help people imagine different, more just futures.
Acknowledgements
This work was supported by the National Science Foundation under grants 1744356 and 2015845. All findings and views are the authors’ and do not necessarily reflect those of the NSF. The authors thank Jasmine Anthony and Aaron Yates for superb research assistance.
