Abstract
This article presents a sociological dialogue between six researchers who specialise in different sociological subfields. Each researcher explores the possible consequences of generative AI within their specific area of expertise. More concretely, the article develops insights around directions in social theory, the political economy of intellectual property, matters of identities and intimacies, evidence and evidentiary power, racial and reproductive inequalities, as well as work and social class. This is followed by a collective discussion on six interconnected themes across these areas: agency, authorship, identity, visibility, inequality, and hype. We also consider our role as cultural producers, understanding our reactions to generative AI as part of the empirical, theoretical, and methodological shifts this knowledge controversy engenders, as well as highlighting our duty as critical sociologists to keep the knowledge controversy about generative AI open.
Keywords
Introduction
Examining generative AI from a sociological perspective involves exploring how it interacts with societal structures, cultural norms, human agency, and institutions. Generative AI is not just a tool; it is a socially embedded phenomenon shaped by and shaping the social world. This essay delves into the sociological dimensions of generative AI by examining its impact on labor, power dynamics, culture, ethics, and identity.
Sounds plausible, right? We authors had joked that we ought to ask ChatGPT, ‘What is sociological about generative artificial intelligence?’ If the reply was good enough, we scoffed, we should include the generative AI tool as a co-author. The above quote was from ChatGPT’s introductory paragraph, following a sentence in which it called its ilk ‘a transformative technological innovation with profound sociological implications’.
The six of us, all sociologists working on sociologies of knowledge in different ways at the same department, had just been talking about the stylistic and conceptual coherence of this piece. We collaboratively run a ‘Sociology of AI’ research group, in which we read and think together, sharing an imperative about keeping the knowledge controversy about generative artificial intelligence open. Knowledge controversies happen when new participants, new methods, new data, and/or new norms confront established knowledge-making practices and unsettle them. As much as this unsettling can be uncomfortable and decelerating, it can also be highly productive in creating opportunities for reflexivity and change around previously-sedimented norms and power relations (McPherson et al., 2020; Whatmore, 2009). Generative AI is an all-of-the-above knowledge controversy for all kinds of knowledge production, introducing big tech actors, large language models, mashups, hallucinations, and promises of efficiency into taken-for-granted ways of knowing. Its novelty inspires inspection and reflection, and its clash or match with how we already know awakens us to who and what shapes our knowledge.
Back in the Sociology Seminar room, the group of us collectively wondered: how were we going to stylistically bring together our disparate contributions for this article on how generative AI intersects with sociologies of knowledge? How would we meld each contribution, written in our own unique styles, on our own areas of expertise, and about the key questions we each thought should headline the sociological agenda? We vacillated around homogenisation – whose style would prevail? Should we write in first or third person? With a grin, one of us prompted ChatGPT.
The joking stopped; a sort of pallor descended on the room when we read ChatGPT’s product, manifesting on the screen like ticker tape. The moment was what autoethnographers call an ‘epiphany’ (Ellis et al., 2011). Though the writing style was milquetoast, ChatGPT had chosen to structure its essay around many of the issues and concepts we had also chosen for our article draft. Why? Was it because the generative AI tool had trained on our own writing and created a pale reflection of us, of our discipline? Or… was it because we were not nearly as distinctly, creatively, humanly original as we thought we were?
The moment was a mini-crise about what we, as sociologists, bring to the production of knowledge that sets us apart from the machine. The machine was closer than we expected. Experiencing this banal proximity, we moved sharply away from homogenisation.
In doing so, we deliberately joined a tradition in the creative disciplines of stylistically underlining our humanity in the face of perceived existential threats. Western artists jeopardised by the mechanical reproduction of the industrial revolution responded by cultivating the idea of individualistic artistic genius and inimitable art (Becker, 1982). Critical scholars concerned about commercial culture’s standardising and hypnotic effects on the masses produced wild texts to rouse readerships (e.g. Horkheimer & Adorno, 1972). Writers whose experiences and epistemologies are excluded by the mainstream academy use other ways to write them into existence, such as Black feminist thinkers who express themselves in poetry, song and literature, as well as queer writers who queer academic writing (Collins, 1989; DiGrazia & Boucher, 2005). After all, as orthodox words not only represent but also contain orthodox ideas, critiquing the status quo in striving for a better world often requires using unorthodox turns of phrase (Butler, 1999).
For our part, we argue that generative AI can never generate critique, a cornerstone of our sociologies, since it reproduces orthodoxies. Whilst it may be able to analyse a phenomenon, it does this by remixing that which has frequently come before. In other words, it iterates the already known. We contrast this with critical sociology and neo-pragmatist philosophy of social science, broadly understood (Baert, 2005; Bernstein, 2010). They both differ from AI in a variety of ways. First, their practitioners conceive of critique in terms of imagining a better future, assessing how the present falls short and proposing ideas to reconcile the gap between the two. To achieve this, sociologists use our sociological imaginations, not only as Mills (1959) posits it with respect to connecting the everyday with powerful structures, but also in terms of imagination as a liberatory practice – the capacity to open up space in the world for the emergence of radical difference (Benjamin, 2019a). Second, by juxtaposing an imaginary future with the current state of play, sociologists are able to rearticulate both the present and the past, confronting hitherto taken-for-granted presuppositions and the hidden workings of social structures. With the analytical lens of structure, sociologists seek to make visible the invisible, to say that which has heretofore been unsaid. Third, underpinning critique is dialogue, whether the collaborative work done in the seminar room or the conversations between researchers and informants performed in immanent critique to identify normative ideals (Stahl, 2013). Dialogue produces ideas that are more than the sum of the parts brought to the conversation. Furthermore, critique often goes hand in hand with care, whether care for those caught up in the critique or the care that drove the critique in the first place (Puig de la Bellacasa, 2011).
And so, to our argument that generative AI cannot critique: a technology oriented to the past rather than the future cannot imagine. A technology that can only see what has been shown hitherto is badly placed to render visible alternative futures. Generative AI might do dialogue, but its nature as passive and prompt-reactive – rather than truly generative – cannot give dialogue its critical due. Generative AI can simulate care, but it cannot be driven by care.
Reminded of our humanity and of our discipline, how we should approach this article became clearer. We engaged in sociological dialogues starting with our individual, empirically grounded critical imaginings about how generative AI intersects with sites of knowledge production. These purposeful, playful, and interactive dialogues are shaped by cultural and material structures, but their outcomes are not predetermined. They contain multiple variables; in this case, our conversational variables included each other and our expertises, the technologies of generative AI, and the writing we have done together and apart. In our exchanges, we considered the cultural production of knowledge in each of our subfields and imagined how the rise of generative AI raises big questions for these subfields. We understand generative AI as a technological phenomenon, as a bundle of tools capable of producing realistic cultural products – such as text, images, and videos – that would have traditionally required human labour, but also as a cultural phenomenon, a site of meaning-making and symbolic struggle (Bail, 2024).
Instead of flattening the idiosyncratic textures of our six critical imaginings as generative AI does, we let each stand as stylistically distinct and individually authored. Patrick Baert, who specialises in social theory and the sociology of intellectuals, asks: What are the broader theoretical issues involved in generative AI? Meredith Hall, who examines the political economy of intellectual property, questions: Will generative AI be the end of IP? Shannon Philip, whose research focuses on the construction of femininities, masculinities, and sexualities, wonders: How does generative AI shape the identities and intimate everyday lives of young people in the Global South? Ella McPherson, who studies the contestation of truth-claims in the digital age, asks: What does the rise of generative AI mean for evidence and evidentiary power? Isabelle Higgins, who investigates the reproduction of racial, reproductive, and digital inequalities, considers: How are methodological and ethical questions about ‘power’ in sociological research affected by the ‘generative’ functions of AI systems? Robert Dorschel, who researches digital labour and social class, queries: What happens to work when it does not disappear due to generative AI? We write from our own experiences, something generative AI cannot do. We write independently but also in dialogue, and our piece includes a collective reflection on six themes that cut across our critical imaginings: agency, authorship, identity, visibility, inequality, and hype. We also consider ourselves as cultural producers and see our reactions to generative AI as part of the empirical and theoretical shifts this knowledge controversy engenders.
The six themes that link our individual critical imaginings bring us back to our framework of generative AI as a knowledge controversy. These themes, we argue, are key entry points for analysing not only this knowledge controversy, but any knowledge controversy. We identify these themes so as to keep the debate around generative AI open. We are pushing back against the commercially lucrative sense of inevitability that accompanies generative AI’s steady creep into our everyday lives. We are critiquing the closure that such inevitability hastens, a closure we believe would cement power relations around generative AI in favour of big tech without giving space for the citizens and consumers it targets to imagine differently. In other words, the practice of critique requires the knowledge controversy to remain open. In contrast, technological determinism and other forms of fatalistic discourse close the conversation about future possibilities.
Six Critical, Sociological Imaginings About Generative AI
What are the Broader Theoretical Issues Involved in Generative AI?
Firstly, AI is relevant to some of the core theoretical debates in sociology that centre around the scope and limitations of human agency and its relationship to external constraints. This debate has historically been articulated in terms of dichotomies – agency versus structure, the individual versus society, freedom versus determinism, and so on – and it has, for a long time, focused on the constraining effects of non-technical artefacts. Whilst, in the last few decades, critics had already pointed out the significance of technology or indeed the difficulties in disentangling the social from the material (Latour, 2007), this has, within sociology, on the whole remained a minority view, especially when compared to science and technology studies. AI has arguably altered this configuration in an unprecedented fashion in that, as a technology, it has the potential to significantly compromise notions of human agency and freedom due to its remarkable capacity to monitor, nudge and guide people’s behaviour. In other words, it can enable various actors, ranging from the state and tech-companies to businesses and possibly the AI-model itself, to limit the scope of individuals to operate autonomously and deviate from specific norms. This is most likely to affect how sociologists conceptualise agency-related questions and might further undermine assumptions of the primacy of the social within the discipline. At the same time, some sociologists might wish to question the residual technological determinism that often seeps into these well-rehearsed arguments about the impact of AI, which are indeed also part of the hype generated by the industry’s key spokesmen.
The second issue relates to agency and authorship. For several centuries, people have, especially in the West, become accustomed to the notion of a fixed subject, a human creator responsible for artistic, scientific or literary products. Historians have pointed out that it has not always been like this – up until the Renaissance, the human agent played a less central role because the religious realm was considered to override other spheres, which explains, for instance, the absence of signatures on paintings and, in some historical contexts, the uncertainties as to the authorship of texts. Indeed, it is only roughly since the 16th century that we have been able to trace the human subject responsible for most creative products (Burckhardt, [1860]1990), although some historians see this transition as less abrupt (Huizinga, [1919]1996). There might have been attempts by philosophers, especially of a poststructuralist bent, to question the fixity of the creator – summarised in the provocative and widely used dictum ‘death of the author’ (Barthes, 1967) – whilst sociologists have been quick to point out the significance of the collective over the individual (the social nature of the subject or teamwork), but outside the academic realm people have been operating with the idea that fixed subjects are connected to creative products. With AI, this has changed rapidly in that it has become more difficult to identify a fixed subject. Think of AI-generated art, the role of AI in scientific discovery or its assistance with writing – these are all examples that show how our deep-seated assumptions about the fixed subject have been questioned. This will also have broader societal ramifications, not least in relation to copyright and legal responsibility.
Thirdly, AI might pose further challenges to our capability as human beings to identify the veracity of information we receive and to distinguish truth from falsehood. Nearly half a century ago, postmodern sociologists such as Jean Baudrillard ([1981]1994) argued that, in the era of late capitalism with its consumerist logic, the prevalence of simulacra (copies of the real thing) had led to a state of hyper-reality whereby individuals find it increasingly difficult to distinguish between reality and copies of the real. Whilst at the time critics of postmodern sociologists questioned their hyperbole and apocalyptic rhetoric and deplored the lack of systematic empirical evidence for their claims, the technological revolution of AI seems to provide empirical confirmation of Baudrillard’s theory, arguably confirming, if not intensifying, the phenomenon of hyper-reality. The widespread use of words that indicate the made-up reality is indicative of its contemporary significance: ‘hallucinations’ and ‘deepfakes’ (fabricated recordings or videos) have now become part of our everyday vocabulary. The technological progress made in this area in recent years has been rapid and has the potential to spread uncertainty as to the veracity of claims in a variety of areas, notably in the political and judicial realms.
Will Generative AI be the End of Intellectual Property?
Sociology’s ambivalence concerning the ethics of IP stems from at least three factors. First, while IP is a central focus in legal and economic scholarship, it remains relatively uncharted territory within sociology (Ford, 2021; Hall, 2023). Second, academics themselves are deeply imbricated in dynamic and, at times, competing moral economies of creativity, grappling with how to balance originality and efficiency in their writing, research, teaching, and assessment practices (Bail, 2024; Eichhorn, 2006). Third, the rise of generative AI has amplified pressing concerns about the power of IP to foster, protect, or stifle innovation while providing few definitive answers.
This last issue is arguably the most beguiling. Put simply, generative AI poses an existential threat to the global IP rights regime. In shifting the perceived locus of control from humans to machines, this technology vexes both the foundational principles and the raison d’être of the system. Under existing copyright and patent law, the classification of inventors and authors is explicitly limited to human agents (WIPO, 2024). The rationale for this restriction lies in the traditional justification of IPRs as incentives that promote ‘the Progress of Science and useful Arts’ by providing innovators with economic rewards that would otherwise be unattainable due to the non-rivalrous and replicable nature of knowledge products. However, as AI does not require such enticements, it eliminates the need for socially costly monopolistic protections.
This definitional threshold also means that individuals and companies that rely on generative AI to create new forms of artistic expression or achieve scientific breakthroughs would currently find their work ineligible for IPR protections. This exclusion is no trivial economic matter. The strategic value of IP to corporations has exploded, transforming the late modern economy into a ‘brave new world’ where ‘wealth mainly lives in intellectual property’ (Birch & Muniesa, 2020; Foroohar, 2017).
Although there may be broad consensus about IP’s immense value to the global economy, there is less agreement about whether this is a good thing for society. Critics contend that IPRs are tools of power and control that disproportionately benefit elites (Christophers, 2020; Pistor, 2019). This has enabled unprecedented wealth accumulation via knowledge extraction and market dominance, consolidating vast cultural, scientific, and technological resources, often to the detriment of small creators, workers, and ‘developing’ nations (Boyle, 2003; Chang, 2002; Rotta, 2024). At the same time, IPRs are also being used to safeguard smaller creators from the threats posed by emerging technologies like generative AI. In 2023, for example, a group of visual artists and illustrators filed a class-action lawsuit alleging that AI image generators trained on their artwork did so without consent or compensation (Nguyen & Mateescu, 2024).
The clash of AI and IP represents a striking manifestation of the crisis tendencies of capitalism presaged by Marx – a system beginning to cannibalise itself. Perhaps the most fitting analogy for this transformation is the ouroboros, the mythical serpent that consumes its own tail. Nevertheless, history also shows that legal frameworks can adapt to societal shifts and that capitalism is nothing if not resilient. Before the 20th century, intellectual property was generally regarded as inalienable from the individual worker in U.S. legal thought. Today, however, the legal fiction that corporations can hold authorship or inventorship status is essentially uncontested, as is the ‘work-for-hire’ doctrine, under which employees cede their IPRs to their employer in the terms of their contract (Fisk, 1998).
While crafting new legal frameworks that better assimilated AI-generated content would undoubtedly introduce new challenges and complexities, it is not inconceivable. However, granting AI the ability to hold IP rights risks exacerbating economic disparities by further concentrating creative capital rather than fostering the democratisation of innovation. At the same time, it becomes difficult to see how artists, writers, and inventors could sustain their livelihoods in a world without such protections.
These seismic shifts unsettle the economics and ethics of creativity; they also offer sociologists a unique opportunity to shape the policies governing generative AI. This is a task for which the discipline is well suited. Studies of art and science have long shown that decisions about creativity, ownership, and fairness are rarely straightforward, transparent, or free from historical context and social bias and often erase the roles of communities and institutions – collaborators, schools, families, and the state – that shore up the creative labour of individuals (Becker, 1982; Merton, 1968; Sekula, 1983).
Generative AI not only exposes the limitations of the systems that birthed it but also invites us to think critically about how we recognise and reward novel thought. Rather than being apologists for a seriously flawed IPR regime – contorting existing legal and policy frameworks to accommodate or reject AI – we are presented with the opportunity to confront urgent questions that tie our fraught histories of property and ownership to the future of intelligence. One way or another, this moment marks a turning point in the knowledge economy’s status quo. What it asks of sociology is no less revolutionary – a radical reimagining of the institutions that govern creativity and innovation in ways that centre the collective good and safeguard the public domain.
How does Generative AI Shape the Identities and Intimate Everyday Lives of Young People in the Global South?
To begin with, in urban contexts of the Global South like Delhi and Johannesburg, high levels of economic, social, and cultural inequalities profoundly shape the everyday lives of young people. In such a context, one young working-class man in Delhi called Chintu explained to me, ‘I like the English prompts that the dating apps provide, because my English is not so good, and you have to write in English, these little prompts make it easier to talk to girls’. For several young men like Chintu who are anxious about their linguistic and social capital in communicating in English on dating applications, the automated prompts and suggestions generated through AI technologies are a helpful tool in furthering their sense of agency and confidence in communication for the purposes of intimacy.
Likewise, one young upper-middle class woman called Mary in Johannesburg explained to me, ‘The algorithms know me! I think it can work out who I like and don’t like, so I am finding that there are sometimes very good matches… it’s safer to date such guys I think’. In Mary’s case, the apps and their suggested matches filter profiles in keeping with her algorithmic data and recorded preferences, and this in turn allows her to interact with men who match her requirements for dating purposes. In so doing, the app reproduces inequalities of class; however, when viewed intersectionally from a gendered perspective in the context of Johannesburg, with its high urban inequality and gendered violence, these generated matches also create a sense of safety for women like Mary.
Specifically thinking about the generative aspects of dating apps, one young woman in Delhi called Rupal reflected on the ways in which image-enhancing AI tools on dating apps objectified and racialised women’s bodies. Rupal explained, ‘The automatic beauty filters on the apps are about making Brown girls look white! That’s their whole point! If you use those features, it makes your lips redder and your skin whiter… I look like a totally different person…and of course I’ll get more dates with those kinds of pictures!’ In an already highly gendered, sexualised, and racialised sphere of dating applications and digital publics, these dynamics of objectification and racialisation are enhanced by generative AI features on dating apps, amplifying these inequalities in already highly unequal contexts of the Global South.
Similarly, a poor young Black gay man in South Africa called Blessing explained to me that in his township of Alexandra where electricity and internet access were scarce, the dating apps were detrimental to his intimate everyday life. Blessing explained, ‘Already if you are gay in South Africa, it is difficult to date, and now all the gays are dating online, but I don’t even have data in the township, so how will I ever date! I think these apps have ruined my chances really!’ In the context of basic resource scarcity and a deep digital divide in the Global South, dating apps potentially harm young men like Blessing by entrenching and reproducing inequalities in new and complex ways.
To offer some brief concluding remarks, in this section I have explored some of the many possibilities and dangers of generative AI in relation to identities and intimacies in an uneven global context. I would argue that being excessively techno-phobic or techno-optimist about generative AI is complicated when we look at the global margins as well as marginalised forms of gendered and sexual living. Dating apps and generative AI can offer both opportunities for gendered agency and wellbeing, as well as entrench forms of digitally mediated inequalities around various identities. Hence we need more research that keeps critically engaging with generative AI, intimacies, and identities, whose effects remain complex and uneven across the globe.
What does the Rise of Generative AI Mean for Evidence and Evidentiary Power?
Evidentiary institutions are governed by evidentiary epistemologies, namely methodologies for constructing and evaluating facts, and they regularly encounter counterevidence. Competing agents often struggle over the evidence that evidentiary institutions attend to and make, as different agents win or lose as a result of this evidence’s effects. Struggles occur around evidentiary epistemologies, too, as the choice of one or another methodology also has winners and losers. For example, evidentiary institutions have weathered debates around quantitative versus qualitative methods, subjectivity versus objectivity, big data versus small data, and civilian versus professional witnesses.
The generative AI knowledge controversy introduces new struggles all along the chain of the creation, communication, and contestation of evidence. Generative AI has been able to enter these evidentiary institutions in the first place because of the alluring promises it makes to labour-intensive sectors; creating evidence is complex and requires time and expertise, so any claims to efficiency and convenience can cause evidence-workers to prick up their ears. We need to look no further than the searchable page of custom versions of ChatGPT made by paying users for particular purposes. Among the ‘Top Picks’ are a GPT that helps us ‘do hours worth of research in minutes’ and another that lets us ‘effortlessly design anything’ (OpenAI, 2025). The sociology of AI, and the sociology of media and media studies before that, have shown us time and again how the normative promises of new technologies have been Trojan horses for other values that can undermine our own, from objectivity concealing racism and sexism to productivity only made possible through inequality (e.g. Mansell, 2010; Noble, 2018).
One of the problems for evidentiary institutions as they evaluate whether to incorporate generative AI is that the normative upside is clear, spelt out through marketing materials making promises of efficiency, while the normative downside is obscured, either deliberately or because of the fog of novelty (Benjamin, 2024). We see evidentiary institutions working out this balance in real time, such as the UK Department for Science, Innovation & Technology testing AI-supported versus exclusively human evidence reviews against metrics of speed and quality (Egan et al., 2024), or the UK Courts and Tribunals Judiciary advising judicial office holders to evaluate their potential use of generative AI on the grounds of accuracy, bias and security (Courts and Tribunal Judiciary, 2025). In the human rights sector, practitioners debate whether generative AI images can be used to ethically represent violations while protecting victims’ identities, or if these images’ fabricated nature undermines the sector’s evidentiary authority. The academy’s stances range from methodological optimism around the efficiency and accessibility of computational methods to caution about generative AI’s threats to ethics, the environment and eureka moments (McPherson & Candea, 2024).
In all of the above, evidentiary institutions are getting better at assessing what generative AI implies for the quality of their evidence; they are working their way through the knowledge controversy to a place where their knowledge production practices become more settled, with or without generative AI. Less present, however, is their analysis of evidentiary power. With the rise of genAI, who gets access to evidentiary power? And what happens to the power of evidence?
If we assume the primary motivation for evidence-workers’ adoption of generative AI is efficiency, this new technology impacts the quantity of evidence they can produce. In other words, people who can use generative AI have more chances to get more evidence taken up by evidentiary institutions, provided that these institutions’ evidentiary epistemologies either allow or cannot detect genAI. Given the political economy of dominant commercial genAI models, however, this opportunity is not equally available to everyone. ChatGPT is available in only 50 languages – out of the more than 7,000 in the world. Parent company OpenAI’s website declares, ‘The model is skewed towards Western views and performs best in English’ (OpenAI Help Center, 2024). Globally, women are 25% less likely to use generative AI than men, which is not reducible to a digital divide problem but also reflects exclusionary design (Otis et al., 2024). Harmful content reliably alienates users, and minoritised users are much more likely to be its target (Amnesty International UK, 2018). Such alienation is inevitable without design features that disincentivise or eradicate this content. On ChatGPT, these features are only available to some users; its website states, ‘Some steps to prevent harmful content have only been tested in English’ (OpenAI Help Center, 2024). So, we can see that access to evidentiary power deriving from commercial genAI is neocolonial and gendered, as well as classed: $200 a month gives access to the most sophisticated version of ChatGPT (OpenAI, 2024).
Reimagining genAI according to design justice principles – which centre equality and community expertise – will reduce inequalities around access to evidentiary power (Costanza-Chock, 2020). But it will not eradicate even more fundamental threats to evidentiary institutions. One is from the challenge genAI poses to these institutions’ evidentiary epistemologies, and not just from genAI-enabled proliferations of digital fakery. The ‘dead internet theory’ posits that most internet content is AI-generated (Walter, 2025). Our overexposure to genAI-enabled digital fakery may mean our trust in evidence declines. Generative AI thus creates a verification tariff for evidentiary institutions, as trafficking in evidence becomes even more expensive in time and resources due to higher barriers to belief. Furthermore, we may believe less in belief, and thus evidentiary power through evidentiary institutions wanes.
I can and must go one step further. GenAI knowledge controversies are consuming a lot of evidentiary institutions’ attention. The timing could not be worse. Academic freedom around the world is in significant decline, as is the rule of law, and attacks on human rights defenders are escalating (Amnesty International UK, 2025; Lott, 2024; World Justice Project, 2024). Rapid technological change that permeates whole sectors can feel like a ‘shock and awe’ campaign – and it can have the same effect. These sectors become so overwhelmed that they are powerless to resist other strategic attacks designed to eradicate them (Ullman & Wade, 1996). We cannot allow genAI knowledge controversies, important as they are, to distract from today’s knowledge controversies that are existential for evidentiary institutions and the evidentiary power – indeed, counterpower – they enable.
How are Methodological and Ethical Questions About ‘Power’ in Sociological Research Affected by the ‘Generative’ Functions of AI Systems?
I turn illustratively to an example from my empirical research, which examines how internet design and use affect the adoption of children into families in the USA. State agencies share personal information about children deemed ‘hard to place’ in public ‘waiting child listings’, which are purported to increase the chance of a ‘successful’ adoption placement (Higgins, 2024). When prompted, ChatGPT can generate ‘waiting child listings’, and the characterizations of children in these outputs vary depending on the prompts. The racial identity and types of trauma associated with the ‘waiting child’ vary with the geographic location written into the prompt – when asked to generate a listing for a child from a particular urban area, for example, ChatGPT generates text about a child of colour affected by what it terms ‘community violence’. Though there is no current evidence that government agencies are using generative AI to create textual descriptions of children, some photographs of children on these sites have, in recent years, been replaced with what look like AI-generated images.
One way of ‘decoding’ such outputs is to contend that intersectionally racialised biases are ‘encoded’ into the generative AI model itself. The model is likely trained on pre-existing ‘waiting child listings’ scraped from the internet, which interact with deep learning algorithms that have also been trained on racially biased data about specific geographic locations. Scholars have documented similar racialized outputs in other contexts, labelling these as ‘algorithms of oppression’ (Noble, 2018) or ‘coded bias and imagined objectivity’ (Benjamin, 2019b). This critique is extendable to my research context. More broadly too, media and cultural sociologists are well versed in considering how discourses are ‘encoded’ and ‘decoded’ in popular media representations (Hall, 1980). But do sociological projects concerned with making visible what Hall (1992:285) terms ‘the trace’ of structural realities within cultural representations need to be updated to contend with generative machines? In these cases, the cultural representations, formulated as machine outputs, do not ‘pre-exist’ in cultural contexts but are generated in response to the input from the user of the system. In this case, the generation of ‘waiting child listings’ relied upon me, as part of my research practice, prompting the generative AI system. Thus, sociologists have to recognise that, in analysing generated material they themselves have prompted the AI system to create, they are contributing to both the ‘encoding’ and the ‘decoding’ of meaning within generated content; this complicates the idea that sociologists might critically analyse forms of visual media and culture which have been created by others, and thus already pre-exist in defined empirical contexts.
These methodological questions are also fundamentally ethical. In this empirical case, descriptions of real children’s bodies and personal histories, already non-consensually created and shared online, are rearticulated, rewritten and recombined by generative, probabilistic LLM technologies, which hold no capacity for semiotic distinction or understanding. The profit motive of the companies that develop such technologies renders these ethically ‘grey’ questions about how technologies use intimate and personal data about people living in the world more difficult to explore (Roberts, 2016). To complicate matters further, both the ‘black box’ nature of some machine learning systems and the discourse of AI as ‘a black box’, which reifies un-explainability (Pasquale, 2015), make the relationship between specific pieces of training data and the outputs that such machines generate difficult, if not impossible, to trace, though researchers have found that some models do ‘memorize individual images from their training data and emit them at generation time’ (Carlini et al., 2023). This insight further raises the ethical stakes with regard to the AI-generated ‘waiting child listings’, where real children’s personal information could be reproduced almost exactly by diffusion models such as DALL-E 2 and Stable Diffusion.
Such realities lead to a related set of reflexive questions for sociologists. What is our ethical responsibility, when seeking answers about the function and effect of LLMs means that we must work with the technologies we aim to investigate? Do we conceptualize these technologies as producers of cultural artefacts, or as agents or interlocutors? How do we grapple with questions of consent and agency of the people whose data the machines have been ‘trained on’, if we recognise that individual pieces of training data within such systems cannot be identified or redacted? Such a reality renders current sociological notions of both ‘informed consent’ and ‘research participants’ difficult to engage with.
These questions highlight some of the many ways that questions of sociological method and ethics are implicated in both the design and everyday use of generative AI technologies. As sociologists, we are well versed in productively interrogating the taken-for-granted relationalities implicated in everyday life. We may, therefore, be well placed to turn this lens towards the realities of interacting with AI systems, as either ‘prompters’ of those systems or as individuals who have non-consensually contributed our data to the ‘training’ of such systems. These insights will likely be of value both within and beyond our discipline, especially if we are able to highlight how specific assemblages contained within such AI technologies place groups of people and their personal information and prompting actions in relation to one another in new and unexpected ways.
What Happens to Work when It Does Not Disappear due to Generative AI?
One promising avenue for such inquiry is in examining the new occupations that underpin the thriving AI industry. Entrepreneurs and tech leaders typically do not elaborate on the various workers that their firms rely on; they prefer to present new technologies as the fruits of individual genius. This narrative serves not only their egos but also fulfils the function of attracting financial investments, which, in turn, raises their companies’ stock valuations (Irani, 2015). Yet, in a sociological sense, AI systems are never autonomous. They rely on a myriad of low- and high-paid workers to get built and to keep running.
Low-paid workers, for instance, are heavily employed for data annotation, a labour-intensive process involving the labelling and categorization of vast amounts of raw data, necessary to help machine-learning algorithms make sense of the data they process (Miceli & Posada, 2022). High-paid tech workers, on the other hand, are typically responsible for shaping and programming software and algorithms, designing the user interfaces for AI applications, and analysing big data (Dorschel, 2022). The emergence and growth of both of these occupational groups challenge the narrative of job scarcity in the age of AI. But the two groupings are also sociologically significant because of the unique forms of power they command. Firstly, AI workers hold inscription power: they inevitably imprint their worldviews and classification principles into the digital technologies they build. However, the extent of this power – and the ways it is exercised – differs greatly across low- and high-paid occupational groups. Secondly, AI workers wield infrastructural power: their labour is foundational to a wide range of industries that increasingly depend on AI technologies. Without the service and knowledge work performed by these digital labourers, the smooth flow of information contemporary capitalism is increasingly dependent on would abruptly come to a halt.
Of course, we cannot ignore the elephant in the room: how generative AI will impact existing occupations and professions. But this larger puzzle should not be reduced only to the question of automation. One way forward is to bring more cultural sociological perspectives to the study of the nexus of AI and work. This would allow us to address underexplored questions around how members of existing occupations relate to AI and how it affects their professional identities. What kind of subjectivation processes can we observe as generative AI technologies are integrated and adopted in the workplace? Are workers relating to AI chatbots as co-pilots, assistants, mentors, or something entirely new? These cultural issues remain starkly underexplored in current research even though they carry major economic implications. With the potential for workers (and users) to access a new form of intelligence, the question arises as to whether this paves the way for qualitatively new experiences of class power. Human-like AI assistants may reshuffle the experience and perception of class relations as they create a new category of intelligent actors that can be commanded. A cultural sociological lens thus invites us to empirically and theoretically reconsider the lived experience of inequality in the context of increasing interactions with smart AI assistants.
In conclusion, I have argued that sociologists should not make speculative predictions of job displacement their bread-and-butter business in analysing the nexus of AI and work. The rise of various roles on which AI innovations depend – ranging from precarious data annotators to professional tech workers – reveals the essential, yet often overlooked, human labour behind these supposedly ‘autonomous’ and ‘self-learning’ systems. Moreover, cultural sociological perspectives on AI and work can shed light on potentially influential new class dynamics that are unfolding. By addressing these issues, sociologists can provide insights into the evolving interplay of labour, technology, and class in our contemporary conjuncture that may just prove more compelling than recycled diagnoses of a soon-to-be workerless society.
Themes Across Our Critical Imaginings
After we had written our six individual pieces, we sat in a circle and discussed them, looking for themes that connect them and make something greater than the sum of their parts. These thematic areas are: agency, authorship, identity, visibility, inequality, and hype. As key sites of struggle in a knowledge controversy, they are as relevant for sociologies of generative AI as they are for sociologies of knowledge and indeed for our own scholarship practices. In identifying them, we do not wish to close down debates over what a knowledge controversy is or how to approach one, but rather to lay out some more analytical tools in an invitation to dialogue and collaboration with other scholars concerned with sociologies of generative AI.
Agency
Above, we raise the issue of the possible impact of AI on human agency, especially in the context of global divisions. Particularly telling was the complexity involved and the multiple and unpredictable ways in which AI affects agency, which Baert points to theoretically. Empirically, it is well-documented that the design of the technology is skewed towards certain parts of the world, but, as can be inferred from the previous discussion, it would be wrong to conclude that these global divisions translate into a reduced agency for everyone in the Global South. Indeed, as highlighted by Philip, the brief analysis of dating apps in India and South Africa taught us that some people experience the technology as a liberating and empowering tool, whilst others feel the exact opposite. Likewise, there are plenty of cases that show how large companies can use AI to reduce the power of their employees but, again, as Dorschel points out, the overall picture is anything but uniform and the power of tech companies is more fragile than we often assume. For instance, Hall’s examination of the interplay between AI and intellectual property rights showed that AI not only affects the agency and power of the individuals but might also impinge on the economic viability of production companies.
Authorship
The perspectives presented here represent a concerted effort to engage with both enduring and emerging questions around authorship in the age of generative AI. Another notable theme running through our analysis might best be described as the ‘agonies and ecstasies of influence’, to borrow from Jonathan Lethem’s (2007) famous meditation on the vexed relationship between creativity and appropriation. In this case, we note how generative AI purports to democratise creativity by enabling novel forms of collaboration and providing tools that push the boundaries of scholarly and artistic production. From intellectual property controversies (Hall) to algorithmic intimacy (Philip), our provocations encourage social scientists to reconsider how knowledge is inspired, produced, and mediated through the interaction of human and machine agency (Baert). At the same time, we caution that the operations of influence pose significant challenges. Our collective analyses, such as those highlighted by McPherson and Higgins, underscore ethical dilemmas related to the use of data without consent, the erasure or distortion of marginalised voices, and the amplification of structural inequalities embedded in AI systems. We also observe, such as in the work of Dorschel, how generative AI increasingly centralises creative and economic power within the top tiers of corporate hierarchies, sidelining small-scale creators and low-wage digital workers whose substantial contributions often remain invisible. Across various contexts – whether in sociological research or digital kinship – we see new ways that information and insight get commodified while the histories and labour embedded in their outputs are obscured.
Ultimately, in highlighting the need to rethink authorship as a socio-technical process, these critical imaginations demonstrate the necessity of understanding generative AI’s transformative potential through a deeper engagement with intersections of consent, power, creativity and accountability.
Identity
As we have demonstrated in our contributions, generative AI is transforming social identities through the categories that it creates and conceptualises. New ideas of who or what a subject is, highlighted theoretically by Baert, are emerging through the power generative AI has to inscribe social life with identities, terminology and categories through which we construct ourselves. These inscriptions of generative AI are visible in the categories dating applications provide (as shown by Philip), in how racialised child selection strategies are emerging (as shown by Higgins), and in the ways in which what counts as ‘evidence’ or ‘proof’ or ‘truth’ is itself being reconfigured (as shown by McPherson). These inscriptions and ways of thinking about ourselves and the world are shaping social identities of being a ‘man’ or a ‘woman’ or indeed ‘human’ at the most basic levels. Furthermore, as Dorschel argues, the rise of generative AI can be expected to impact class identities in new ways since the technology offers users novel experiences of control over smart agents. Yet at the same time, our identities also have deep continuities shaped by social forces of history intersecting with the politics of class, race, gender, age or sexuality. As we demonstrate, the foundations of sociology have always looked at the ways in which subjectivities and identities are constructed, and hence, new sociologies of generative AI have to contend with how these social identities and subjectivities are emerging and reproducing in relation to AI.
Visibility
Visibility is a coveted, contested and complicated resource. Generative AI’s effability power conjures new visibilities, both in its cultural products and in the ways it intersects with structural forces. Visibility can help, but it can also harm, and the less agency we have over our visibility, the more likely it is to harm (Benjamin, 2019b). Consider adoptive children’s histories, as highlighted by Higgins, which are potentially feeding the generative AI machine without their consent, or the woman of colour, as highlighted by Philip, whose dating apps impose whiteness on her representation. Visibility agency is about being seen how we want to be seen, including the translational power of making oneself legible. McPherson and Philip highlight that English prompts that facilitate users’ communication can enhance their legibility, while Hall demonstrates that artists using generative AI find themselves less legible under copyright law. Even as we struggle to bend generative AI to our own legibility purposes, the technology refuses to meet us halfway, as its inner workings become ever less legible and force us to take digital literacy leaps towards it. Visibility and truth are another site of struggle, as seen in the struggles over evidence that McPherson highlights, where the very visibility of generative AI-enabled digital fakery creates truth in a Foucauldian sense, in that it has effects on the world. Visibility is also enmeshed with care, as carework – like the low-paid labourers, highlighted by Dorschel, who clean our generative AI content – is often invisibilised, and an ethics of care approach to research is about acknowledging this caring labour within sociology’s wider imperative of making visible how power works invisibly (Puig de la Bellacasa, 2011).
Inequality
Questions of inequality emerge in different ways across our writing. Much valuable sociology of AI focuses on questions of power and structural injustice (see Joyce & Cruz, 2024), as well as critiquing the ways that the worldviews, beliefs and biases of technology designers are always-already part of AI systems (see Buolamwini & Gebru, 2018). Context specific scandals, contention and controversies provide empirical bases for these arguments (Marres et al., 2024). Other scholars explore the embodied and environmental effects of the design, building and maintenance of AI technologies, highlighting the unequal geographic distribution of resource extraction and harm that are often rendered invisible in public-facing, marketable representations (Tubaro et al., 2020; Valdivia, 2024). In a range of ways therefore, social scientists explore how unequal social relations are mediated through and refracted by machine learning systems. Our interactive dialogue in this article contributes to this body of work by highlighting some less theorised empirical locations in which relations of power and inequality are implicated when the object of sociological analysis is generative AI. Across our writing, questions of inequality are expressed in the experiences of specific populations – be they human rights fact finders (McPherson), artists and others working in a context in which the distinction between ‘human’ and ‘machine’ is blurred (Baert and Hall), dating app users in the global south (Philip), AI workers with ‘inscription power’ (Dorschel) or sociologists reflexively considering how we might use generative AI in research (Higgins). In all of these cases, part of the experience of using generative AI technologies is a reflexive grappling with the relations of power and inequality that such use entails. Such grappling depends, in part, on the differential subject positions and context specific motivations that users bring to their engagement with generative AI technologies. 
As Baert argues, though, the questions generative AI raises about the fixed nature of the subject itself render such grappling multilayered and recursive.
Hype
Across our different sociological perspectives on generative AI, the matter of hype can be reconstructed as a final common theme. Celebrations of recent advancements in machine learning and LLMs undergird the social phenomena we all engage with in one way or another. Our sociological approach to practices of hyping has been to critically engage with techno-optimist stances while avoiding slipping into all-too-comfortable techno-pessimist worldviews. The critiques presented by Baert concerning technological determinism, along with Hall’s emphasis on capitalism’s resilience, illustrate the importance of examining who benefits from overstated expectations. Philip draws attention to how lived experience in the Global South challenges binary narratives around AI-mediated intimacies, while McPherson urges caution against the rapid, uncritical adoption of AI tools in academia, highlighting how hype reinforces existing biases. Finally, Higgins emphasises the need to probe beneath surface-level ethical discussions of fairness to expose deeper structural inequalities reproduced by AI systems, while Dorschel reminds us that speculative claims about AI-driven job displacement can have unintended effects and often obscure more immediate, significant transformations in labour practices.
Together, we believe that one of the key strengths of the sociological tradition is its capacity to systematically examine the relationship between cultural positionings, on the one hand, and socio-structural positions on the other. Deploying this two-fold heuristic offers a unique lens for better understanding who has an interest in raising expectations about the potentialities of generative AI, and who does not. Along these lines, our heterogeneous perspectives on generative AI all call for systematic empirical research into the development and adoption of generative AI and its varied influences on agency, authorship, identity, visibility, and inequality. Developing intriguing and innovative critiques of generative AI that go beyond critiques that AI chatbots can generate themselves will require conceptually informed empirical research that is oriented toward developing new critical and theoretical insights, ones that centre care and structure and that keep open the possibilities for dialogue towards new imaginaries.
Conclusion
This collection of myriad sociological perspectives on generative AI is the result of a collective practice and departmental dialogue. What happens when a bunch of sociologists from within one department but with different interests and backgrounds come together to think about the relationship between generative AI and sociology? The result is not a coherent research agenda or departmental manifesto but rather a kaleidoscope of approaches, principles, reflections and theorizations. The kaleidoscopic perspectives we assembled are distinct but complementary; they shed different lights on the evolving social nature of generative AI, while being connected via a number of foundational themes around agency, authorship, identity, visibility, inequality, and hype. These themes, we argue, are ways into understanding a knowledge controversy in all its complexities. Our engaged conversations also show how we scholars, as cultural producers, are imbricated with all other sites of knowledge production in the knowledge controversies that generative AI’s development and adoption foment.
Based on our contemplations, we hope to have pointed to analytical, theoretical, and methodological ways of pursuing a sociology of generative AI that push us beyond the generative AI hype. Among other things, it is our job as sociologists to explain the social logics underpinning the emergence and diffusion of generative AI technologies. We live with AI and are surrounded by AI, but we are also, at least to some extent, outsmarting AI, gaming AI and agentically navigating it. In keeping with established sociological traditions of thinking beyond the hype, analysing carefully and with reflexivity, our dialogues with the sociologies of generative AI allow us to come to terms with the complex power relations that AI technologies are entangled with in new ways.
Our dialogues also revealed that we sociologists, differentiated into various subfields, need to systematically track and trace both the social changes that emerge with generative AI, but also the many continuities that remain. As we have demonstrated in this article, context and empirical realities matter profoundly when we think about the different ways in which generative AI is shaping social life, and the moments in which we in turn shape generative AI itself. This article calls for a stretching of our sociological imaginations about generative AI that allows us to unravel the contingencies around contemporary socio-technical developments. Confronted with the statistical and probabilistic operations of the latest AI technologies, we hold, collectively, that it is crucial to rely more on creative interactions and conversations to develop new sociological perspectives and reflections. Thus, while this article does not provide definitive answers on what new sociologies of generative AI ought to be, it does show us how different perspectives on this object of analysis that already exist within one department of sociology can be fruitfully assembled, discussed and combined.
Footnotes
Acknowledgements
We thank the members of Cambridge’s Sociology of AI research and reading group for interesting and inspirational conversations, as well as our reviewers and the special issue editors for their insightful comments.
Author Contributions
We were all involved at all stages of the authorship process.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
