Abstract
The growing deployment of ‘artificial intelligence’ (AI) systems across society, and the massive investments currently being made to develop systems capable of greater and greater agency, necessitate sociological reflection. Specifically, what do current shifts in automation mean for how we theorize social relations, and how should sociologists position themselves normatively? While the development of agential machines might be seen to validate ‘posthuman’ critiques and ontologies, such as those seen in actor-network theory and ‘new materialist’ thought, it has also necessitated a humanist response. This is because the sociotechnical transformations of AI are accompanied by discourses that devalue what it means to be human, and have consequences that increasingly marginalize human agency, corrupt human knowledge, threaten human interests, and are experienced as dehumanizing. Reflecting on posthumanist and humanist strains in social theory, this article considers aspects of both that can be valuable in coming to terms with the social shifts of automation and new interactive technologies, but ultimately argues for a humanist stance to counteract AI’s anti-human tendencies.
Introduction
That sociology is the study of human societies is a given. This has been true since the discipline’s beginnings and remains so today, but to what extent will it remain the case in the future? This question is worth raising given recent developments in ‘artificial intelligence’ (AI) technologies, and predictions about a coming wave of ‘agentic’ digital entities. The anthropocentrism of social theory (privileging particular conceptions of what it means to be human) has been subjected to numerous valid critiques, and posthumanist theories that decenter the human have become well-established as a result. While posthumanism may seem fit-for-purpose to address more agentic non-human systems, it is also often compatible with the visions promoted by industry-aligned boosters of these technologies. Meanwhile, critics describe new forms of automation as dehumanizing, anti-humanistic, and anti-human, and there are broadly felt anxieties about the distinctiveness of human abilities and intelligence arising from current sociotechnical developments. This creates a problem for sociology, raising the normative stakes of our commitment to either humanist or posthumanist theories.
The relationship between the ‘science’ of sociology, and the humanities, humanism, or ‘humanistic’ phenomena, has recurrently been a topic of discussion as sociologists sought to situate themselves in relation to other disciplines (Parsons, 1970; Znaniecki, 1927). However, it was during the late 20th century that the discipline first saw significant debate about how to include the non-human in sociological analysis, most notably in relation to actor-network theory (ANT, see Latour, 2005). Meanwhile, the 1980s also saw an early wave of ‘artificial intelligence’ (known as ‘expert systems’), which provided an ‘opportunity for reevaluating our [human-centric] preconceptions about behaviour, action, its origins and agency’ (Woolgar, 1985: 568). This led Steve Woolgar to ask whether we could ‘develop a sociological study of the human/mechanical language community’ that would incorporate not only algorithmic or human-like systems into ‘the social’, but also ‘bicycles, missiles, or food processors’ (Woolgar, 1985: 568). The call to develop such a ‘sociology of machines’ has more recently been taken up by Ceyda Yolgörmez (2021), who ‘proposes . . . taking the relations with and of the machines as pertinent to social relations’ (p. 159), but the idea of non-human agency in social relations is not as radical a challenge as it once was. In addition to ANT’s influence, other ‘new materialisms’ have also encouraged sociologists to theorize relations between humans and a variety of non-human phenomena (Fox and Alldred, 2017). The work of Rosi Braidotti (2013) in particular will help articulate the relationship between humanism and posthumanism in the sections that follow.
In this article, I adopt a nominalist definition of AI, accepting that AI is whatever people say it is. This is because I am interested in the social transformations that are culturally attributed to AI, rather than any specific technological features. AI is, first and foremost, a ‘marketing term’ (Bender and Hanna, 2025) for products that promise some form of automation. This means that current controversies over AI echo mid-20th-century concerns about automation and the ‘technological society’ (Winner, 1977). The technologies that have contributed to ‘AI hype’ over the past decade can be distinguished from previous attempts to automate human abilities through their reliance on ‘machine learning’ processes, with the most recent wave of generative AI that has followed the release of ChatGPT involving large language models (LLMs) that ‘learn’ to reproduce patterns found in vast sets of ‘training data’ composed of human cultural products (see Cadman et al., 2025). This has led to the ability to automate the production of various media: text, images, and video, which we now regularly relate to in our lives.
All of this has helped to effectively renew Woolgar’s (1985) 40-year-old argument: We are once again confronted with social relations between humans and machines that stand in for humans. Sociology’s traditional human-centric focus is again up for debate, as we consider how to broaden its scope to incorporate these machines. However, I would like to proceed even further and ask whether it is now possible to conceive of social relations without humans, or between humanity’s digital creations. In other words, what is the relevance of humans to our social ontologies, or sociological theories, in the age of contemporary AI systems? To answer these questions, we should first consider the sociotechnical changes we are currently living through and what sorts of changes we can expect in the coming years. I argue that we need to be skeptical and critical about the futures predicted by AI’s promoters, but that we should also take seriously the current consequences of automation across a number of social fields, and the vast amounts of capital being aligned to push these transformations further. Because many of these consequences are negative, harmful, and dehumanizing, they have produced a humanist backlash against AI. While posthuman theories may provide ways of bringing the artificial agencies of AI into our social ontologies, I argue that sociologists should engage with arguments and critiques based on humanist assumptions to oppose current anti-human tendencies.
Imagining the posthuman future
Let us begin by briefly suspending our critical thinking faculties and accept that the future that the promoters of ‘AI hype’ promise will come true. According to these AI boosters, we will soon be living in a world of ‘agential AI’. This means that there will be entities constructed out of software that will do things for us (e.g. plan our schedules, attend our meetings, do our homework), but also that these agents will be capable of highly autonomous action: determining and executing the best ways to achieve a goal, and possibly even formulating goals themselves. According to Microsoft, this will require the development of an ‘agentic web’ that these ‘AI agents’ can more readily navigate on our behalf (Shaw, 2025).
The timeline for this shift is uncertain; ‘software agents’ have been a topic of discussion and development since the 1990s, and there are many currently available products that are marketed as ‘AI agents’ (Rogers, 2025), although their capabilities fall short of what many AI boosters (and ‘doomers’) have in mind. Yoshua Bengio (2025) cites ‘huge commercial pressure to build AIs with greater and greater agency to replace human labor’, but says that in the rush to develop these, we ‘might be just a few years away or a decade away’ from agential AI that brings about ‘catastrophic risks’. However, there are very present risks and emerging challenges with current technologies: people are already forming meaningful social relationships with AI chatbots as therapists or romantic partners (Dupré, 2025; Kouros and Papa, 2024). Many of us already regularly encounter situations where we must try to differentiate whether something is human-generated, or AI-generated, and Sam Altman has been encouraging people to scan their iris with a metal orb to validate human identity (Henry and Roose, 2025).
In recent years, some have argued that segments of the social web are now populated mostly by bots rather than actual people. This ‘dead internet theory’ (Walter, 2025) has been a largely speculative and conspiratorial claim, but ‘social bots’ have long been a concern on social media (Assenmacher et al., 2020), just as Wikipedia has long been a site of ‘bot-bot’ conflict and collaboration (Geiger and Halfaker, 2017). With social media companies now promoting synthetic AI-generated accounts (Walter, 2025), and some workplaces now employing AI agents to supervise other AI agents (Lin, 2025), what does the increasing automation of social relations mean for sociology? Is it necessary to go beyond the question of whether sociology can include non-humans, to consider whether sociology can be relevant to social situations where humans are not included?
Of course, it is as difficult to imagine a future for our societies in which humans do not participate in some way, as it is to imagine a human society that does not include technologies which are in some ways external to us. Science fiction provides some dark outlines of a posthuman future, in which robots follow the vestiges of their programming on a ruined world where humans are effectively extinct, or societies where humans are kept sequestered (either as deskilled consumers, or automatons of a different sort). This is the bleak comedy of Adrian Tchaikovsky’s (2024) Service Model, or the film Wall-E (2008). In both, human society on Earth has collapsed, but autonomous technologies trundle onward performing now-meaningless tasks. This may be a temporary phase as decay and entropy ultimately destroy these leftovers of humanity, but these works of science fiction also envision automated systems capable of maintenance and self-repair, while others imagine future civilizations of machine intelligence.
For example, many influential leaders of Silicon Valley companies seem to think humans are just a stepping stone (or ‘biological bootloader’) for some ultimate form of superintelligent ‘digital life’ that will supersede us (Torres, 2025). What this entails could be some kind of a ‘merge’ between humans and machines (Altman, 2017). While the discourse of the ‘singularity’ imagined an all-at-once upload of ourselves into digital form, Altman sees a more gradual process of augmentation and machine enhancement that is already underway. However, the end result is still one in which humans are effectively transformed into something else. A darker argument holds that once superintelligence is created (an inevitability according to many AI developers), it will be so advanced and different from us, that humans will effectively go extinct, thus making way for the final progression to a machine civilization. Elon Musk (identifying as a ‘humanist’) has argued against this ‘extinctionist’ view, but Musk’s humanism is one that is still oriented to a transhumanist merger between our minds and AI. As Emile Torres writes, ‘Musk isn’t worried that we will be replaced, he’s worried about what might replace us’ – both sides imagine that dominant social actors in the future will be powered by AI; what happens to any leftover ‘legacy humans’ is an open question (Torres, 2024).
Automating human society
Such science fiction may lead us to some entertaining (or depressing) experiments in speculative social theory. For instance, what might fully automated robot capitalism look like? However, the possibility that some phenomenon might one day exist does not demand sociological attention. What matters is the current and immediate future, where automation is indeed increasing, but humans remain in the picture. Yes, we are ‘cyborgs’, in that we rely on various artifacts to extend our abilities and cognition (see Elder-Vass, 2017: 93–94), but this is not a recent development, and direct brain-machine interfaces (i.e. Musk’s Neuralink) are a long way from transforming what it might mean to be human. One can find many bots online carrying out automated actions, in relation to other bots or automated enforcement systems, but these technologies are optimized for particular goals by the people who design them and are subject to human supervision. Indeed, it still seems underappreciated just how much humans are part of AI across the multiple layers of the ‘AI value chain’ (Attard-Frost and Widder, 2025), or how human labor, knowledge, and creativity are the vital fuel for generative AI outputs.
If we shift the focus from AI and toward the more fundamental process of automation, the continuities of our present become more apparent. Lewis Mumford (1934) discussed automation in terms of the independence of ‘the machine’ from a human ‘operator’ – differentiating between ‘tools’ that work through ‘manipulation’ and ‘the machine’ that operates through ‘automatic action’ (p. 10). What Mumford refers to in his theory of machines is a very broad constellation of phenomena that, as Latour (1994) notes, are prefigured by complex organizations of obedient humans. Thus, humans ‘had become mechanical before they perfected complicated machines’ (Mumford, 1934: 3). Our automata may be ‘the understudies of human servants’ (Latour, 1994: 800), but the 20th-century discourse on automation is also characterized by the sentiment that these technologies are ‘out of control by human agency’ (Winner, 1977: 15). People are ‘dehumanized’ or ‘alienated’ from some human essence by being reduced to machines, and automated machines increasingly follow a path that humans seem powerless to control. Finally, ‘what has been learned from the nonhumans is then reimported to reconfigure people’ (Latour, 1994: 800), as humans are once again reshaped through their technological relations.
All of this may seem familiar when considering contemporary discourses regarding AI, and indeed, we should not overattribute our present problems to specific technologies underpinning AI. However, it is also important to note what makes this current moment distinctive, even as it echoes these historical tensions. When Mumford (1964) warned about the automation of human knowledge and culture as the impending ‘last word’ (p. 263), he was not imagining the proliferation of generative chatbots. The automation of language enabled through LLMs has been experienced as a rapid transformative shift, riding a broader wave of algorithmic automation, but opening up new sociotechnical possibilities. Generative AI systems that statistically reproduce the propensities of human culture into new arrangements are being marketed for whole categories of creative and knowledge-generating work that were once seen as exclusively human domains. Amid the billions of dollars now pouring into projects to make these systems more ‘agentic’, and the redesign of digital systems such as search engines and social platforms to foreground AI, millions of people now rely on generated media as their first source of knowledge and guidance, or have found themselves developing deep social connections with the synthetic personalities imagined to lie behind generated texts. Meanwhile, we are witnessing the emergence of new forms of interaction between automated systems, with digital environments made legible and responsive to the actions of algorithmic agents, and the tech industry’s pursuit of a future in which AI agents increasingly interact with each other on our behalf.
We need to attend to the human dimension in all these phenomena, but we also need to recognize the extent to which humans are becoming ‘incidental’ in a growing number of contexts (Sadowski, 2025c), or where these technologies exhibit a distinctly anti-human disposition. None of this refers to the existential threat of an uprising by intelligent machines keen to replace us or convert us into an energy source. The existential threats are more immediate: The use of military AI to target people for elimination (McKernan and Davies, 2024), the automation of vital government systems that millions depend upon (van Toorn et al., 2024), and the environmental impacts of data centers as the world struggles with climate change (Zewe, 2025). The dystopic phenomenon of ‘AI slop’ has already overwhelmed large portions of the Internet, and AI-generated content is increasingly treated as ‘a threat to A.I. itself’ (Bhatia, 2024), with AI models ingesting AI-produced texts of dubious veracity, resulting in a ‘corruption of knowledge’ (Markelius et al., 2024: 737). A proliferation of AI-generated resumes for ‘fake candidates’ requires review by automated hiring systems (Bindley, 2024), while teachers worry that they are increasingly evaluating the outputs of chatbots rather than their students’ learning (Koebler, 2025).
Those who consider new kinds of automation to be dehumanizing are referring to a lessening of what it means to be human, or how people are being treated as if they were less than fully human (i.e. people as machines, or the mind as a computer; see Bender, 2024). This can also entail a reduction in people’s meaningful engagement with the world and each other, a narrow and reductive idea of what it means to be human and intelligent when we are compared to AI, and a lessening of human agency as we rely more on AI products. Humans remain as some purposive origin or the ‘prime mover’ of these algorithmic actants, but our role is reduced to feeding our intentions to algorithmic proxies, and presumably benefiting from their labor. Altman’s long-term vision for ChatGPT envisions these bots having access to a person’s ‘whole life’ (Bort, 2025); by knowing everything a person has ever done, said, or read, such a system could produce highly personalized life advice, highly personalized ads, or predict what actions to carry out on an individual’s behalf. In this future, human agency will increasingly be equated with intentionality, while actions are delegated to our digital ‘mirror’ (Vallor, 2024). But human intentionality will be far from autonomous, as computer systems can ‘pre-empt’ (Hildebrandt, 2015) and reshape our desires – a dangerous possibility as AI companies currently consider how to best integrate advertising into their products (Vara, 2025).
Humans out-of-the-loop?
Automated systems that have major consequences for human lives, whether as military targets, welfare recipients, or potential immigrants, are typically legitimated through the existence of a ‘human-in-the-loop’: a human who can check the machine’s work and issue (or approve) the final decision. We know that in many such cases, human judgment plays little role, as people defer to the machine’s decisions, or are used to put a human stamp on an algorithmic product. There is indeed a combination of machine and human agency in these hybrid systems, but human agency shifts to other parts of the process and is transformed by automation (Elyounes, 2021). People remain important as designers and developers of technological systems, and are positioned at the tail end of the process for review and approval, but are increasingly removed from everything in between. In the worst cases, the human-in-the-loop is reduced to a powerless ‘liability sponge’ – serving simply to provide a veneer of legitimacy for automated processes, while also taking the blame for the machine’s mistakes (Crootof et al., 2023).
Of course, as Bruno Latour repeatedly demonstrated, the human/non-human distinction is difficult to maintain ontologically, while others have argued that these technologies operate firmly within human society, with human origins and as a reflection of human dynamics (see Dehnert, 2022; King, 2023). But while humanity is set to remain part of this shifting web of sociotechnical relations, the point is that we are witnessing changes in what it means to be human – changes that are experienced as exclusionary and dehumanizing, with automation making the presence of the human being in a sociotechnical system less relevant. AI systems are sold as ways of expanding human agency and letting us do more, but they threaten to leave us ‘demoralized’, diminished and deskilled (Cottom, 2025). The vast amounts of investment capital fueling the generative AI boom are banking on the hope that users will become dependent on these tools, and that organizations will find cost savings and efficiencies from automating human labor. The alliance between the Trump Presidency and America’s champions of automation (Hao, 2025), including efforts to automate the US government (Barrett, 2025; Kelly et al., 2025), is illustrative of the dangers. These are not forces that care about the well-being of humanity, and sometimes the most productive role that a human can play in sociotechnical change is as an obstacle.
The obvious response to the anti-human tendencies discussed above is some kind of humanism; if human needs, concerns, and existence are becoming marginalized, then it makes sense to advocate for (re)centering the human. However, humanism comes bundled with a host of problems, from which anti-humanism and posthumanism have developed as a response. Because of these oppositions, these ideas must be considered in relation to each other.
Opposing and surpassing humanism
Humanism has taken quite a beating in social theory since the 1960s (i.e. Foucault, 1970), and justifiably so. However, we also need to recognize how diverse and capacious humanist approaches can be. Humanism continues to inform the unstated assumptions of most sociological analyses, and there are ‘residual’ aspects of humanism (Braidotti, 2013: 36) that remain even within posthuman critiques. Here I will focus specifically on Rosi Braidotti’s (2013) theorization of humanism, anti-humanism, and posthumanism, both because of its prominence and Braidotti’s ability to map out these distinctions. Braidotti’s approach is informed by anti-humanism and articulates a ‘critical posthumanism’ that represents one viable path as a response to current trends in AI, with some form of humanism being the other possibility to be considered.
Humanism defies both straightforward definition and critique, given that it is ‘complex and multi-faceted . . . complicitous with genocides and crimes on the one hand, supportive of enormous hopes and aspirations to freedom on the other’ (Braidotti, 2013: 16). Humanism’s contemporary origins in Renaissance and Enlightenment thought, its simultaneously exclusionary and universalizing attitude (built around a particular notion of European ‘Man’, idealized as humanity’s essence), provide a narrow view of the human that has justified multifarious kinds of violence against people, animals, and things deemed unworthy of consideration. However, Braidotti (2013) also recognizes how difficult it is to maintain an anti-humanist position with ‘a modicum of consistency’, given the value of humanist ideals such as the pursuit of freedom, equality, and knowledge, which are ‘deeply entrenched in our habits of thought’ (pp. 29–30).
Because of these contradictions, Braidotti’s (2013) ultimate ‘relation to Humanism remains unresolved’ (p. 25), but she concludes that it is impossible ‘to disengage the positive elements of Humanism from their problematic counterparts’ (p. 30). Braidotti is less interested in wholly rejecting or defeating humanism than she is in articulating a ‘different discursive framework, looking more affirmatively towards new alternatives’ (p. 37). The critical posthumanism she develops is thoroughly relational and inclusive, decentering humans without being ‘indifferent’ to humanity – instead providing us with an ‘enlarged sense of community’ (p. 190). This community is inclusive of all life, but ‘technological mediation is [also] central to a new vision of posthuman subjectivity and . . . provides the grounding for new ethical claims’ (p. 90). Building on Haraway’s ‘cyborg’ and Deleuze and Guattari’s ‘becoming-machine’, Braidotti argues that ‘the merger of the human with the technological results in a new transversal compound, a new kind of eco-sophical unity, not unlike the symbiotic relationship between the animal and its planetary habitat’ (p. 92). This ‘posthuman notion of the enfleshed and extended, relational self keeps the techno-hype in check by a sustainable ethics of transformations’, which ‘pleads for resistance to both the fatal attraction of nostalgia and the fantasy of trans-humanist and other techno-utopias’ (p. 90).
Braidotti articulates a posthumanism that is in many ways opposed to the project of AI’s leading promoters, especially when the hunt for ‘superintelligence’ is understood primarily as ‘intensified humanis[m]’ (Porpora, 2017: 360), or an expression of ‘a humanistic belief in the perfectibility of man through scientific rationality with a programme of human enhancement’ (Braidotti, 2019: 48). Indeed, while Braidotti’s posthumanism can take a critical view of capitalism, commodification, and political economy, the most readily available critique of AI from a posthumanist perspective is that these technologies actually represent a new form of humanism to be countered (see Cadman et al., 2025). Beyond this, ethical grounds on which to mount opposition can be conceptually slippery and tilted in favor of some of the arguments being used to promote AI. For instance, Braidotti (2013) presents such technologies as ‘normatively neutral’ (p. 45), echoing the ‘just a tool’ discourse of Silicon Valley (Kaiser, 2025). Remarking on an article from The Economist that promotes a new experimental approach to robot ethics that ‘does not assume a human, individualized self as the deciding factor’, Braidotti notes (2013) ‘that the advocates of advanced capitalism seem to be faster in grasping the creative potential of the posthuman than some of the well-meaning and progressive neo-humanist opponents of this system’ (p. 45). This enthusiasm for ‘forward-looking experiments with new forms of subjectivity’ rather than ‘nostalgic longings for the humanist past’ (Braidotti, 2013: 45) effectively cedes ground to those promoting these ‘tools’ as inevitable and exciting, painting their ‘well-meaning’ opponents as a regressive force.
Relatedly, Braidotti’s ‘monistic’ political ontology ‘rests on the idea that matter, including the specific slice of matter that is human embodiment, is intelligent and self-organizing’ (Braidotti, 2013: 35; see also Cadman et al., 2025). Whatever the value of attributing intelligence to the various elements of the world around us, the implication is that human intelligence is not particularly remarkable – at a time when people are claiming to find human-level intelligence within self-organizing computer models. While Braidotti’s (2013) ‘merger of the human with the technological’ (p. 90) is rather different from the ‘hegemonic model of the posthuman as trans-humanism’ (Braidotti, 2019: 48) currently represented by Musk or Altman, it leads to a political position with some resemblances. Yes, Braidotti’s politics is inclusive of those people and subjects historically ‘missing’ from consideration, but this also includes ‘virtual’ people, whom we are either encouraged to become or enter into praxis with, forming a ‘transversal alliance [inclusive of] non-human agents, technologically-mediated elements, earth-others (land, waters, plants, animals) and non-human inorganic agents (plastic, wires, information highways, algorithms, etc.)’ (Braidotti, 2019: 51). This ‘complex singularity’ of new forms of ‘becoming’ may have been a less problematic position to argue for in 2019 (Braidotti, 2019), but today it is simply too close for comfort to all of the unborn virtual people that advocates of the transhumanist singularity imagine should be brought into being at humanity’s expense (Torres, 2024, 2025).
The case for humanism
Posthumanism as a broad discourse arises from a particular dissatisfaction with humanism in general, but as Braidotti (2013) notes ‘there are in fact many Humanisms’, and she is engaging with ‘one specific genealogical line’ (p. 50). There are formulations of humanism that are responsive to posthumanist critiques, without the ontological and normative baggage that can weigh in favor of AI hype. Rather than discounting humanism as a ‘nostalgia’ that needs to be surpassed by the right kind of posthumanist orientation, I argue there is good reason to articulate a humanism that is chastened by its prior failings, and which embraces its strengths – including the resonance that humanism finds in today’s leading critiques of AI.
While current developments have granted the issue added urgency, these are not new arguments for scholars who have been articulating a version of ‘digital humanism’ that is positioned critically in opposition to dominant trends in political economy (Fuchs, 2022; Prem, 2024). Digital humanists are mindful of the traditional critiques of humanism: that it privileges a particular conception of the human (i.e. white, male, Eurocentric, colonial, and capitalist) and is based on untenable binaries and distinctions, including the tricky line between the human and non-human. Work in this vein grapples with some of these inherent tensions while addressing ‘inhumanity [as] the central problem of contemporary digital societies’ (Fuchs, 2022: 15).
Digital humanism should not be equated with ‘human-centered/human-centric AI’, or ‘user-centered’ approaches to AI (Rezaev and Tregubova, 2025), which have been variously formulated for more than a decade. While some aspects of these orientations may be relevant, a humanist reaction to broad trends of dehumanization goes beyond asking how technology can serve human needs, desires, and values, or how to keep human ‘users’ in mind when developing new technologies. Digital humanism’s critical edge ‘vehemently opposes a supposedly autarchic technological development of digital transformation. It opposes the self-depreciation of human competence . . . it opposes the subsumption of human judgment and agency under the paradigm of a machine’ (Nida-Rümelin, 2022: 74). The humanist response echoes Luddism not only through its rejection of very specific, dehumanizing manifestations of technology, but also through its critique of automation in the service of concentrated wealth and power (Merchant, 2023; Sadowski, 2025b). In short, digital humanism must grapple with issues of political economy. The humanist backlash will encourage more human-centered products, services, and environments, along with those that champion ‘authentic’ human social experiences and oppose AI as a technological mediator, but it also provides an impetus for political engagement in opposition to the influence of the tech industry.
As sociologists, we should consider what engaging with this humanist response would mean both theoretically and normatively. Digital humanism does not always entail a fully fleshed-out social ontology, but is typically premised around some version of the claim that people should not be analyzed as if they were machines, and machines should not be analyzed as if they were people (Fuchs, 2022; Nida-Rümelin, 2022; Prem, 2024). With that as a starting point, it is possible to align oneself with pre-existing humanist strains in social theory. Fuchs’s (2022) ‘radical digital humanism’ is grounded in critical theory, attending to dynamics of ‘alienation, exploitation and domination, and their interactions in the context of digitalisation and digital technologies’ (p. 54). Critical realism provides an alternative ontology that is similarly positioned against both ‘anti-humanist’ AI and the ‘non-human turn’ in social theory – the decentering of the human found across ANT, new materialism, critical animal studies, and other theoretical approaches located downstream of postmodernism (Porpora, 2017).
The challenge is how to incorporate the critical insights of posthumanism into a new kind of humanism that is ‘reflexive, self-critical and ecological’ (Vandenberghe, 2025b: 4), in order to ‘enlarge our vision of humanism beyond the human’ (p. 19). This is a task that Frédéric Vandenberghe (2025a) has recently undertaken by incorporating anti-humanist critique to re-articulate humanism as ‘the moral self-understanding of humanity in relation to its others’ (p. 9). This relational view is open to ‘multiple realizations of what it means to be human’ or ‘different relations to the world’ (Vandenberghe, 2025b: 19), while also maintaining that it is ‘important to maintain the categorical differentiation between humans, animals, and things for normative reasons’ (Vandenberghe, 2025a: 9). It is therefore possible to learn from Latour’s dismantling of distinctions, or new materialism’s emphasis on relationality, without a total flattening of ontology. Vandenberghe (2025a) argues that boundaries between ontological ‘regions’ can be interrogated and deconstructed (recognizing, for instance, that ‘humans, are also animals’ and that ‘the distinction between nature and culture cannot be upheld’), but that it is important to ‘affirm the dignity and value of human beings’ to avoid treating people (as well as animals) ‘as mere things’ (p. 9). We may not be the autonomous individuals affirmed by some versions of humanism, but we are ‘centred enough’ to think, to act, to love, and to suffer (Porpora, 2017: 361). Our relations define who we are, and what is in our interest as a species cannot be separated from the world around us, but it makes normative sense to put our interests before those of machines, and to oppose the automation of relations when this disempowers and harms us.
Humanism as AI critique
We may not yet see much evidence of a ‘humanist turn’ in social theory, but what has opened up is the urgency and relevance of a humanist critique – one that is focused specifically on digital relations. This kind of critique does not require a commitment to a humanist ontology, but it should attend to power and political economy in ways for which posthuman ontologies are often ill-suited. A ‘thin’ conception of humanism can suffice, oriented to resisting the dehumanizing effects of AI through concrete actions and changes in our lives as citizens and academics (see Bender, 2024). A humanist backlash is growing, both within the tech industry and outside it. Among the ‘tech humanists’ profiled by Greg Epstein (2024) is a consultant to prominent ‘business clients [who] sense that some of the tech around them is becoming inhumane but don’t know what to do about it in a profit-driven world’ (p. 244). Epstein (2024) also includes more oppositional voices as exemplars of this humanist resistance, such as Timnit Gebru and Chris Gillard, but the humanist backlash extends beyond direct critique of AI, to those opposing the governmental logics and corporate imperatives with which AI is associated.
AI critique is more prominent now than when Safiya Noble (2018) and Ruha Benjamin (2019) published their pathbreaking works, and sociology clearly has a role to play in highlighting social inequalities and providing structural analysis of current transformations (Joyce et al., 2021; Law and McCall, 2024; Zajko, 2022). This is particularly the case as researchers in fields more closely related to AI development (computer and data science) are likely to affiliate with a well-resourced and politically privileged industry (Hao, 2025). However, pushing back against AI can be challenging when many universities are also promoting and encouraging the integration of AI into teaching and research. Our students are told that these technologies are ‘here to stay’ and that their effects are inevitable, while faculty are told to prepare students for future careers where their competitiveness will depend on their ability to ‘leverage’ AI as a tool. Sociology is a broad field encompassing perspectives from social science as well as the humanities, but at this moment it is worth emphasizing the latter – focusing on the human experience, and cultivating skills that cannot, or should not, be automated.
The ‘hallucination problem’ is one noteworthy example: In the months after the release of ChatGPT, there was optimism that the tendency for chatbots to confidently assert ridiculous falsehoods would be ironed out. Now we are told to accept the inevitability of these generative untruths, and there is some evidence that the problem is actually getting worse with more advanced models (Hsu, 2025). Anthropic’s CEO Dario Amodei reassures us that these models actually ‘hallucinate less than humans’ – they just do so ‘in more surprising ways’ (Zeff, 2025), but this comparison is misleading. There is a qualitative difference between the output of a probabilistic ‘text extruding machine’ (Bender and Hanna, 2025) which happens to be false, and a person’s understanding of the world, whether that understanding is correct or not. AI developers have a ‘pragmatic’ understanding of truth, which is rendered probabilistically, following the logic of actuarial science and risk management (Sadowski, 2025a). Generating hallucinations, or plausible untruths, is simply fundamental to how these LLMs operate – they neither reason nor understand the world in any way that resembles human processes. Attempts by AI developers to lower our expectations of the veracity of their machines should remind us of the need to be critical of the abilities of these systems, even as they are being used to reshape the world.
Two paths for social theory?
The developments discussed lead to two possible directions for social theory. The first of these entails doubling down on posthumanism: tracing techno-human assemblages and hybridities, analyzing relations with non-humans (or where humans are not involved), or between humans and some agentic derivatives of human culture. There is significant value in such an ‘analytic’ posthumanism (Braidotti, 2013), particularly if it helps to account for various forms of relationality, sociotechnical extensions of our knowledge, abilities, and identities, and how these are currently shifting. However, the normative implications of this position are either too ambiguous, or directly contrary to the actual discourses of resistance that are currently being mobilized against the harms of automation. A posthumanist orientation may be quite compatible with the transhumanist vision promoted by the champions of AI, or at the very least, it fails to provide firm footing for resistance. This is a problem that could be addressed with a ‘reconfigured’ posthumanism that is more oriented toward critiquing current forms of AI (Cadman et al., 2025). However, when AI boosters claim to have developed a superior form of intelligence, or argue that humanity should be succeeded by new forms of ‘digital life’ (Torres, 2024), the most effective response, and the other path open to us, is to assert humanity’s distinctiveness and primacy.
The appeal to humanism therefore makes normative sense, even for AI boosters, who tend to emphasize some notion of human values when they appeal to public sentiments. However, to what extent should normative implications or critical purchase be a consideration for social theory? Sociologists vary in their responses to these questions, but we tend to be sensitive to the political implications of our work. Whether ontological commitments should be influenced by moral concerns is up for debate, and one could argue that theory’s analytic and explanatory potential should take primacy over its political utility, but the majority of sociological theory has been constructed around some humanist assumptions and remains defensible on these grounds. Where anthropocentrism has often been implicit in social theory, current questions about what it means to be human require us to address the topic directly.
Finally, as discussed above, posthuman and humanist theoretical orientations are far from incompatible; one can maintain a broad ontological scope while remaining primarily concerned about the humans within it. Vandenberghe argues for ‘a new anthropocentrism’ that is also relational and ‘eco-centric’ (Vandenberghe, 2025b: 5). Theories that are based around humanist assumptions do have the capacity to include non-humans, and posthumanist theory does not necessarily make humans and non-humans indistinct. For example, Sayes (2014) argues that while ANT avoids making such an a priori distinction, that does not mean that human and non-human agency are then treated as equivalent. Latour’s ontological flattening does not preclude us from differentiating entities or forms of agency. However, if ANT is to be understood primarily in methodological terms, and is largely uninterested in ‘explicit theory construction’, then it is ‘constitutively incapable of providing a general account of how humans, nonhumans, and their associations may have changed over time and might vary across space’ (Sayes, 2014: 144–145). In contrast, as a humanist meta-theory (Porpora, 2016), critical realism is open to various methods and conceptualizations, with Dave Elder-Vass (2015) being most notable for his engagement with ANT. However, when one’s main focus is on ‘social entities as wholes that have people as their parts’, it is easy to ‘neglect’ the non-human constituents of society (Elder-Vass, 2017: 97). In other words, humanist and posthumanist directions in theory can certainly cross or intersect, but they do not typically sit easily alongside each other.
Conclusion
In short, then, the story for social theory may be told in this way: In the latter decades of the 20th century, sociologists increasingly came to decenter the human and make room for non-humans in their analyses, sometimes going as far as avoiding any a priori distinction between the two. In subsequent decades of the 21st century, the idea of non-human social actors became increasingly commonplace in popular discourse and daily experience, and hybrid forms of human-AI agency became normalized across a growing number of domains. This shift can be seen as a validation of posthumanism, and sociologists with anthropocentric ontologies may be neglecting increasingly consequential ‘missing masses’ (Latour, 1992) in their analyses. However, these same sociotechnical changes have provoked a humanist backlash and a need to ‘reclaim our humanity’ (Vallor, 2024). Decentering the human can impede our ability to engage politically with forces that threaten and devalue human interests and human agency.
It is perhaps on the topic of ecological and environmental impacts that many humanists and posthumanists will find shared cause for concern, given the interrelations of phenomena on this planet necessary for life, which are currently under threat. Regarding other consequences of automation, the normative demands of today call for some kind of reconstituted humanism – primarily as a form of critique and resistance against the inhumanity and dehumanization brought on by sociotechnical developments. Ontologically, the ‘materialist turn’ remains relevant in broadening our social horizon, but so does the earlier materialism of critical theory, ‘stressing the productive, social and transformative capacities of human beings’ (Fuchs, 2022: 53) to create a world that is better than what is currently being produced through the political economy of automation.
Skepticism is warranted about many aspects of the AI industry and its products, and we should be in no rush to prepare our theories for the imminent arrival of artificial beings, as promised by AI boosters. But the marginalization of humanity, or dehumanization through AI, is already in evidence, and this necessitates a response. Generative AI is a ‘parasite’ on the works of humanity, and while it is possible for us to ‘peacefully coexist’ with it, right now ‘its most compelling use case is starving the host’ (Cottom, 2025).
The humanist backlash against AI is a healthy immune response that sociologists should engage with and support, but there is nothing inherent about humanism that makes it a cure for AI-related harms. As discussed above, a product like ChatGPT can actually be criticized for reproducing humanist and anthropocentric assumptions and exclusions (Cadman et al., 2025), and discourses of humanism will certainly be used both to sell AI products and to oppose them. However, a reflexive, relational humanism that is informed by its critiques provides the best normative foundation for coming to grips with these social transformations.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
