Abstract
The emergence of large language models such as GPT-3 has prompted intense debate about the transformative potential of generative AI to ‘revolutionise’ knowledge work and disrupt economic activity. This article represents an initial attempt to problematise this phenomenon within a Marxist analytical framework. Borrowing the term ‘algorithmic capitalism’ to refer both to the broad family of generative AI models and their anticipated effects on the social form, the article offers a two-stage analysis towards a potential political economy of this new technology. First, drawing on a classical Marxist analysis of labour-capital relations, the article investigates shocks to the exchange value of labour within knowledge work which are likely to arise from the widespread uptake of generative AI. Second, the article considers the semiotic effects of generative AI, and attempts to position these within the Marxist literature on the commodity fetish via Baudrillard’s concept of the hyperreal. In so doing, the article argues that the emergence of generative AI can also be understood in terms of a further advance of the commodity fetish, in which meaning itself is reified.
But machinery does not just act as a superior competitor to the worker, always on the point of making him superfluous. It is a power inimical to him.
– Karl Marx, Capital, Volume 1
Introduction
Generative AI is a catch-all term applied to a suite of machine learning models, together with their associated interfaces, that have recently been released both to business and (often in attenuated form) the general public. 1 In contrast to prior common implementations of machine learning, which typically have been used to classify data, predict future outcomes, or generally automate processes, generative AI offers the ability to create convincing, human-like simulations of text, image and audio outputs from open-ended, human language prompts. Indeed, so convincing have these (particularly textual) outputs become that some industry figures have claimed advanced generative AI algorithms may even be sentient. 2
It is beyond the scope of this article to judge whether generative AI dreams of electric sheep; however, what is clear is that businesses and individuals feel increasingly confident in using these models to outsource their creative and reasoning faculties. ‘Prompt-engineering’ – the ability to craft prompt inputs capable of eliciting precisely the desired output – is emerging as a recognisable industry skill, and not merely among computer programmers who use generative AI to code: artists, writers, journalists, academics (and students), indeed all manner of knowledge workers are integrating generative AI within their practice.
Capital is no exception to this trend; indeed, it is capital which is driving the process. The extraordinary computing resources necessary to train generative AI algorithms have restricted the development of newer models to only the most well-capitalised organisations. 3 And while some have argued that generative AI remains a technology in search of an application (at least from the perspective of increasing corporate profitability), many corporations are already integrating generative AI within their labour processes.
The purpose of this article is to begin at least the preliminary work of developing a Marxist political economy of this new technology. It has often been noted that Marx was particularly alive to the effects of technical advances on both the specifics of the labour process and the broader social form (though it should be noted that the direction of causality within that relationship has been often contested by Marx’s followers). In keeping with that tradition, this article attempts to situate generative AI and its effects within first a classical Marxist framework, and then (progressively) the works of scholars who have followed in Marx’s footsteps, especially Lukács, Braverman and Baudrillard.
The goal is to provide an account not only of how generative AI will affect – indeed, is already affecting – the labour process and the social value of labour (particularly among knowledge professionals), but also how the outputs of generative AI, and their increasing ubiquity, reflect a further development of capitalism’s effects on the social form itself. Specifically, the article argues that we must understand the more profound effects of generative AI as lying in the further expansion of the commodity fetish to the reification of meaning. To anticipate: rather than reflecting a (real) social relation among people, and among people and (real) things, the ‘reality’ produced by generative AI constitutes instead a set of simulated, probabilistic relations among a closed system of simulacra, increasingly walled-off within privately-owned infrastructure.
To be clear, this process is the further development of an inherent tendency within capitalism – the commodity fetish – which long predates generative AI. However, in order to better characterise these developments as representing a new stage within capitalism, I borrow the expression ‘algorithmic capitalism’ to define it. 4 In doing so, I wish not only to more clearly differentiate this new stage from the stages of technical development that preceded it, but also to differentiate between the technology itself and the economic system which has called it into being and which is now being transformed by its creation. In this sense, as I hope is clear, I follow the approach to technology more common in these pages, wherein the economic system ‘horse’ is put in the correct relation to the ‘cart’ of technical progress. As Marx himself noted (and is indeed noted below), critique must distinguish between the material instruments of production and the form of society which uses them. Our proper object of analysis here is not the former in isolation but rather in context with the latter.
Labour value and generative AI
As Marx observed of the technological transformations which underpinned the first industrial revolution, rather than increasing the value of labour, technology typically will diminish it: The whole system of capitalist production is based on the worker’s sale of his labour-power as a commodity. The division of labour develops this labour-power in a one-sided way, by reducing it to the highly particularized skill of handling a special tool. When it becomes the job of the machine to handle this tool, the use-value of the worker’s labour-power vanishes, and with it its exchange-value.
In prior epochs of technical progress this process affected mostly manual labour (Acemoglu 2002). Generative AI, however, threatens those employed in the higher service industries: so-called knowledge workers (Blackler 1995) and the creative classes (Florida 2003), from managers to consultants, journalists to academics, to anyone indeed who must produce and communicate generic or specialised information in their work. 5
We might, with Aristotle, consider such knowledge work as a form of techne: skill acquired through prolonged exposure to that which has been previously said or written on a subject, which the worker then adapts to the production of specific outputs. 6 If so, then the knowledge worker’s means of production can be considered primarily as this techne. Yet, the data on which generative AI are trained represent a significant share of the combined output of contemporary humanity, among which is (or will soon be) contained the particular knowledge of all specialised professions. Knowledge workers therefore find themselves in the unusual position (at least for the bourgeois professions) of their means of production becoming superseded, and indeed captured, by capital.
This process merits further theoretical elaboration. Generative AI requires, for the data upon which its algorithms are trained, the assimilation of massive quantities of pre-existing human labour, much of which remains the intellectual property of its creators. Once seized by generative AI, such prior outputs (texts, images, audio, etc.) are no longer mere commodities. Rather, the algorithms created with these outputs require, in order to exist and be capable themselves of producing value (i.e. of becoming capital capable of self-valorisation) precisely such prior outputs as their own primary input.
To achieve such transformation requires, as a necessary component, the addition of statistical procedures; yet it would be wrong to define training data as a ‘raw material’ which the statistical procedures consume. Rather, statistical procedures combine with the training data to produce the algorithm: nothing is destroyed in the process, merely transformed. Prior outputs, qua training data, therefore themselves become (constant) capital: that is, seized by capital, and made capable of self-valorisation, of recombining to produce ‘new’ commodities and ‘new’ value.
Just as within industrial capitalism, however, such ‘new’ value is in fact value which has been lent by prior (i.e. ‘dead’) labour within the production process. 7 The ‘new’ value reflected in the commodities (outputs) created by generative AI is therefore merely the transfer of a portion of the existing value previously embodied within (a) the training data and (b) the statistical procedures. As in nature, there can be no self-perpetuating or self-motivating generator of value: that which is not consumed does not require replacement, and thus the algorithms themselves absorb additional labour – and create new value – only through the technical procedures (gathering of new data, improvement of underlying statistical procedures, etc.) required to further advance their capabilities. 8
Thus, those hoping to acquire great profits from the creation of generative AI may be disappointed: by its nature, generative AI will tend to reduce such profit to almost nothing. 9 As Marx noted, each additional technical improvement in the manufacturing process permits only a brief period in which the capitalist can profit in excess of the average return, because every technical improvement serves to reduce the average social value of labour associated with the ensuing commodity (Marx 1976: 530–531). 10 The same outcome will most likely occur with generative AI: to the extent that the labour required to produce any new output (text, image) will now tend towards zero, the average social value embodied within any such output will also tend towards the same.
If the labour theory of value is correct, then it is in the nature of capitalism that wherever technology has in one place significantly lowered the cost of production, the result will always and everywhere be a significant diminution in the value of labour-power, and hence of the social value of any commodity which ensues from the same process. This being so (and all other things being equal), we might thus expect the effect of generative AI on the knowledge worker to mirror that of mechanical automation on her industrial predecessor: When machinery seizes on an industry by degrees, it produces chronic misery among the workers who compete with it. Where the transition is rapid, the effect is acute [. . .] (Marx 1976: 557)
Research has indeed demonstrated that the expectation of such an outcome is prompting understandable anxiety within knowledge professions (Baek et al. 2022), while the emergence of generative AI has already been (in part) the prompt for industrial action, with both the Writers Guild of America and SAG-AFTRA, two unions associated with performing arts in the United States, striking in 2023. 11
However, the transformation to the labour process provoked by generative AI will not be merely quantitative (in terms of a diminution in the social value of such labour); rather, the nature of such work is likely also to undergo a qualitative change. Shorn of the requirement for techne, the autonomous creativity previously associated with much knowledge work will instead increasingly be reduced to a role of supervision. The algorithm will produce, the human will merely feed in the prompt (which, to return to our analogy, now constitutes the true ‘raw material’ in the process), and then amend any flaw in the output. As a process, such automation further echoes the fate of the industrial worker noted by Marx: [t]he lightening of the labour becomes an instrument of torture, since the machine does not free the worker from the work, but rather deprives the work itself of all content [. . .] The special skill of each individual machine-operator, who has now been deprived of all significance, vanishes as an infinitesimal quantity in the face of the science, the gigantic natural forces, and the mass of social labour embodied in the system of machinery, which, together with those three forces, constitutes the power of the ‘master’. (Marx 1976: 548–549)
As with manual labour in the first industrial revolution, the advent of generative AI represents the knowledge worker’s subsumption into the condition to which capitalism tends to reduce all work capable of automation, reproducing it as ‘unskilled’ or ‘semi-skilled’ (Braverman 1974). Until now, the techne of knowledge work – the requirement for an extended accumulation of knowledge – presented for many of the bourgeois professions a structural bulwark against this process (regardless of whether the profit obtained from such techne represented the true social value of that labour, or instead an economic rent obtained through well-guarded barriers to entry). With the emergence of generative AI, however, that structural bulwark is permanently, and in all cases, superseded: the accumulated labour embedded within the individual knowledge worker’s techne is abstracted within the algorithms of generative AI, and now forms instead a part of the ‘mass’ of prior social labour embodied within the data space of the model.
The statistical machines underpinning generative AI are in this sense little different to the ‘lifeless mechanism’ of the factory (Marx 1976: 548). Through them, the knowledge worker is united with the manual worker: from both the use-value of their labour is alienated, captured by capital, and then returned to them by capital as the property of capital – part of the process of the self-valorisation of capital to which the worker is now incidental and external.
If this analysis holds, then the only limiting factor to the erosion of the social value of knowledge labour will be the extent to which that labour – now reproduced, qua Braverman, as ‘semi-skilled’ – is still required to supervise the outputs of generative AI, together with the marginal cost of any constant capital such as energy and hardware (including data and processing centres), and the marginal cost of variable capital associated with the development of the models themselves, including any highly-skilled labour required to maintain and improve the functioning of the algorithms.
It should be noted that this initial analysis is limited to the domain of purely economic logic, and we should not discount the possibility that other systems of value beyond the purely economic may operate to ‘shore up’ the social value of certain categories of knowledge work. For instance, it could be argued that some knowledge workers, as bourgeois professionals, will continue to be protected by barriers to entry and members’ privileges. Alternatively, corporations might choose to offer ‘responsible autonomy’ (Friedman 1977) to prized categories of knowledge worker, in which case generative AI might supplement (rather than replace) the techne of such workers (though it is difficult to see how such an outcome would not result in at least a diminution in the exchange value of such labour). Finally, we should not discount the operation of logic beyond the purely (formally) economic, for instance, in the differential logic of sign value or the logic of symbolic exchange noted by Baudrillard (2019); these will surely serve to retain a certain ‘sumptuary’ value for the output of a minority of creative workers.
The outline presented here might thus be better considered as the pessimistic end of a spectrum, and perhaps more likely in the immediate term to hold for those roles and professions which are already more thoroughly dominated by capital, and hence less able to benefit from the potential mitigating factors noted above. Detailed empirical analysis will be required to establish the magnitude of the effect within each profession; yet I would nonetheless argue that the tendency of capital within capitalism is to eventually subsume all labour that can be within ‘the mass of social labour’ embodied in automation. As Braverman noted, ‘the ideal toward which capitalism strives is the domination of dead labor over living labor’ (Braverman 1974: 227). When such a process occurs, the results are well-documented.
The commodity fetish in algorithmic capitalism
We turn from labour to the starting point of Marx’s own analysis of capitalism, the commodity. As Marx noted, the fundamental nature of the commodity in capitalism is that it confronts the various different use-values embedded in objects, and the qualitative or particular labour which they embody, as commensurable exchange-values which represent instead a quantity of ‘labour in general’, or social labour. The ‘secret’ of the commodity – which Marx described in terms of a fetish – consists [s]imply in the fact that the commodity reflects the social characteristics of men’s own labour as objective characteristics of the products of labour themselves, as the socio-natural properties of these things. Hence it also reflects the social relation of the producers to the sum total of labour as a social relation between objects, a relation which exists apart from and outside the producers. Through this substitution, the products of labour become commodities, sensuous things which are at the same time suprasensible or social. (Marx 1976: 164–165)
The social relations which define the division of labour within a society come to be objectified in the relations between commodities in the form of their exchange-values, which, in the final analysis, reflect ‘the labour-time socially necessary to produce them’ (Marx 1976: 168).
According to Marx, this process of objectification has the further effect of rendering the human subject passive, of producing ‘a social formation in which the process of production has mastery over man, instead of the opposite’ (Marx 1976: 175): The value character of the products of labour becomes firmly established only when they act as magnitudes of value. These magnitudes vary continually, independently of the will, foreknowledge and actions of the exchangers. Their own movement within society has for them the form of a movement made by things, and these things, far from being under their control, in fact control them. (Marx 1976: 167–168)
This process – the rendering passive of the human subject and active of the object – is termed reification, and was further elaborated by Lukács (1967), who considered it the central structural problem of modern capitalism. 12
As Lukács noted: Just as the capitalist system continuously produces and reproduces itself economically on higher and higher levels, the structure of reification progressively sinks more deeply, more fatefully and more definitively into the consciousness of man. (Lukács 1967: 93)
We have seen that generative AI can be viewed as the appropriation by capital of human techne in the general sense. In this way, capital gains the capacity to apply the ‘dead labour’ embodied in the algorithm to the creation of new outputs or commodities, as opposed to the mere reproduction of pre-existing commodities which simple automation allows. Algorithmic capitalism is therefore quite clearly a further instance of the capitalist system reproducing itself on a higher economic level. If so, then in keeping with Lukács it also marks the deepening of the ‘structure of reification’ within human consciousness. It is the precise nature of this deepening which remains to be investigated.
By transforming human-created text, images and audio into data, and thence into a (reconstructed) system of relations which can be used to create new text, images and audio, generative AI in effect produces an ontological break with (and within) that initial system of relations. The nature of that break is well-specified by Baudrillard in the concept of the hyperreal: It is all of metaphysics that is lost. No more mirror of being and appearances, of the real and its concept. No more imaginary coextensivity [. . .] The real is produced from miniaturized cells, matrices and memory banks, models of control – and it can be reproduced an indefinite number of times from these [. . .] In fact, it is no longer really the real, because no imaginary envelops it anymore. It is a hyperreal, produced from a radiating synthesis of combinatory models in a hyperspace without atmosphere. (Baudrillard 1994: 2)
Baudrillard wrote in the early 1980s. While he could not have foreseen the specific technical form which generative AI would take (in terms of large language models, transformer architecture, etc.), he nonetheless discerned, in the methods and capabilities of the technology of his day, implications which are now seeing their full fruition with the advent of generative AI.
Any generative AI output is essentially produced through a ‘synthesis of combinatory models’: large language models such as GPT-3, for instance, work on a stochastic basis to predict the most likely next word or phrase, using billions of parameters to optimise the prediction (Neelakantan et al. 2022). Internally, the data underlying such models consist of texts broken down into units of meaning (tokens) and expressed, in a geometrical sense, as vectors within high-dimensional (i.e. hyper) space. Yet, these models are essentially ‘without atmosphere’: the data space on which they are built is disembedded from the very reality which first produced it and which it aims to describe. As such, a generative AI algorithm cannot understand any word as relating to a real concept in itself (Bender et al. 2021: 610); it can only relate a given token to the system of other tokens within the word embedding matrix in a purely probabilistic manner.
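The closed, self-referential character of this process can be illustrated with a minimal sketch of next-token prediction. This is a toy illustration only, not the implementation of any real model: the vocabulary, dimensionality, vectors and function names below are all invented for the example, and a real transformer would derive its context vector from the whole prompt via attention layers rather than take it as given.

```python
import math
import random

random.seed(0)

# Toy vocabulary: each token is mapped to a vector ("embedding") in a
# low-dimensional space; production models use tens of thousands of
# tokens and thousands of dimensions.
vocab = ["the", "worker", "machine", "value", "."]
dim = 4
embeddings = {tok: [random.gauss(0, 1) for _ in range(dim)] for tok in vocab}

def next_token_distribution(context_vector):
    """Score every token against the context, then normalise via softmax."""
    logits = [sum(c * e for c, e in zip(context_vector, embeddings[t]))
              for t in vocab]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return {t: x / total for t, x in zip(vocab, exps)}

# Note that the 'context' is itself built only from embeddings: the model
# relates tokens to other tokens, never to anything outside the system
# of signs.
context = embeddings["machine"]
dist = next_token_distribution(context)
assert abs(sum(dist.values()) - 1.0) < 1e-9  # a proper probability distribution
```

The point of the sketch is structural: every quantity in the loop – context, scores, probabilities – is derived from other tokens’ vectors, which is precisely the ‘uninterrupted circuit without reference’ discussed below.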
It is this modus operandi which prompted Bender et al. (2021) to describe large language models as ‘stochastic parrots’. Yet, generative AI does more than simply parrot; through its ‘radiating synthesis of combinatory models’ it is capable of creation. Frequently, such creations take on the quality of the bizarre or surreal, or present as fact that which is untrue. The common industry term for such outputs is ‘hallucinations’. Yet, the term ‘hallucination’ misconstrues the nature of the underlying process. A hallucination is the sensory perception of something that is not real. Generative AI has no sensory perception, and what it produces is, by definition, real output. Generative AI does not, therefore, hallucinate: it simulates, and the outputs it produces – qua Baudrillard – are simulacra.
To again quote Baudrillard, through generative AI the ‘whole system [of meaning] becomes weightless’: it is no longer anything but a giant simulacrum – not unreal, but a simulacrum, that is to say never exchanged for the real, but exchanged for itself, in an uninterrupted circuit without reference or circumference. (Baudrillard 1994: 5–6)
It has been widely observed that the outputs of generative AI create the appearance of a reality that is not, in fact, real. Generative AI will simulate not only news reports, but also the links to those reports (Moran 2023); it will simulate passages of Proust in French (Batuman 2023); it will simulate in a way that is convincing and coherent because, for generative AI, to be convincing and coherent are not qualities in themselves, but merely emergent properties of the texts or images within its corpus that are reproduced alongside their other qualities.
Baudrillard’s postmodern theorisation of the hyperreal was inspired by the reproduction and replication of electronic media: videos, cassettes, photocopies, faxes. Texts, images and audio, when transformed into digital copies, could be endlessly replicated and repeated through these media. However, with generative AI the next step is taken: what is simulated is not the original creation, but the act of creation itself, the techne which precedes the product. In order to achieve this stupendous feat, generative AI requires – essentially – the absorption within its data space of all dead labour embedded within prior media. Yet, the process of disembedding which this absorption (qua digital reconstruction) necessitates results in a creative capacity which can produce only simulacra: commodities which are ‘never exchanged for the real’, but only for themselves, i.e. within the disembedded system of relations represented by the data space of the model. As Baudrillard noted of the hyperreal, the content which such a process produces is no longer ‘really the real’, because ‘no imaginary envelops it anymore’. The system of meaning which language, and signs more broadly, represents, and which is (or once was) the reflection of an objective reality of relations between existing things, is exchanged instead for a stochastic simulation of that meaning, i.e. an artificially reconstructed, probabilistic simulacrum of that system of relations, in which exist only signifiers.
Through the expansion of generative AI and the instantaneous, practically costless simulacra it has enabled, one can foresee a world – not too far removed from our own – wherein such simulacra outnumber the products of the human mind, and the stochastic generation of text, image and audio thus forms the greater share of the total corpus. It is this impending outcome which forms the crux of the present analysis. Whereas at the material level the rendering passive of the human subject (qua the transformation of knowledge work to an act of supervision) appears at once the most substantial and (for capitalism) the most typical transformation we may expect from the emergence of generative AI, in reality such transformation is no new phenomenon, and merely completes (or perfects) the general process of the reification of labour within capitalism observed by Lukács and Braverman (1974: 229).
The second effect of generative AI lies beyond the extension of the alienation of labour within the process of production. Rather, it lies in an expansion of the commodity form within the field of consumption to encompass the reification of meaning. The advent of generative AI represents a definitive advance by capitalism within this terrain of social life, and its appropriation by (and within) the capitalist mode of production. This is a different order of process to, for instance, the simple encroachment of the capitalist ‘logic’ to ever expanding spheres of social activity (for instance, into government, health and social care, and academia). Rather, it is a semiotic effect. At root, generative AI subsumes meaning within the commodity relation. In the same way that the commodity fetish in industrial capitalism reified an abstract relation of things in place of the social relation between producers, the commodity fetish in algorithmic capitalism reifies a relation of abstracted signs, disembedded from reality, in place of the system of actual representation to which language previously referred. Where previously meaning could be understood as reflecting a social relation among people, and among people and things, it now will increasingly reflect the simulated, probabilistic relation among a closed system of simulacra, themselves without further reference, and enclosed within a privately-owned infrastructure.
Seen in this way, Baudrillard’s analysis seems prophetic. His emphasis in describing the hyperreal is precisely on the precession of simulacra: the simulation precedes the ‘real’, in the sense that, once arrived in the hyperreal, there is no ‘reality’ beyond the simulation. Algorithmic capitalism, in this sense, is the concrete manifestation of the totalising aspect of capitalism noted by Marx and Lukács within the concept of reification: since the advent of capitalism, the (reified) commodity relation produced by capital has preceded the ‘social characteristics’ of the labour which produces commodities; now, in algorithmic capitalism, the hyperreality created by capital precedes the social relations between actors which produced the initial object-sign relation. In short, reality is reified: no longer real or unreal, but hyperreal. To borrow once more from Baudrillard (1994: 1), ‘[t]he simulacrum is never what hides the truth – it is truth that hides the fact that there is none’.
If this seems a strong claim, consider what is already occurring. Within social science, large language model outputs have been proposed as a proxy for the study of (actual) human sub-populations (Argyle et al. 2023), a method dubbed ‘silicon sampling’. 13 For a period in 2023, the first result on Google image search for ‘tank man’ was an AI-generated image of a Chinese protester taking a selfie in front of a tank. 14 A recent experiment found over one-third of Mechanical Turk workers (i.e. those humans paid to act as the human ‘gold standard’ against which AI models are typically judged) were themselves using LLMs to complete the work (Veselovsky et al. 2023). In turn, a pre-release version of GPT-4 in 2023 was able (admittedly with some human assistance) to hire a human through a gig-economy platform to solve a CAPTCHA test on its behalf. 15
And this process of reification will continue to unfold, perhaps in ways none of us can foresee. Whereas until now the various generative AI algorithms have incorporated techne in separate textual, visual and audial capacities, these capacities now are being combined: GPT-4o, OpenAI’s ‘flagship’ model launched in May 2024, accepts any combination of text, audio, image and video as an input, and will generate any combination of text, audio and image as output. 16 The domain of these models will likely soon expand to encompass all media. 17
Conclusion
This relatively brief discussion of generative AI has attempted to set out some of the main implications of this technology from the perspective of a Marxist-inspired political economy. It has focused primarily on two aspects: effects on the social value of labour, and indeed on the nature of knowledge labour; and semiotic effects relating to the outputs of generative AI, and the relation of these to the process of reification inherent in capitalism.
The analysis drew, by analogy, from transformations to the social form which occurred in the first industrial revolution and which were noted by Marx. In keeping with a praxis-oriented scholarship, we might ask what can be done to resist the worst effects of algorithmic capitalism. If there is to be a more successful resistance than occurred in industrial capitalism, then it will come not from any self-denying ordinance against the development of the technology itself, but in learning – as Marx noted – ‘to distinguish between machinery and its employment by capital’, and therefore to transfer our critique ‘from the material instruments of production to the form of society which utilizes those instruments’ (Marx 1976: 554–555).
In the spirit of such a critique, it should be noted that the auguries for the utilisation of generative AI are not good. As noted, the wholesale (and legally dubious) seizure of extremely large amounts of data to train generative AI is an expropriation of the knowledge worker’s means of production – her techne – whose effects stand to be every bit as brutal as those of the first industrial revolution. Added to this, the fact that one of OpenAI’s cofounders in March 2024 sued the company for allegedly breaching its mission of building artificial intelligence systems ‘for the benefit of humanity’ (arguing it was instead working on ‘proprietary technology to maximise profits’) 18 is a further indication of things to come. It is impossible, within a capitalist system, for a new instrument of production not to be appropriated for the accumulation of capital.
Yet, of perhaps larger concern is the deeper sinking of the structure of reification precipitated by the emergence of generative AI, in which the commodity fetish is expanding into the constitution of socially experienced reality. Indeed, short of replacing workers’ brains with computers, it is difficult to think of any way for the capitalist mode of production to sink deeper into human consciousness. The only question must be why, as a sentient species, we would subject ourselves to such an outcome. And here, of course, capitalism itself offers the answer: because the expansion of capitalism into this latest sphere reflects nothing less than the working out of the inner contradictions of capitalism itself, in which the reification of labour, so far embedded in the mind of the worker, presents to us our enemy as our own salvation.
