Abstract
This article sets out to bring E. F. Schumacher’s social theory of technology into dialogue with recent advances in the field of generative artificial intelligence (AI). By generative AI, we refer to a new constellation of machine learning technologies that aim to simulate and, subsequently, automate human creativity, with a particular focus on OpenAI’s GPT-3 family (ChatGPT and DALL-E). We argue that Schumacher’s 1973 book, Small is Beautiful, though often overlooked in contemporary debates on machine learning and AI, offers a series of insights and concepts that are increasingly relevant for the development of a humanist politics under conditions of computation. With a particular focus on Schumacher’s account of ‘intermediate technology’, we suggest that his emphasis on the social role of human creativity, together with his identification of scale as a crucial concept for the critique of technology, provides a unique framework within which to (a) address the rise of what we call ‘pathologies of meaning’ and (b) consider alternatives to the gigantism of the FAANG companies (Facebook, Amazon, Apple, Netflix, Google) and Silicon Valley-style ideologies of digital transformation.
The decision to revisit Schumacher’s social theory is by no means an obvious one. Once a prominent critic of technology and a central figure in the environmental movement, his writings have been all but forgotten. Strangely enough, this is as true of his own field, social theory, as it is of digital media studies, where one is hard-pushed to find even the most cursory reflection on Schumacher’s often idiosyncratic yet philosophically rich critique of technologization. This neglect is, of course, not particularly surprising. We live in a technological milieu very different from the one in which Schumacher wrote Small is Beautiful. Computers, in the mid-twentieth century, were severely limited, and many of the technologies that we take for granted, from personal computing and the internet to our current obsession with artificial intelligence (AI), did not exist.
That said, Schumacher was certainly familiar with computers and aware of their potential to aid with generative activities such as planning (1975, p. 223). As an economist influenced by Keynes, Schumacher worked for 20 years (1950–1970) as the United Kingdom’s chief economic adviser to the National Coal Board (NCB) before becoming the president of the Soil Association, and, likely, his understanding of computers was heavily influenced by their early use within this context. The NCB’s first computer, for instance, was located at the coal mine of Chatterley Whitfield, where, in 1958, an IBM 650, which used vacuum tubes and punched cards, was installed to calculate wages and was nicknamed ‘Iron Jack’. Shortly after, in 1963, the NCB installed an IBM 1401, which used early versions of transistors and magnetic memory more suited to complex processing. These computers were heavy, their weight measured in tonnes, complicated to programme and very large in size.
They were not, in short, networked computers in our contemporary understanding of the term. While reasonably well-suited to some of the tasks required by large organizations, such as payroll, accounting and general calculation, they were particularly limited in their ability to extrapolate patterns from data, which explains Schumacher’s early concerns regarding their use in prediction:

[M]aybe it was useful to employ a computer for obtaining results which any intelligent person can reach with the help of a few calculations on the back of an envelope, because the modern world believes in computers and masses of facts, and it abhors simplicity. But it is always dangerous and normally self-defeating to try and cast out devils by Beelzebub, the prince of the devils. (Schumacher, 1975, p. 122; see also pp. 224, 233, 238, 249)
Indeed, Small is Beautiful is predicated on the notion that gigantism not only results in difficulties in steering complex institutions, economies and states but also leads to unintended global environmental and social consequences that flow from a scientific image of the world. This ‘scientific image’ is deeply entwined with rational means-end thinking, which, once elevated to the status of a hegemonic form of organization, threatens to devalue, hollow out and replace human values. Technology, in response, is called upon to demonstrate its benefits to society, not through its capacity to accelerate the economic extraction of value, but rather through what, with Marcuse, we might call the ‘materialization of values’ (2001, p. 57). To this end, a reset of the ideas prevalent in society is needed, including both a ‘profound reorientation of science and technology’ and a shift towards what Schumacher calls the principle of subsidiarity – or decentralization: one should not ‘assign to a greater and higher association what lesser and subordinate organisations can do’. Of course, getting the subsidiary level correct was key here, and his mental model appeared to be, strangely enough, the NCB, which had used subsidiarity as a key aspect of its organizational structure (1975, pp. 64, 245). This, he argued, would produce a more human-sized experience, as ‘loyalty can grow only from the smaller units to the larger (and higher) ones, not the other way round – and loyalty is an essential element in the health of any organisation’ (1975, p. 244). Or, as he further explained concerning the means-end-oriented rationality prevalent in industrial society:

[S]cientific or technological ‘solutions’ which poison the environment or degrade the social structure and man himself are of no benefit, no matter how brilliantly conceived or how great their superficial attraction.
Ever bigger machines, entailing ever bigger concentrations of economic power and exerting ever greater violence against the environment, do not represent progress: they are a denial of wisdom. (2011, p. 20)
In the first section, Technology with a Human Face, we sketch the contours of Schumacher’s humanist philosophy of technology, with a particular focus on the role ascribed to human creativity in the production of broader systems of meaning. It is here that we situate Schumacher’s critique within the broader tradition of social pathology and begin to argue that recent developments in AI have served to imbue his work with renewed critical utility. Section two, A Brief History of Machine Learning, introduces some of the fundamental ideas and concepts that have been integral to recent developments in AI, providing the reader with the necessary technical background for later discussions, as well as a sense of how scale is an integral component of current transformations. The third section, Pathologies of Meaning, brings these sections together to demonstrate the role that Schumacher can play in helping social theorists critically comprehend the depletion of broader systems of meaning in the context of generative AI. Finally, we conclude by briefly summarizing the article and spotlighting potential avenues for future research.
Technology with a human face
The idea that modern technology serves to ‘degrade’ both environmental and social structures is crucial to Schumacher’s analysis, and he was particularly concerned about the negative impact of mechanization on human sensibility and the life of the mind. Citing the Roman pontiff Pius XI’s observation that ‘from the factory dead matter goes out improved, whereas men there are corrupted and degraded’ (2011, p. 23), Small is Beautiful interrogates the social implications of the elimination of the ‘human factor’ within processes of production and the subsequent depletion of both human creativity and social meaning (2011, p. 57). An advantage of turning towards Schumacher’s thought is that Small is Beautiful is guided by an existential judgement, and it therefore challenges the reader to take a position on the unbalanced growth that technologies accelerate. Indeed, Schumacher highlights the fact that the modern industrial system does not treat people as if they mattered; rather, it tends to treat them as human resources. Crucial to Schumacher’s critique is the belief that modern technology strips the world of work bare of all qualitative dimensions that stand in the way of growth-led production based on scientific management and the output of standardized products. Beholden to the ‘economic calculus’, the industrialist is forced, he argues, ‘to eliminate the human factor’ – the idiosyncrasies and deficiencies in efficiency and exactness coterminous with the creative subject. This, however, gives rise to a new situation in which, rather than people working with technology, productive activity is deprived of all creativity (‘useful work with hands and head’) and reduced to a fragmented, ‘inhuman chore’. Thus, instead of supporting creative technical practice, modern technology is predicated on the submission of work to an ‘enormous effort at automation’ because machines ‘do not make [the same] mistakes [as] people do’ (2011, p. 57).
For Schumacher, this situation not only undermines the possibility of ‘work-enjoyment’ but also impacts broader ‘psychological structures’. ‘[N]ext to the family’, he writes, ‘it is work and the relationships established by work that are the true foundations of society’. Yet in the age of modern technology, human beings are confronted not with ‘excellent tools’ for the ‘good of man’s body and soul’ but with a ‘gigantic’ apparatus that increasingly appears beyond their ken and control (2011, pp. 23, 120, 124, 159).
Of particular interest in the context of our critique is how Schumacher situates human creativity at the centre of his technical politics. Describing this ‘requirement’ as ‘perhaps the most important of all’ (2011, p. 23), Small is Beautiful can be read as a sustained reflection on the social role of human creativity in the production of broader systems of meaning (in particular, those related to work, politics and education), as well as an account of the technological fragmentation and emptying out of human subjectivity through the deprivation of creative, joyful and other useful forms of productive activity. Reminiscent of both Horkheimer and Adorno’s Dialectic of Enlightenment (1997) and Heidegger’s critique of ‘enframing’ (1977), this dehumanized reality is rooted, for Schumacher, in the rise of a reifying, technical-scientific metaphysics or instrumental rationality, whereby the domination of nature rebounds back upon society in the domination of human beings (2011, p. 3, see also pp. 60–80). Yet rather than seek a ‘free relationship’ to technology, à la Heidegger, or offer what Feenberg refers to as the ‘bare emphasis on reflection’ and its ‘mutilated capacities’ put forward by Horkheimer and Adorno (2013, p. 610), Schumacher places his hopes in the ‘humanization of work’ and the related creation of a decentralized technological apparatus that would remain responsive both to communities and the natural limits of their environmental ecologies, what might today be called people-centred development (2011, pp. 3, 120).
Predicated on the distinction between ‘human’ and ‘inhuman’ technologies (2011, pp. 120–131), Schumacher’s philosophy of technology can thus be read as an attempt to map the technical-scientific unfolding of modernity through an engagement with its social pathologies (see Harris, 2022; Honneth, 2009). Schumacher, of course, does not use this terminology, imported as it is from the left-Hegelian tradition of critical theory. However, it is arguably the proximity of his thought to this approach to social theory that lends Schumacher’s account of the social impact of modern technology on human creativity both its contemporary import and normative force. Briefly put, the term social pathology is used to refer to ‘socially produced obstacle[s]’ to individual or cooperative self-realization (Harris, 2019, p. 46). Often presented as a normatively weightier optic of critique than that offered by liberal theory, the concept of social pathology incorporates yet advances beyond the latter’s oftentimes restrictive focus on questions of legitimacy and justice by disclosing the existence of ‘social circumstances [that] violate those conditions which constitute a necessary presupposition for a good life amongst us’ (Honneth, 2000, p. 122). Integral here, however, is that these ‘circumstances’ refer not to particular instances in the misuse of power but rather to the underlying logics and associated meanings that give shape to particular social constellations, thereby enabling social theorists to raise fundamental questions about our very ‘form of life’ (see Jaeggi, 2018). Indeed, it is precisely in this sense of social pathology that Schumacher interrogates the question of technology, suggesting that ‘[i]f that which has been shaped by technology, and continues to be so shaped, looks sick, it might be wise to have a look at technology itself’ and ‘consider whether it is possible to have something better – a technology with a human face’ (2011, p. 120).
Here we want to use this notion of ‘technology with a human face’ to consider a world in which it is no longer simply technology, nor even computers in general, that is transforming or potentially challenging the notion of human creativity. With the rise of AI, a new factor in the transformation of work and sociality has emerged that ironically foregrounds technology with a human face, albeit not in the way that Schumacher envisaged. These systems are crucially different from the computers with which Schumacher himself would have been familiar in that they are large-scale, distributed, networked systems of systems, which also incorporate the capacity for machine learning – a creative potential that has been harnessed as a generative capacity for new AI systems, such as LLMs (large language models). Yet while a current idea in AI research is that self-learning by a machine learning system will engender a radical rise in productivity and wealth, there has also been a great deal of debate over the threats posed to society by these new and sometimes very capable systems, concerns that have often been expressed in terms of their capacity for differentiation, classification, filtering and, by extension, judgement and decision-making itself (Berry, 2023b). This is, of course, a key area where Schumacher was concerned that the delegation of human decision-making and planning to machines could have serious consequences, both for the possibility of a good life and for creating an out-of-control economy and society. But more than this, Schumacher’s critique suggests that we deeply misunderstand the social meaning of human creativity when we naively grasp our productive activities as something that could simply be automated away, rather than as part of a much broader set of psychosocial phenomena that ripple outwards into our practices, beliefs and behaviours (Porritt in Schumacher, 2011, p. xi).
In short, these activities depend, for Schumacher, on the maintenance of ‘infinitely precious and highly vulnerable’ ‘psychological structures’ of the kind that underlie cooperation, social cohesion, mutual and self-respect. Yet, once damaged, ‘all this and much else’, he argues, ‘disintegrates and disappears’ (2011, pp. 57, 159). This ‘much else’ Schumacher grasps with the notion of ‘the human substance’, a suggestive albeit arguably essentialist concept that we wish to translate into a sociological register as a shared discursive ‘system of meaning’.
In this article, we propose a reading of the notion of ‘human substance’ as a symbolic reservoir of values, beliefs and norms that contribute to social cohesion, cooperation and mutual understanding. This now de-essentialized concept encourages sociological inquiry into the construction of identities and the formation of social bonds within the context of accelerating technical practices. It invites an examination of the ways in which individuals derive meaning from their interactions within a community where, increasingly, these meanings, embodied in cultural symbols and language, are mediated through infrastructural technical systems that have become remarkably proficient at working with human language. Understanding ‘human substance’ through this sociological lens enables the exploration of how technological advancements, especially in the realm of AI, may influence or alter these shared meanings and practices, potentially posing challenges to assumptions about agency, empathy and community. For example, the assumption of a shared experience of understanding in a conversation is deeply undermined when one party is a computer system that has no understanding of the meaning of the discussion, whilst presenting a remarkably proficient simulacrum of intersubjective dialogue. This is an experience many have now shared using ChatGPT, but it was foreshadowed by Joseph Weizenbaum’s ELIZA chatbot in 1966 (see Berry, 2023b; Weizenbaum, 1976).
Indeed, we accept that implicit in Schumacher’s notion of the ‘human substance’ is a partly spiritualized understanding of the subject that is indebted, at least in part, to more romantic critiques of technology and industrialization. That said, while Schumacher is certainly guilty at times of employing an outdated or even religious vocabulary, he does not offer a hypostatized image of the human or of humanity, but rather endorses a normative humanism committed to a critique of the ‘inhuman’ failings of society. In other words, Schumacher’s idea of the ‘human substance’ arguably refers less to a distinct metaphysical substrate than to the complex social dynamics that enable human beings to situate themselves and find meaning in the world. Importantly, it is precisely these systems of meaning that we claim are only further distorted when we rely on the mediation of computation in the form of networked systems of systems, especially in terms of AI and machine learning.
A brief history of machine learning
Arthur Samuel is a key thinker with whom to begin to understand these issues. Samuel is claimed by many who work in the area to have defined machine learning in 1959 as a ‘field of study that gives computers the ability to learn without being explicitly programmed’ (see e.g. Kariappa et al., n.d.). We say claimed because, although widely cited in the literature, the phrase does not appear in Samuel’s 1959 work, though he does coin the term ‘machine learning’ and explain that ‘programming computers to learn from experience should eventually eliminate the need for much of this detailed programming effort’ (p. 211). Nonetheless, this is an accepted (and repeated) origin point in the field and is often used to show how machine learning is particularly geared towards the self-learning capacity of a machine and how it differs from symbolic AI, that is, the application of computation to symbolic tasks usually undertaken through human cognition. Indeed, Samuel wrote one of the first programmes considered key to machine learning: a checkers game, programmed in 1959 on an IBM 701, which still relied on vacuum tubes. It is fascinating to note that Samuel was not only programming his first machine-learning algorithms at the same time as Schumacher was working at the NCB, but that both were working with similar machines – Samuel on the aforementioned IBM 701, and Schumacher with the IBM 650 (largely based on the same computer architecture) and, later, an IBM 1401. Of course, their intellectual paths were very different. Samuel continued to develop learning machines throughout his life and inspired the field of machine learning to develop notions of machine creativity, whilst Schumacher became critical of the threat of calculation to human judgement and creativity.
These issues become key when considered in light of the social theoretical implications of machine learning, whereby instrumental rationality becomes a key structuring framework for reorganizing education, labour and everyday life and, as already hinted at in earlier critiques of administered life, imposes ‘on the senses of human beings’ the imprint of a now cybernetic regime (Adorno and Horkheimer, 2002, p. 104; Berry, 2014).
Until the late 1970s, machine learning was part of AI’s evolution. Soon after, however, it branched off to develop on its own, owing to its increasing relevance to practical computing problems, specifically those created by the rise of so-called Big Data. By 1997, the computer scientist Tom Mitchell had further narrowed this early definition, describing machine learning as ‘a computer program [that] is said to learn from experience E with respect to some class of tasks’ (Mitchell, 1997). Thus understood, machine learning became geared towards the self-learning capacity of a machine to undertake a particular activity or task. We should note here that this notion of ‘learning’ is very specific and technical in its deployment. It relates to the ability to carry out highly scoped skills or tasks, not to wider humanistic connotations of learning as understanding or interpretation. Indeed, Ethem Alpaydin describes this as the ability to ‘extract automatically the algorithm for [a] task…[as] there are many applications for which we do not have an algorithm but have lots of data’ (2016, p. 17). This is a very mechanical and formal notion of learning. However, with developments in computing power, impressive results became available, and the approach has since become hegemonic. Human learning, too, in light of machine learning’s success, has been reconfigured into an epistemological regime of data-based learning and pattern detection, augmented, where it is not replaced, by machine calculation.
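Mitchell’s schema of a task, experience and a performance measure can be made concrete with a minimal sketch of our own (not drawn from any of the cited sources): a few lines of code that ‘learn’ to predict a value from example data, without the prediction rule being explicitly programmed.

```python
# A minimal illustration of Mitchell's schema: the program "learns" a task
# (predict y from x) from experience (observed example pairs), with
# performance measured as mean squared error. Ordinary least squares in
# pure Python; the fitted rule is extracted from the data, not hand-coded.

def fit_line(pairs):
    """Learn slope and intercept from (x, y) example pairs."""
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    var = sum((x - mean_x) ** 2 for x, _ in pairs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def mse(pairs, slope, intercept):
    """Performance measure: mean squared prediction error."""
    return sum((slope * x + intercept - y) ** 2 for x, y in pairs) / len(pairs)

# "Experience": noisy observations of an underlying relation y = 2x + 1
data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
slope, intercept = fit_line(data)
```

The point of the sketch is only that the rule relating x to y is recovered from the examples rather than written by the programmer, which is the narrow, technical sense of ‘learning’ discussed above.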
Indeed, it is precisely this focus on domain-specific problems that is said to delimit machine learning in relation to the wider knowledge problems associated with general AI. The successful turn to machine learning has also been driven by the limited capacities within disciplines to cope with an ever-growing mountain of digital data, combined with a political economy that sees huge economic potential in mining this data for insights and profit. As Burrell explains, machine learning algorithms are used as powerful generalizers and predictors. Since the accuracy of these algorithms is known to improve with greater quantities of data to train on, the growing availability of such data in recent years has brought renewed interest to these algorithms. (Burrell, 2016, p. 5)
It is with this background in mind that we now want to introduce the question of scale. As already discussed, the problem of gigantism underwrites Schumacher’s analysis, in particular his call for an ‘intermediate technology’. In machine learning, however, a turn from small-scale experiments to gigantic fields of computational power has been required to transform its underlying ideas into highly capable learning machines that can be turned in profitable directions. This has meant combining the predictive power of these new machines with the communicational systems of the twenty-first century, particularly social media. To give a sense of developments in machine learning and of how the social was crucial for major new breakthroughs: Google’s X Lab created an AI system in 2011 called Google Brain. By 2012, it had become famously adept at image processing, particularly in its ability to identify cats in pictures from the internet (Clark, 2012). Similarly, in 2014, Facebook’s research team developed DeepFace, a deep learning facial recognition system that mobilized a nine-layer neural network trained on four million images of Facebook users. This AI was claimed to be able to spot human faces in images with the same accuracy as humans do (which they disconcertingly ‘approximated’ to 97.53 per cent) (Simonite, 2014). In 2016, the AlphaGo programme became the first AI to beat a professional Go player. Go is one of the oldest and hardest abstract strategy games and was previously thought near-impossible to teach to a computer. AlphaGo’s mastery of Go was so significant because of the ‘near-infinite number of board positions available and the intuition that top human players rely upon to pick between them. Hassabis described Go as “the most elegant game that humans have ever invented,” with “simple rules [that] give rise to endless complexity”’ (Borowiec, 2016).
Although these examples may seem trivial or game-based, these systems have a broad range of uses: pattern recognition, so that machine learning can be used for facial or optical character recognition; time series prediction, so that it can be used to make forecasts; signal processing, so that it can be trained to process and appropriately filter an audio signal; control, so that it can be used to manage the steering decisions of physical vehicles; and, lastly, anomaly detection, so that a machine can learn to recognize patterns and be trained to raise an alert when something is anomalous. These basic functions are incorporated into foundational models that enable the connection and reconnection of dispersed data into determined outcomes, what Chun has described as ‘correlating ideology’ (2021). For example, feeding these system outputs back into social media and personal computing devices can create a communicational system of misinformation at scale, that is, across millions, if not billions, of users, to engage in persuasive campaigns, propaganda and even intimate relations designed to manipulate individuals, groups and even large sections of a population (Berry, 2023a, p. 2). As Lash notes, in the communication order,

power is not just in the flows: it is in the emergent non-linear socio-technical systems that channel, block and connect the flows. Hence, literally, power through control. Cybernetic power works through command, control, communications and intelligence. Here intelligence scans the system’s borders. It processes the rather amorphous stuff out there, the already somewhat patterned noise out there, into information. (Lash, 2007, p. 68)
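The last of the basic functions enumerated above, anomaly detection, can be illustrated with a deliberately simplified sketch of our own (not a description of any particular deployed system): a model ‘learns’ what counts as normal from past readings and raises an alert when a new reading falls outside that learned range.

```python
# Minimal anomaly detection: learn the mean and spread of "normal"
# readings, then flag any new reading that lies more than k standard
# deviations from the learned mean.
import statistics

def train(normal_readings):
    """Learn a simple model of normality: mean and standard deviation."""
    return statistics.mean(normal_readings), statistics.stdev(normal_readings)

def is_anomalous(value, model, k=3.0):
    """Alert when a value is more than k standard deviations from the mean."""
    mean, stdev = model
    return abs(value - mean) > k * stdev

# Train on a batch of unremarkable sensor readings
model = train([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 9.7, 10.1])
```

Even this toy example shows the pattern the text describes: the criterion of normality is extracted from data rather than specified in advance, which is precisely what makes such systems both powerful and difficult to contest.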
These newer machine learning systems generate models that are internally so complex that they are extremely difficult to critique or contest in their operation, and with that comes power without democratic accountability. Indeed, we hear, perhaps too much, about ‘opening the black box’ of computing, as if by merely peering inside the innards of a computational system we will thereby understand it. Unfortunately, this is not always the case, especially with machine learning systems. By way of example, OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) has over 175 billion machine learning parameters and is therefore extremely difficult to ‘explain’. GPT-3 works by predicting the next word in a sentence. It does this by looking at the previous text and seeing which words are most likely to follow. Repeating this procedure enables it to create extremely surprising textual outputs, some of which appear comparable to the writing capacity of humans. However, the actual operation of these systems is very difficult to pin down. Explaining how such a system does what it does is difficult, and this leaves the system in obscurity and the decisions that it makes all the more problematic – this is known in AI research as the problem of explainability (see Berry, 2021, 2023a, p. 6). These problems are not just about normative outcomes but also about the reliability of a system, that is, how one can optimize it, control it and, indeed, ensure that it is doing what one wants. Thinking with Schumacher, these vast systems of machine learning thus offer a real challenge to the question of ‘intermediate technology’. Not only do their operations rely on vast macrotechnological infrastructures and energy-intensive processes, but they also add new layers of complexity to what Schumacher already identified as the ‘inhuman’ dimensions of modern technology.
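The mechanism just described, predicting the next word from the preceding text, can be caricatured at a vastly smaller scale with a word-bigram model. The sketch below is our own illustration; GPT-3 itself uses transformer attention over billions of parameters and sub-word tokens, but the underlying objective, next-token prediction, is the same.

```python
# A toy next-word predictor: count which word most often follows each word
# in a training corpus, then "generate" by choosing the most frequent
# successor. GPT-3's mechanism is enormously more sophisticated, but the
# training objective it optimizes is likewise next-token prediction.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most likely next word, or None if the word is unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "small is beautiful and small is human and small is beautiful"
model = train_bigrams(corpus)
```

Here, `predict_next(model, "small")` returns "is", simply because that pairing is most frequent in the training text: there is no reference to meaning at any point, only to observed statistical regularities, which is why the scale of the training data matters so much to the quality of the output.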
Pathologies of meaning
To further unpack Schumacher’s potential contribution to a critique of these new AI systems, we now return to the question of human creativity. Indeed, while questions of scale and creativity are closely interwoven throughout Schumacher’s critique of modern technology, automation is located as a primary site of deprivation. Yet whereas Schumacher’s critique of automation was focused primarily on the workplace or factory, it is today increasingly directed at our most intimate noetic capacities, including imagination, sensibility and the production of meaning. With names such as ChatGPT, Mistral, Llama, DALL-E 2, Midjourney and SingSong, we are increasingly confronted with so-called generative systems that aim to simulate and, subsequently, automate aspects of human creativity, from the writing of text to the creation of art and even the composition of music. In part because of their ability to produce automated cultural creations, these systems have also captured a great deal of attention within the wider public sphere. For instance, the possibility of the public sphere being flooded with disinformation or machine-generated fake conversations to steer voting or public feeling on a particular issue is a real and pressing problem. Here, however, we are specifically concerned with the automation of creative labour and its potential impact on broader ‘systems of meaning’. Owing to limitations of space, we propose to offer only a speculative encounter with the ChatGPT and DALL-E systems. Although they remain early versions or betas, they have been chosen because they are structured to be infrastructural technologies for cultural production.
The automated production of visual images is perhaps the most striking example of cultural production within contemporary AI systems. DALL-E (a name formed by combining WALL-E and Salvador Dalí) and DALL-E 2 (the second version of the software) are transformer models developed by OpenAI to generate digital images from natural language descriptions. DALL-E was first revealed by OpenAI in a blog post in January 2021 and is a 12-billion-parameter version of GPT-3 (see Pereira, 2021). Its successor, DALL-E 2, was released in 2022 and generates more realistic images at four times the resolution of the original system. Roughly put, DALL-E works ‘by swapping text for pixels’ in the GPT-3 platform with the aid of a ‘smaller version of GPT-3 that has also been trained on text-image pairs taken from the internet’ (Heaven, 2021). DALL-E, in short, is able to generate synthetic images from the text captions submitted to it. However, given that it works in a similar way to GPT-3, it suffers from the same problems: it can memorize the images it has been fed, and it has no understanding of what is being generated, which can often be ‘hallucinated’ or ‘confabulated’. Its outputs are instead predicated on a mechanical process of combination and synthesis. In November 2022, ChatGPT was launched, providing GPT-3 with a human face: a chatbot interface for interacting with the technology. The ability to converse with the technology has meant that average, everyday users have been able to use and experiment with these generative AI systems in remarkable ways, which goes a long way towards explaining ChatGPT’s growth from almost zero users in November 2022 to over 100 million users by May 2023. That said, GPT-3’s source code remains closed, and it is important to remember that OpenAI is a private company committed to monetizing its technology through a kind of cultural-production-as-a-service.
This brief reflection on ChatGPT and DALL-E 2 is not intended to be exhaustive but rather to gesture towards the ways in which current AI research signals the emergence of a new and complex economic and communications system. Indeed, many new companies are now appearing in this area of technology, and the political economic structure of the AI industry resembles that of the computing services industry. Unlike the latter, however, we are only just starting to see its outlines and the ways in which these companies will make a profit. Similarly, we are only just beginning to get to grips with understanding these systems, and it is likely that their underlying source code will remain hidden, thus leaving us with the task of probing the surface of these systems to reveal deeper structural formations in the infrastructure. Far from being small or beautiful, they are behemoths of a gigantic mode of computation. Yet while we are a long way from having a codified method or set of methods for engaging with these systems, we are nonetheless already aware of some of the ways in which they contain various problematic aspects, such as particular biases towards gender, race or class (see e.g. McQuillan, 2022). But what is also deeply concerning is that these new generative technologies offer huge potential for creative automation, not just in terms of writing and text, but also in terms of creative work more generally. Indeed, because of the flexibility of their implementation and range of application, there are already concerns that generative AIs such as ChatGPT will serve to restructure work itself such that human creativity will be removed from areas of productive activity that had previously been thought impossible to computerize.
Not only, as David Golumbia suggests, are these jobs ‘generally among the kind of work that people enjoy doing and, presuming they are properly compensated, derive significant satisfaction from doing’, but as he also warns, generative AI itself is arguably ‘built on very dark and destructive ideas about what human beings, creativity, and meaning are’ (Golumbia, 2022).
To begin to broach this question, it is helpful to turn to one of the decisive texts in this discussion, Emily Bender et al.’s 2021 paper on ‘stochastic parrots’. Beginning from the position that meaning is a contextually located, ‘jointly constructed’ phenomenon, involving complex forms of intersubjective experience and language-based competencies, the authors highlight the extent to which the apparently ‘creative’ outputs of LLMs such as GPT-3 are, in fact, foundationally meaningless. Indeed, ‘[t]ext generated by an LM’, they write, ‘is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind’ (2021, p. 616). Instead, ‘an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot’ (2021, p. 617). While the focus of their paper is on language, the same reading can be applied to the generative production of art, music or, indeed, any other purportedly ‘creative’ output. Of course, this does not mean that we do not find meaning in these generative outputs. Here we might point to the German artist Boris Eldagsen’s 2023 submission to the Sony World Photography Awards. Unbeknown to the judges, Eldagsen’s depiction of ‘two women from different generations in black and white’ was created using AI. On winning the award, however, Eldagsen refused to accept it, claiming that he ‘“applied as a cheeky monkey” to find out if competitions would be prepared for AI images to enter. “They are not”’ (Grierson, 2023). While important, the point that we want to make is not that the judges were somehow deceived by the particular photo-like qualities of the AI-generated image but rather that they must have read meaningful content into the image. Yet, as Bender et al. suggest: ‘if one side of the communication does not have meaning, then the comprehension of the implicit meaning is an illusion arising from our singular human understanding’ (2021, p. 616).
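The mechanism Bender et al. describe can be made concrete with a deliberately crude toy. The following sketch (our own illustration; the corpus and all names are invented for the example, and a bigram model is vastly simpler than any LLM) stitches together word sequences purely from observed co-occurrence statistics, with no model of meaning, intent or the world:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, every continuation observed in the training text."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Haphazardly stitch a sequence: each word is a probabilistic guess
    based only on what followed the previous word in the training data."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = ("the parrot repeats the forms it has observed "
          "the parrot has no model of the world")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Whatever fluent-looking string this emits, every adjacent word pair was copied from the training corpus; any ‘meaning’ a reader finds in the output is supplied entirely by the reader, which is the point of the stochastic-parrot argument.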
In other words, LLMs such as GPT-3 are ‘bullshit generators’, their success reliant upon making ‘a sufficiently good guess to pass [our] sense making filter’ (McQuillan, 2023). The subsequent ‘illusions’ of what we might term ‘synthetic meaning construction’ may indeed seem fairly innocuous when addressed from the standpoint of particular examples. But, as mentioned above, generative systems such as ChatGPT and DALL-E are structured to become infrastructural technologies for the cultural production of synthetic media. It is on this point that we want to introduce Schumacher’s critique of ‘the human substance’.
Described as a form of ‘natural capital’, Schumacher’s notion of ‘the human substance’ can be understood to refer simultaneously to our sense of self, our creative capacities and the rich semiotic order within which we situate ourselves in the world. It can be understood, in short, as socially generated systems of meaning, the depletion of which culminates in a decline of what Schumacher terms ‘free being’. For instance, human beings are ‘destroyed’, he argues, ‘by the inner conviction of uselessness’, yet ‘modern technology is most successful in reducing or even eliminating […] skilful, productive work of human hands’ and thus creativity (2011, pp. 159, 122). On this point, Schumacher’s concerns broadly resonate, firstly, with the left-Hegelian idea that we objectify our creative capacities in our productive activities and that therefore work is, or at least should be, an autonomy-enhancing activity, and secondly with Bernard Stiegler’s account of ‘proletarianization’ and ‘spiritual misery’.
Put briefly, proletarianization in the context of the factory refers to the exteriorization of thought and the grammatization of gesture (i.e. of memory) in machines, a process through which workers’ knowledge (savoir-faire, i.e. know-how or skill) is no longer circulated, transmitted, and transformed among communities but is rather fragmented and rearticulated within and across the mechanization process (2010). Becoming little more than an appendage of the machine, the worker is thus disindividuated, that is, they no longer form part of an individual and collective process of transindividuation, which both engenders ‘new knowledge(s)’ and constitutes ‘the history of what Husserl called a “transcendental we”’ (2012, p. 129), that is, shared horizons of knowledge, action and meaning. For Stiegler, the process of proletarianization later extends to the loss of savoir-vivre (knowledge of how to live) through the culture industries and the loss of savoirs théoriques (theoretical knowledge) through the automation of cognition within computational technologies (2010, 2012, 2020). What we want to suggest is that developments in generative AI compound and extend this process, signalling a further stage: the proletarianization of the creative production of meaning.
Indeed, multiple so-called ‘creative jobs’ (e.g. artists, musicians, journalists, designers and writers, inter alia) are each in their own way threatened by generative AI systems. Many of these roles are, however, integral to the wider production of culture, offering shared references, experiences, aesthetic sensibilities and understandings that both emerge out of and respond to the particular historical contexts of their development. Dada and Surrealism, for instance, emerged in response to the ‘senselessness of World War 1’ and the subsequent realization that ‘there could be no return to traditional forms of art or traditional relations between artists and society’ (Schecter, 2007, p. 148). Of course, such concerns may seem somewhat peripheral, but taken as a whole creative labour is arguably essential not only to maintaining but also to enabling historical transformations in the social. Moreover, as Schumacher was well aware, the ability to undertake creative labour as one’s primary source of income is a rare pleasure, and generative AIs risk radically exacerbating this issue (2011). Bluntly put, it is far cheaper to issue a prompt to a computer than to pay a precariously employed graphic designer to create your business logo. But more than this, generative AIs are also highly parasitic on past cultural creativity. The fact that DALL-E, for instance, is able to produce a ‘photorealist image of Adorno in the style of Picasso’ is because it has ingested and recombined unimaginably vast amounts of pre-existing data. This has led Ted Chiang to memorably describe ChatGPT as a ‘blurry JPEG of the web’ (Chiang, 2023). With creators neither acknowledged nor remunerated for their contributions, generative AI systems have been described as automated machines for plagiarism (Christian, 2022).
Yet while an assessment of their future direction remains, at least in part, a matter of speculation, we here want to foreground the concerns raised by digital artist Annie Dorsen, who succinctly registers the wider threats to creativity and culture posed by the new political economy of generative AI:

These tools represent the complete corporate capture of the imagination, that most private and unpredictable part of the human mind. Professional artists aren’t a cause for worry. They’ll likely soon lose interest in a tool that makes all the important decisions for them. The concern is for everyone else. When tinkerers and hobbyists, doodlers and scribblers – not to mention kids just starting to perceive and explore the world – have this kind of instant gratification at their disposal, their curiosity is highjacked and extracted. For all the surrealism of these tools’ outputs, there’s a banal uniformity to the results. When people’s imaginative energy is replaced by the drop-down menu “creativity” of big tech platforms, on a mass scale, we are facing a particularly dire form of immiseration. (Dorsen, 2022)

Against this immiseration, we might recall Schumacher’s account of the threefold function of work:

to give [humans] a chance to utilise and develop [their] faculties; to enable [them] to overcome [their] egocentredness by joining with other people in a common task; and to bring forth the goods and services needed for a becoming existence. (2011, p. 39)
The stakes of the failure to realize this new system of thought are powerfully drawn out by Schumacher when he argues that the depletion of ‘human substance’ together with the living environment haunt the modern industrial system as its repressed content: ‘the modern industrial system, with all its intellectual sophistication, consumes the very basis on which it has been erected’ (2011, pp. 8, 216). Predicated on the negative self-perpetuating dynamics of capitalist growth and environmental extraction, the modern industrial system thus threatens to undermine its very conditions of possibility – it is ‘sick’ if not suicidal (2011, pp. 2–10). But where one should look for what we might term ‘intermediate artificial intelligence’ is currently uncertain, although open-source approaches, which render the underlying system explainable and therefore open to critique, have been suggested (Berry, 2023a). To date, ‘risk’ has been the key category driving debates, especially in the European Union. It is true that these risks are also often presented as ‘existential’, but not in the humanistic sense argued for here. Rather, this discourse often combines an almost ecstatic celebration of technical enhancement with a deeply apocalyptic fascination with the ‘existential risks’ that are said to be posed by the arrival, at some unknown time in the future, of ‘artificial general intelligence’. This discourse evinces, in short, a distinctly anti-humanist, often nihilistic and, at times, deeply fascistic worldview (see e.g. Golumbia, 2019; Hui, 2017). In response, it seems to us that Schumacher’s emphasis on scale and creativity might be a more fruitful and human-scale way of mitigating some of the identified risks of generative, machine learning technologies through, for example, open-source methods.
Conclusion
In this article, we have attempted to enquire into the potential relevance of revisiting Schumacher’s critique of technology in light of recent transformations in the field of generative AI. Of particular interest for us has been his human-centred approach to technological development, a project through which he evinces a deep care and sensitivity towards the question of what it means to live, work and think within the framework of a technological society. Work, for instance, is never merely grasped by Schumacher as a means to make a living but is instead understood as deeply existential, the psychological impacts of which spread far beyond the workplace or factory. As such, it is not in the name of profit but in the hope that it might provide a foundation for a truer, kinder and less destructive society that his notion of intermediate technology seeks to ‘reintegrate[] the human being, with [their] skilful hands and creative brain’ (2011, p. 131). As noted earlier, there is, however, an undeniably romantic element to Schumacher’s analysis. Perhaps due in part to his frequent use of religious language, his ideas, at times, sit in an ambiguous position between critical social theory and critical moralism. That said, his hope that an intermediate technology might break with the gigantism and estrangement of modernity neither marks a nostalgic return to craft or other earlier forms of technical practice nor does it require that we fall behind the ‘vast accumulations’ and ‘splendid’ techniques of scientific knowledge. Rather, it requires that we become aware that this ‘[t]ruthful knowledge […] does not commit us to a technology of gigantism, supersonic speed, violence, and the destruction of human-work enjoyment’ (p. 124). In place of ‘mass production’, Schumacher calls for ‘production by the masses’, a new project based on the development of affordable, democratic, explainable, environmentally sustainable and creativity-enhancing technologies (2011, p. 125).
These seem to us to be fruitful ways in which Schumacher can be repurposed to help us think through the current debates and threats posed by new generative AI systems. As we have argued in this article, Schumacher’s identification of scale as a crucial concept to deploy in critiquing technology and its organization and functioning is a powerful way to consider alternatives to the gigantisms of the FAANG (Facebook, Amazon, Apple, Netflix, Google) and Silicon Valley-style ideologies of digital transformation. Current instantiations of machine learning systems require hundreds of millions, if not billions, of dollars, together with huge amounts of energy, to create opaque foundation model systems. The current common-sense notion is that bigger is better in AI and that the complexities and distortions that this approach brings might be mitigated by transplanting a ‘human face’ onto these systems via a chatbot interface. We are not convinced of this, and we caution that these largely unregulated and experimental systems are being deployed without consideration of their human impact or environmental toxicity. Instead, we would prefer to see AI return to a ‘small is beautiful’ framework of development, whereby the key components might be developed through open-source decentralization. Networks of smaller, more human-scaled elements might create the conditions for distributed social councils, formed locally and serving the ends of prudentia AI.
Acknowledgements
The authors would like to thank the anonymous reviewer, all participants at the Oxford Brookes conference on Schumacher, and Neal Harris and Lucy Ford for their insightful comments and feedback on earlier drafts of this article.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
