Abstract

The 8th of March 2023 was a special day in the history of corporate comeuppance. It was International Women’s Day (#IWD) and so digitally active organisations seemed particularly keen to signal their status as progressive, welcoming, and healthy spaces. The patriarchy might not yet have been smashed but it seemed that cracks were beginning to appear. #IWD celebrated this apparent progress on a post-by-post basis. That many of the resulting platitudes were vacuous, exaggerated, and perhaps even deflective might not have mattered.
Mercifully, @PayGapApp introduced some freely available facts into these self-congratulatory stories. The Twitter account’s operating procedure was described in a pinned message from two days prior. Whenever an organisation posted about International Women’s Day, their message would be juxtaposed with data about its gender pay gap, as well as with information about how this gap had evolved (on which see https://gender-pay-gap.service.gov.uk/). Taking only the most recent example (at the time of writing) as a case in point, North Tees and Hartlepool NHS Foundation Trust mentioned the ‘inspirational women at our Trust’ and the bot added the following: ‘In this organisation, women’s median hourly pay is 23.4% lower than men’s. The pay gap is 1.3 percentage points smaller than the previous year’. Call-out culture is at its most satisfying when words and deeds are so patently misaligned.
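The protocol just described is simple enough to sketch. What follows is a minimal, hypothetical Python rendering of the juxtaposition logic; the function names, the data structure, and the trigger condition are illustrative assumptions, not the account’s actual code, which would have run against Twitter’s API and the UK government’s gender pay gap dataset:

```python
# Hypothetical sketch of @PayGapApp's juxtaposition protocol.
# All names and structures here are invented for illustration;
# the real bot drew on gender-pay-gap.service.gov.uk data.
from typing import Optional


def format_reply(org: dict) -> str:
    """Compose a factual reply from an organisation's pay gap record."""
    gap = org["median_gap_pct"]     # women's median hourly pay relative to men's
    change = org["change_pts"]      # year-on-year change in percentage points
    direction = "smaller" if change < 0 else "larger"
    return (
        f"In this organisation, women's median hourly pay is "
        f"{abs(gap):.1f}% {'lower' if gap > 0 else 'higher'} than men's. "
        f"The pay gap is {abs(change):.1f} percentage points {direction} "
        f"than the previous year."
    )


def maybe_reply(tweet_text: str, org: dict) -> Optional[str]:
    """Reply only when an organisation posts about International Women's Day."""
    if "#IWD" in tweet_text or "International Women" in tweet_text:
        return format_reply(org)
    return None


# The North Tees and Hartlepool example from the text, as a record:
trust = {"median_gap_pct": 23.4, "change_pts": -1.3}
print(maybe_reply("Celebrating the inspirational women at our Trust #IWD", trust))
```

Run on the example above, the sketch reproduces the reply quoted in the text: women’s median hourly pay 23.4% lower, a gap 1.3 percentage points smaller than the previous year.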
The automated protocol went viral. In response, many organisations deleted their #IWD posts, probably out of embarrassment. Many others perhaps saw what was happening and decided not to publicise their sanctimony. According to Francesca Lawson, the co-creator of @PayGapApp: ‘It tapped into a lot of people’s sense of frustrations about how the day is often treated as a marketing opportunity, yet there’s seemingly no accountability for any of the bad stuff’ (Raiken, 2023; see also Herring, 2020). The case highlights the increasing role now played by automation in the negotiation and contestation of corporate reputation.
In what sense do algorithms now participate in the work of persuasion? Miles Coleman’s recent book encourages us to think carefully about this question. The book’s opening chapter conceptualises two important trinities. The first, drawn from computational studies, compares ‘the front end’ and ‘the back end’ to ‘the deep end’. Analyses of ‘the front end’ prioritise the interactions that are permitted by the user interface of digital media: the deck, the dashboard, and the platform. Front end analyses of the @PayGapApp might therefore foreground the identifiable behaviours of Twitter’s users: the engagement numbers, the role of influential accounts, the impact of commentating journalists, and so on. Analyses of the ‘back end’, secondly, would foreground the program interface: the database, the processor, and the network. Here, we might examine the role played by the script which the @PayGapApp account mobilised, as well as the codes that are put into play by #IWD posting accounts.
These two ‘ends’ might be unproblematic for those familiar with the semiotic study of marketing communications (e.g. Barthes, 1977; Williamson, 1978) and the market studies tradition that explores the digital and physical infrastructure of contemporary marketing practice (e.g. MacKenzie et al., 2023; Pridmore and Zwick, 2011). However, Coleman complicates them with his addition of the ‘deep end’. This, he writes, is the ‘realm of computing that deals with the performative expenditure and experience of machinic rhetorical energies (i.e. the catalysing of visceral feelings)’ (2023: 13). In making space for analyses of this deep end, Coleman continues, we consider ‘the energies that animate computational performances manifest not in the front end or the back end alone, but between them, as they are entangled with wider ecologies of discourse’ (2023: 14).
And so, coming to terms with @PayGapApp requires us to look at the role played by users (the front end), by programmers (the back end), and by computational performances themselves (the deep end). Rather than explaining the case away in terms of the ingenuity of the creators or the appetites of the audience, we should also consider what computational entities themselves do. Working along these lines requires us to accept that bots such as @PayGapApp are ‘lively but not alive’ (2023: 3).
Any lingering hesitation to jump in at ‘the deep end’, for Coleman, needs to be understood as a product of our humanistic reluctance to treat computational machines as communicators in their own right. But following the work of Karen Barad (2007), the book instead encourages its readers to consider how the inanimate nevertheless animates: ‘We need to be willing to go deeper than asking how machines may, or may not, mime humans, and take seriously the idea that, despite being non-human and non-animal, they also bring lively movements as machines’ (Coleman, 2023: 13; see also Kennedy, 1992).
The opening chapter’s second trinity then demonstrates how ‘traditional’ (i.e. human based) rhetorical appeals routinely mobilise the seemingly factual, the apparently beautiful and the clearly satirical in their efforts to linguistically move their audiences. Here, Coleman takes inspiration from the work of Ceccarelli (2005, 2013) and asks what happens when such epistemic, aesthetic, and political ‘ends’ come to be pursued not by people but by machines. In what sense can we say that something like the @PayGapApp is itself capable of producing truthful, beautiful, and/or good affects and even outcomes? A traditional humanist view might suggest that rhetorical effectiveness will become somehow minimised, jeopardised, or compromised in the process of its computationally linguistic automation. Machines just can’t do what we can! But the post-humanism of Coleman’s thinking – which is expanded in the book’s second, third and fourth chapters respectively – highlights the extent to which epistemic, aesthetic, and political appeals succeed despite their machinic foundations. It might even be better to say that certain epistemic, aesthetic, and political appeals succeed because they have been automated. We can’t do what machines can!
The book’s first case study is of Vaccine Calculator, a medical expert system which has been ‘used by counterpublics of science to garner legitimacy for pseudoscientific claims’ (2023: 24). Instead of alleging conspiracy or highlighting stupidity, Coleman explains why good-faith users of the Vaccine Calculator might erroneously accept its scientific-sounding statements as scientific facts. Leaning upon aspects of dual-process theory (e.g. Kahneman, 2003; Patterson, 2017; Petty and Cacioppo, 1986), as well as Lynda Walsh’s (2013) analysis of the prophetic sense of authority, Coleman explains that the epistemic protocols of machines offer the possibility of ‘scientific ritual without scientific credentials’ (2023: 30). In other words, they invite ‘the user to participate in the rituals of science, to manipulate outputs and to try different permutations … Being able to run different simulations and to be able to examine the outputs equip the user with just enough information to be able to feel competent to make a conclusion’ (2023: 39).
The book’s first case study demonstrates how we often persuade ourselves to know through machines. Its second case illustrates how machines can strike us with awe. Here, the focus is on the front and back ends of @censusAmericans, a Twitter bot that was programmed to produce life-like statements on an hourly basis, until it exhausted the almost 15.5 million rows in its dataset. Jia Zhang (2015), the bot’s creator, estimated that it would take some 1,760 years for the performance to run its course. And yet, it stopped producing content on the 23rd of June, 2023, bowing out with the following: ‘I have a doctorate. I got married in 1985. I usually work 60 hrs per week. I studied biology. I work in offices of physicians. I am married’.
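Zhang’s estimate is easy to verify as back-of-the-envelope arithmetic: at one tweet per hour, roughly 15.5 million rows would take 15,500,000 ÷ (24 × 365) ≈ 1,769 years, consistent with the quoted figure of some 1,760. The check in Python (using only the figures given in the text):

```python
# Back-of-the-envelope check of Jia Zhang's estimate for @censusAmericans:
# ~15.5 million census rows, one tweet per hour.
rows = 15_500_000
tweets_per_year = 24 * 365          # hourly output, ignoring leap years
years = rows / tweets_per_year
print(f"{years:,.0f} years")        # roughly 1,769, consistent with ~1,760
```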
This closing statement is interesting both on its own and because it forms part of a bigger whole. It is a statement that expresses a sense of the overwhelming complexity of American life – an overwhelming complexity which is nevertheless rendered potentially coherent by the algorithm. So, whereas the Kantian notion of the sublime moves us from the overwhelming character of nature to the beatifying nature of artistic production, the sublime energy of @censusAmericans, on Coleman’s account, is such that we cannot help but appreciate the feats it set out to accomplish. Our aesthetic sensibilities are not so much tarnished as amplified by our awareness of the fact that it isn’t a person but a protocol that brings these biographical snippets into our purview.
The book’s final case study looks at how machinic rhetorical processes have been put towards political ends. It focuses on @DeepDrumpf, an automated political parody of the elocutionary mannerisms of its near namesake. Whereas influential satires of Donald Trump’s carry-on have been crafted by Alec Baldwin, Peter Serafinowicz, and Randy Rainbow, Coleman deliberately foregrounds an influential political machine to highlight the specific role played by linguistic computation in contemporary politics. Much like @PayGapApp, this handle presents us with the prospect of ‘new forms of civic engagement’, and in so doing it opens up ‘potentially fruitful avenues of public expression facilitated by the communication of machines’ (2023: 86–87). Clearly more than a tool, but probably less than an ally, such protocols invite us to think speculatively about the ongoing work of politicisation in an age of linguistic computation.
The book’s penultimate chapter moves from case studies to ethics. Given that rhetorical machines now play epistemic, aesthetic, and political roles, the main moral question becomes how we are to deal with matters of culpability. Who should we blame, for instance, when these machines make harmful errors? Coleman here underlines the importance of what he calls ‘interventionist design’, an ethic in which harm isn’t dismissed as an unfortunate accident but anticipated – and indeed mitigated – as an ever-present failing of machines. This entails a move from thinking about machines through the metaphor of the platform (e.g. Srnicek, 2017) to that of the custodian. It recognises that, ‘to protect the tolerant from the intolerant, intervention is necessary on the part of the platforms and not just the user, despite the strong impulse to imagine that it is not the platform’s responsibility, and that regulation of speech should be outsourced to the user’ (2023: 99).
Following Reyman and Sparby (2019), such an ethic of responsibility on the part of the designers and owners of influential machines requires regulators to intervene, designers to pre-empt, account managers to anticipate, and users to think twice. We need heightened degrees of care and consideration in the front, back and deep ends. In an age of machinic rhetoric, in other words, the responsibility to mitigate harm isn’t isolated but distributed: it falls not to some but to many. In this, Coleman recalls Quintilian’s characterisation of rhetoric ‘as the science of a good person, speaking well’ (2023: 107). The production, management, and reception of influential machines – the whole ‘assemblage of actors that bear on the work of upholding good communication’ – should strive to ensure that ‘rhetoric might also be the science of a good machine, speaking (and moving) well’ (2023: 108). On behalf of the academic reviewing machine, I strongly encourage you to read this book!
