Abstract
Automation is a defining feature of today’s societies—not only since ChatGPT and generative artificial intelligence (AI) produced yet another wave of hype. This essay introduces a special issue on automation and communication in the digital society. The issue studies how subjectivity, agency, and empowerment are defined and reconfigured in novel human–machine encounters and, more broadly, in societies that are in large part kept going and sustained by complex digital infrastructures. It includes contributions from a wide array of disciplines and perspectives and engages with the conditions, contexts, and consequences of automation in very different settings, ranging from journalism to self-service hotels, and from social movements in Hong Kong to the Russian invasion of Ukraine. The articles offer critical perspectives on the transition of human activity into machine operations, and back, as well as on the social dynamics changing and emerging in increasingly digitized and datafied societies.
Introduction
Automation has momentum, and automation has a long-standing history. In any case, automation is a defining feature of today’s societies—not only since ChatGPT and generative AI produced yet another wave of AI hype. With a view to communication, automation converts the production of content, the distribution of information and messages, the curation of media use, and the governance of our networked lives into machine operations. All of these areas are increasingly shaped by algorithmically driven processes and automated agents. They help to automate the selection and filtering of news feeds and search engines, attribute relevance and popularity, and perform content moderation and fact-checking. Automated agents like social bots participate in organizational communication such as customer service and, as a potential force of manipulation, also seem to intervene in election campaigns. The most recent iteration of technologies and products labeled as AI is driven by ambitions to delegate physical motoric functions, cognitive processes, decisions, and evaluations to increasingly autonomous and capable technology. At the same time, we need to acknowledge that automation is not a one-way transfer from humans to machines. Rather, we also witness environments where people come to act in an automatic fashion, where human contributions feed into processes of automation and help to improve the technological systems and optimization processes that we have come to call “machine learning.”
Background: waves of automation
In a basic definition, automation is the “process of designing, implementing, and updating human and machine systems to do work that might otherwise be done by humans; the tools developed and deployed in those processes; and communication about the technology, work, and workers involved” (Barbour et al., 2023: 262). This is an all-round way of thinking about automation that encompasses a number of concerns. When speaking about automation, we can therefore refer to increasingly automatically running technologies; to the processes of design, implementation, and operation through which some function or operation is automated; to occupation and industry trends toward increasing automation; to shifting human–machine relationships; and to automation as a topic in discourse.
Yet even such a broad perspective has trouble capturing what automation is because the notion of automation and our expectations of what automation can do mutate with technological progress. Thus, the first wave of automation was mechanical in character and mainly aimed at substituting motoric functions. It goes back to experiments in Antiquity and the subsequent fascination with automatons and seemingly autonomously running locomotive machines (Jones-Imhotep, 2020). Some of these ambitions can later be found in robotics and mechanical engineering, where they intersect with another wave of automation, this time informational in character. It moved the focus away from automated labor as a 20th-century preoccupation to the automation of cognitive capacities and understanding with the help of computers (Andrejevic, 2020). Currently, we are in yet another wave of automation that hinges on generative AI and machine learning, innovations through which automation is about to become omnipresent (Acemoglu and Restrepo, 2020; Brynjolfsson and McAfee, 2014). In the two previous waves of automation, manual work and mental processes had to be codified and thus broken into parts to be emulated by machine functions, which involved recreating them to some extent (Henderson, 1998). With the new generation of intelligent machines, these human-made representations and abstractions no longer seem needed as a precondition for automation: Where engineers are unable to program a machine to “simulate” a nonroutine task by following a scripted procedure, they may nevertheless be able to program a machine to master the task autonomously by studying successful examples of the task being carried out by others (Autor, 2015: 25).
In this kind of tech race, the combination of deep learning, machine vision, natural language processing, and social robotics promises to do away with the hurdles that have so far stood in the way of full automation.
Such data-intensive automation, further powered by AI, rests on a feedback loop already identified by Zuboff (1989), who explained that automation is not only about delegation and substitution: it also generates information about the work being delegated or substituted and about all operations that can be mechanized or computerized. “Automation produces flows of data about work being done,” as Barbour et al. (2023: 279) posit in the same vein, and this datafication increases the transformative potential of automation. Andrejevic (2020: 9) calls it the “cascading logic of automation,” a logic that unfolds as “automated data collection leads to automated data processing, which, in turn, leads to automated response.”
Automation of communication—communication of automation
One area of intense experimentation and excitement is the automation of communication. In fact, “present day automation is remarkable because it is communication itself that is increasingly being automated,” Barbour et al. (2023: 268) write. More fundamentally, Elena Esposito (2017) observes that the current hype and developments around “artificial intelligence” (AI) upon close inspection are not about the issue “that the machine is able to think but that it is able to communicate” (p. 250).
There are now myriad applications, spanning from the automated generation of messages and AI-mediated communication (Hancock et al., 2020) to automated content moderation and support provision. The field moreover includes a panoply of communicative agents and social bots (Gehl and Bakardjieva, 2016; Hepp, 2020) and spreads to areas like automated journalism (Diakopoulos, 2019; Montaña-Niño and Burgess, 2024) as well as data-driven content personalization and recommendation (Hermann, 2022). To capture this ever-multiplying area, Barbour et al. (2023) distinguish between the automation of communication, communication about automation, and communication with automated agents, three fields that have all received increasing scholarly interest (Bailey and Barley, 2020; Guzman and Lewis, 2020; Hepp, 2020; Kellogg et al., 2020; Seeber et al., 2020). In all of them, there is a dual interest both in technological capacities to build communicative services and tools and in the communicative activities involved or emerging in automation (Sundar, 2020).
Indeed, the automation of communication does not stop with rendering existing genres and modes of speaking and writing into machinic functions—no matter how advanced they are. Our interactions with automated communicators and automatically organized communication are set in a kind of transactional relationship in which increasing contact with automated technologies changes the ways we communicate—not only on the individual level but, perhaps more importantly, also on the societal level (Hepp et al., 2023). In turn, intelligent systems collect and digest all the data from these interactions, which feed back into their continuous development and optimization (Fortunati, 2018). Hence, it seems plausible to follow Pink (2022), who asserts that automatic technologies are essentially “things that are in the world with us, rather than as having been added to a world that was already here” (p. 749).
There is some evidence that humans are inclined to communicate with automated entities the way they do with humans and to attribute human qualities to them (Natale and Henrickson, 2024; Reeves and Nass, 1996), and users are becoming accustomed to the kinds of prompts and formulations that help to communicate successfully with interfaces such as Siri, Alexa, or ChatGPT (Kudina and Coeckelbergh, 2021). More than that, the automation of communication might also change our understanding of what communication is (Bialski et al., 2019). The point has been elaborated by Andrejevic (2020), who asserts that automated media are augmenting and possibly also displacing the position humans traditionally assumed in communication, cultural production, and decision-making. Yet this transformation requires a reconfiguration of the subject too, a reconfiguration that abstracts away emotions and intentions from automated communication and misses out on the reflexive layer of subjectivity. It entails reevaluating our notions of what empathy and interaction should be; the same holds true for notions of creativity and ideation, Wajcman (2017) adds. As a case in point, warding off considerations about machines getting bored or depressed by monotonous communicative labor as silly, fantastic, and idle may indicate how narrowly defined automation is and what (still) escapes its operations, Andrejevic (2020) observes. The language we use to talk about automation—a language “purposefully saturated with anthropomorphism” (Wajcman, 2017: 121)—is of little help here either in thinking through what kinds of human capacities are acquired, simulated, or emulated and thus reshaped by machines. Artificial neural networks do not “learn” as people do, and “cognitive” computing does not think the way we do.
We might be tempted to treat such skepticism as preliminary; mere technical obstacles that will sooner or later be resolved by more ingenious and technologically sophisticated innovations. However, creating machines that are empathic, acquire tacit knowledge, and are able to fully engage in interaction that is spontaneous, indexical, self-reflexive, and open-ended is not only an engineering challenge (Collins, 2018). When seen not from the prospective endpoint of a singularity (Kurzweil, 2006), but from its conceptual and material basis, we are confronted with the twin tendencies of dehumanization and anthropomorphization that are at the heart of what Rhee (2018) has reconstructed as the “robotic imaginary,” which risks being applied to the realm of automated communication once again. By the same token, Andrejevic (2020) saw “a trajectory parallel to the social de-skilling of physical labor taking place in the communicative realm” (p. 5), with added layers of control and surveillance. Interestingly, while the ability to engage in conversation, written and spoken, is treated as the bedrock of a socially intelligible AI, it is usually modeled around situations of deceit (Natale, 2021). From the Turing test onward, communication between humans and machines follows ideas of imitation and deception where humans are left guessing if a machine is convincingly and indistinguishably acting human-like. (However, the European Union’s AI Act now stipulates that users must be made aware that they are conversing with a bot, not a human.)
The question remains what form of communication is taken as the standard against which to judge a machine’s performance, whether such an approach to appreciating the capacities of automated communication is viable, and what its value is in assessing communicative performance. In fact, much human-to-human communication is quite formulaic and template-driven and can therefore be assimilated by computer programs more easily, even without any sophisticated AI. So, the real question may be what is more confusing: treating machines as humans or treating humans as machines. What seems clear is that chatbots or synthetic conversational agents do not need to trick customers into believing they are exchanging messages with a fellow human in order to be functional and fulfill a communicative task. Just think of thanabots and griefbots, whose users are under no illusion about their artificial nature yet still embrace the chance of conversing with a communicative simulation of the departed (Henrickson, 2023).
From a conceptual point of view, what is interesting here is that such considerations attend to the performative side of automated communication and leave ontological questions about what makes humans human and what constitutes machines unanswered. A striking example is the currently defunct Botometer, a classifier that was programmed to identify bots on Twitter through linguistic features, temporal patterns, and networking structure. It yielded a score giving the likelihood of an account being operated by a bot or a human user. A high score indicates a likely bot account, and this measure is largely based on the degree of recognizable automation, that is, “accounts with similar patterns, features and characteristics that were previously detected, or with patterns that diverge from typical use patterns of regular, non-automated accounts” (Martini et al., 2021: 2). The question then is not whether you are a human or a bot, but whether you are performing more human-like or bot-like. This non-essentialist way of detection turns an existential boundary into a spectrum of performance, a fairly imprecise measure, for sure, given its low diagnostic power. In fact, Rauchfleisch and Kaiser (2020) estimate that, depending on data and language, Botometer produces between 41% and 76% false positives and between 71% and 90% false negatives. Needless to say, Botometer is just one among many ways in which automated communication is unpacked with the help of data-driven automation of communication (Lazer et al., 2009).
More generally, the agency of automated communicators becomes of interest, in particular when agency is taken not to be a property but a practical accomplishment (Neff and Nagy, 2016; Pentzold and Bischof, 2023). Giddens’s (1984) definition of agency proves helpful here: agency “refers not to the intentions people have in doing things, but to their capability of doing those things in the first place” (p. 9). Agency, or agentic power, as he puts it, is achieved in action; it is not a residual quality that is owned or not. That way, it makes sense to also speak of a “machine agency,” as proposed by Sundar (2020), who adds another element, that is, intervention into a course of affairs that demands discernment, reflexivity, projection, and the ability to do otherwise. Intervention means to recurrently renegotiate agency between humans and machines, very much in the way Hepp et al. (2023) understand “supra-individual agency” (p. 51). Barbour et al. (2023) conclude that “any intervention that assumes the intentions and actions of agents remain distinct and limited because machine and human agency change together as each negotiates the other” (p. 281).
Contexts, consequences, and critique of automation
Despite the steady innovations that push the waves of automation, the discourse about automation and its ramifications seems caught in the perennial dialectics of promise and crisis (McGuigan, 2019; Vergeer, 2020). Automation is associated with an age-old fascination with machines carrying out tasks to release people from recurrent or tedious duties and free them for more creative, joyful, or otherwise meaningful activities, with the added value of productivity gains and cost reduction. Yet these hopes for delegation have time and again turned into anxieties about substitution and loss of control (Noble, 2011; Rahm and Kaun, 2022). As Sundar (2020) reasoned, we tend to “welcome . . . the convenience of machines, hesitate to cede decision-making control” (p. 76). This is not to say that the basic binary is unfounded, yet it is problematic as it runs the risk of obscuring more nuanced concerns (Demo, 2017; Yu and Couldry, 2022).
One recent area where such hopes and worries prominently come to the fore is automated decision-making (ADM). By definition,
algorithmically controlled, automated decision-making or decision support systems are procedures in which decisions are initially—partially or completely—delegated to another person or corporate entity, who then in turn use automatically executed decision-making models to perform an action. This delegation—not of the decision itself, but of the execution—to a data-driven, algorithmically controlled system is what needs our attention (AlgorithmWatch, 2019: 9).
ADM systems range from rule-based procedures to predictive scoring via decision trees and sorting. For Lomborg et al. (2023), ADM can in principle be modeled as a communicative sequence that moves from moments of encoding data as information, to interpreting data by processing and ordering, to generating output decisions, which in turn are decoded to make sense of them in a given application area. ADM usage is increasing, and each time the adoption of ADM systems is envisioned and piloted in a new domain, such as public service or news making, an ensuing discourse revolves around tropes of efficiency, optimization, speed, and scale, whereas critics persistently highlight issues of privacy, justice, control, surveillance, commercial exploitation, ownership, and the need for regulation, as well as the potential negative ramifications when automation requires standardizing, and eventually shoehorning, a complex world (Amoore, 2020; Dencik et al., 2022; Kaun et al., 2023).
Another common theme is the marginalization of the labor behind automation, labor that is instrumental in keeping alive the pipedream of seamless and effortless procedures. Since its mechanical beginnings, automation has been predicated on obfuscation, yet this does not render human laborers less important, laborers whose work is subject to gendered, classed, and racialized biases and inequities. Quite the contrary: “the more advanced a control system is, so the more crucial may be the contribution of the human operator” (Bainbridge, 1983: 775). Hence, every new wave of automation does not replace work altogether but reshuffles work relations and jobs (Crawford, 2021; Gray and Suri, 2019).
The gig economy and the toil of workers on platforms such as Amazon’s Mechanical Turk epitomize the consequences of an automation that erodes work relations, decomposes work processes, and makes them invisible. On an even larger scale, Stiegler (2016) has warned that in digital capitalism subjects are reduced to the condition of automatons whose behavior is determined by algorithmic mechanisms. In the same vein, Zuboff (2019) has admonished that with the recent wave of automation, which relies on algorithmic sorting, the automation of information flows has morphed into a fully-fledged automating of behavior. The wedding of neoliberalism, rationalization, bureaucracy, and computerization drives tendencies to de-skill employees, disenfranchise users, and remake persons and relationships along its inherent agenda of commensuration, comparison, and scoring. Automation is in trouble, Wajcman (2017) writes, when it privileges the pursuit of profit, not progress, or avoids addressing the latter altogether. Instead of bringing more efficiency, this can result in dysfunctional processes and manipulation, what Gusterson (2019) has dubbed a “roboprocess” (p. 2). Roboprocesses unfold, he posits, when automatically running computerized systems take on a life of their own and common sense and situational logic are displaced by a logic of automation that has difficulty adjusting to non-stereotypical scenarios.
Rather than rehashing the well-rehearsed utopian and dystopian binary of promise and peril, discourses also provide the opportunity to evoke alternatives to existing automation practice and its axioms (Endacott and Leonardi, 2022; Jensen et al., 2022). It is in communication that automation choices are made up front, and it is through communication that the negotiation of automation’s values and norms comes to matter (Barbour et al., 2023; Leonardi, 2012). In pursuing such discourses around automation, we stop treating automation as a technical fait accompli where major decisions have already been locked in and cannot be reversed. Rather, discourses confront us with the reality-making power of sociotechnical imaginaries and their entanglement with material choices, regulation, and commercial interest (Jasanoff, 2015). They are effective, consequential—and can be otherwise (Katzenbach and Bareis, 2022; Pentzold and Knorr, 2023). Consequently, it still makes a difference what kinds of imaginaries congeal into decisions about what is automated through what kind of technology, and who is allowed and responsible for making, monitoring, and possibly redoing the decisions this entails. Such an approach also stresses the efforts of resistance, which can take a number of shapes (Bonini and Trere, 2024; Rafélis de Broves et al., 2024).
Discursive reflection is all the more pertinent given the anticipatory impetus encapsulated in automated technologies, meaning that automation figures as an almost inevitable part of desirable or daunting digital futures. In effect, Pink (2022) prompts us to better appreciate the pathways laid by methods in speculative design, fiction writing, and anthropologies of the future in order to investigate and imagine things that could happen and invoke possible futures or alterities. At best, this could also allow us to democratize the organization of automation. Such an endeavor ties in with calls to “rehumanise automation” (Pink et al., 2022: 3) by first asking what people do with automation, not what it does to them. It means engaging with the situations to which automated technologies should cater and with how they actually come to be employed and appropriated (Burgess et al., 2022). As a case in point, Pink et al. (2022) refer to the “see and forget” scenario of setting the options or accepting the default setup of a device or service, like the cookie notices we encounter while browsing. This is quite an unspectacular act, yet highly effective in backgrounding automated processes that come to mold everyday practice.
Contributions to this special issue
The contributions to this special issue look at these challenges and questions from a diverse range of perspectives and disciplines and work with multiple methods and phenomena. The first set of articles scrutinizes discourses and data as crucial foundations of automation in the digital society. Peter Nagy and Gina Neff contribute to a better understanding of the discourse on automation and algorithms by reminding us how magic is performed. This perspective also twists the debate on the power of the tech industry in original and insightful ways. Using the case of OpenAI and ChatGPT, Nagy and Neff show how the “conjuration of algorithms” allows the tech industry to forge vivid, overly positive, and deterministic narratives of automation, making it challenging for critics to call attention to the very real harms that algorithmic systems pose to users and society. Will Orr and Kate Crawford investigate datasets as another critical foundation of automation. While we know that data is crucial and never neutral, there are few studies of the practices and processes of dataset construction. Based on interviews, Orr and Crawford specifically highlight and detail four key challenges and thus the contingencies of dataset construction, and with that, of automation: balancing the benefits and costs of increasing dataset scale, limited access to resources, a reliance on shortcuts for compiling datasets and evaluating their quality, and ambivalence regarding accountability for a dataset.
A second set of articles looks at alternatives and activism in the context of automation and communication. Annika Richterich and Sally Wyatt study the use of chatbots for feminist concerns. In three case studies, the authors highlight how feminist chatbots succeed in opposing mainstream automation, yet also how platform dependencies and the limits of automating complex intersectional issues constrain their effect. In a case study of the 2019 Hong Kong protest movement against the government’s new extradition laws, Ngai Keung Chan and Chi Kwok ask how activists strategically make collective claims about and through algorithms and how they mobilize algorithmic tactics on social media. Striving for more visibility, activists turned automation and algorithms into a strategic resource that helped to further their cause. The authors’ analysis of social media groups shows how automation and algorithms become deeply embedded within broader processes of political claim-making in the digital society.
A third set of articles surfaces the complexity of automation by looking in detail at practices of automation in context. These articles challenge in insightful ways the narrative of automation as a one-way delegation of tasks to technological systems. In their study of self-service hotels, Christian Greiffenhagen and Xinzhi Xu demonstrate the human labor necessarily involved in making these automated systems work. Customers need to adjust, perform, and cooperate in order to successfully check in, rendering them, as Greiffenhagen and Xu convincingly conclude, “disciplined customers.” Philipp Seuferling offers historical depth to debates on automated interaction and evaluation systems by scrutinizing practices and narratives of automated border governance and migration management. He specifically discusses the use of “proxies” for decision-making, that is, biometric or biographic data collected as seemingly authentic and neutral stand-ins for humans. His material stretches from the late 19th century to today’s “smart border” systems, offering a rich description of the politics of automation in a high-stakes context. Annette Markham engages with and interrogates the auto-complete function of search engines as a less physical but highly pervasive and taken-for-granted form of automation. The article shows how these systems exert significant control over users’ communication and reasoning styles, with users coming to consider even interactional and relational failures as humorous defects or user errors.
A fourth set of articles investigates the role of automation in the context of social media across different dimensions. Marcus Bösch and Tom Divon look at sound as a dimension that is often overlooked in social media research. Investigating TikTok videos in the context of the Russian invasion of Ukraine, the authors show how TikTok’s audio features are used to spread extensive semi-automated propaganda and disinformation. CJ Reynolds and Blake Hallinan present a comprehensive study of user folklore on YouTube’s algorithmic governance regime. Investigating 250 videos of creators discussing platform governance, they identify strategies, concerns, and targets of accountability. This “user-generated accountability,” the authors argue, provides one productive starting point for understanding automation in platform governance in a remarkably opaque context. A different avenue in the quest for more platform transparency is platform archives and advertising explanation functions. Building on their Australian Ad Observatory, Jean Burgess, Nicholas Carah, Daniel Angus, Abdul Obeid, and Mark Andrejevic scrutinize Meta’s “Why Am I Seeing This Ad” (WAIST) feature. The authors particularly highlight that these individual-level explanations do nothing to help us understand the patterns and sequences of targeted advertising across users and time, which would be necessary to assess substantial concerns about political and economic integrity.
The final article focuses on automation in journalism. Joanne Kuai is interested in the relationship between automated news and copyright across different jurisdictions. While identifying significant differences between regulations in the United States, the European Union, and China, the author also concludes that the current regulatory frameworks in all cases are weakening the institution of copyright, which indirectly contributes to the deinstitutionalization of journalism and the institutionalization of algorithms in the organizing of societal communication.
***
With this special issue, we seek to contribute to scholars’ calls to resist the hype around AI and instead critically study the growth and spread of automated communication (Hepp et al., 2023; Pink et al., 2022). The notion of automation allows us to lend historical depth to recent phenomena by connecting them to long-standing narratives and socioeconomic structures. It also foregrounds a perspective on processes that operate both ways, instead of the unidirectional assumption that AI and automation always replace human activity. The articles in this special issue have highlighted, for a wide range of sectors and practices, how the relationship between automation and communication is much more complicated and complex than that—and how the crucial issues are often hidden in the infrastructures, organizations, and routines of everyday life. Since we are still in the process of building and updating our infrastructures of automation, there is a lot at stake right now.
Footnotes
Author’s Note
Christian Pentzold is also affiliated with the Department of Communication and Media Studies, Leipzig University, Germany.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Funded by Deutsche Forschungsgemeinschaft (DFG; German Research Foundation)—Project-ID 416228727—SFB 1410, and Deutsche Forschungsgemeinschaft (DFG; German Research Foundation)—Project-ID 440899634.
