Abstract
This article goes beyond the current discourse on post-truth, fake news, and related developments by tracing the emergence of a broader and more fundamental change—a posthuman turn in politics. The article begins by examining the background of post-truth and the causes of its prominence. Those causes are found to lie primarily in the natural human predilection for pleasure and satisfaction as well as the capacity for affective agglomeration. Such agglomeration is particularly manifest in the digital and, even more specifically, new media environment, where the abundance of information and alignment opportunities has led to incessant competition among media and political actors for audience attention. The main weapon in this competitive struggle is the maximization of consumer satisfaction, that is, pleasure (and not, e.g., veracity). However, the main transformative factor that both enables the maximization of satisfaction and leads to the posthuman turn is the abundance of data and the potential for its algorithmic analysis as well as interpretation of the results of such analysis. In this context, data-rich algorithm-wielding political actors are capable of knowing in advance the utterances and other affective triggers necessary to alter their audiences’ choice environments in such a way that they will have no other option but to choose the preferred outcome, receiving pleasure in return. However, the same political actors are simultaneously reduced to mouthpieces for algorithmically selected utterances instead of themselves being active shapers of political strategy. Consequently, humans must share agency with algorithms and other pieces of code, if not cede it altogether.
Transformations of political discourse and mobilization brought about by social media, big data, post-truth, and other related developments are already high on the agenda of public and academic debate. However, that debate still tends to adopt a human-centric perspective: humans are seen to remain the sole active agents as creators of strategy, users and manipulators of data, audiences that embrace nonfactual narratives, and so on (d’Ancona, 2017; McIntyre, 2018; largely also Sunstein, 2018). This article, meanwhile, postulates the need for a posthuman turn (see, for example, Beer, 2019; Braidotti, 2013; Bucher, 2018; Herbrechter, 2013), thereby fundamentally reconceptualizing political mobilization and association. To that end, data-based creation of meaning and the ensuing affective nudging push politics beyond the exclusive domain of the Aristotelian political animal into a new environment in which such anthropocentric conceptions no longer make sense, forcing us to turn attention to a posthuman world driven by agglomerations of human persons, affects, and digital tools.
The idea of algorithmic politics also enables better contextualization of fake news and post-truth: terms that, although very much in vogue, remain notoriously tricky to pin down. Instead of being aberrations or somehow exceptional, these developments are instead seen in this article as part of a progressive shift toward politics as an algorithmic pleasure-maximizing service (see, for example, Citton, 2017; Hannan, 2018; Margolin & Liao, 2018; Vaidhyanathan, 2018), in which consumer experience and satisfaction, and the personalized maximization of the latter, rather than veracity or some other traditional criterion, are key (see, for example, Baron, 2018; Ecker, 2018; Lewandowsky, Ecker, & Cook, 2017). This shift is further conceptualized in this article as a move toward politics-as-a-service, that is, the provision of politics as an on-the-go pleasure-maximizing tool that can be consumed when and as necessary. Indeed, politics as a service will be revealed below as a key characteristic of algorithmic posthuman politics. While the discarding of truth is by no means unavoidable in algorithmic politics—truthfulness simply becomes irrelevant or, at best, of secondary importance—it is useful for the purpose of this article to focus on fake news and post-truth as developments that push algorithmic politics to its natural limit, thus revealing both the full potential scale and the human underpinnings of the present condition.
The article first discusses the key characteristics as well as psychological underpinnings of post-truth, revealing why this condition is, in effect, natural. Particular attention is paid to affect and pleasure, which are taken to provide an explanation for the appeal of nonfactual messaging and the formation of (particularly online) human agglomerations. Finally, the role of affect and pleasure is greatly enhanced in the context of algorithmic politics: as behavior and affective states (both actual and likely) can be modeled and analyzed, possibilities are opened for imperceptibly nudging audiences toward predetermined conclusions by deliberately guiding their affective states. Hence, the final section of this article explores the effect that algorithmic politics has on human agency and introduces a posthuman framework for explicating today’s political mobilization.
Telling it as they Like it
It is now commonplace to describe today’s politics as “post-truth.” Hence, one frequently encounters claims that the present day is to be characterized by, for example, “the rejection of basic principles of reason and veracity” as well as a progressive irrelevance of “anchoring political utterances in relation to verifiable facts” (Hopkin & Rosamond, 2017, pp. 1-2). This change correlates with the proliferation of “a qualitatively new dishonesty on the part of politicians” who, instead of being merely economical with the truth, “appear to make up facts to suit their narratives” (Mair, 2017, p. 3), leading to the formation of a political culture “in which political debate characteristically assumes the form of appeals to personal feelings detached from policy details, and of frequent repetition of bold assertions to which factual counter-evidence is disregarded” (Horsthemke, 2017, p. 275). In such an environment, truth seems to have become a matter of personal preference, thereby allowing fake news to proliferate.
In the above context, what matters in political communication and, more fundamentally, creation of political doctrine is whether the audience would like the pronounced things to be true (Lockie, 2016). It is already well known that “opinion-congruent information is rapidly and involuntarily associated with truthfulness” and vice versa (Gilead, Sela, & Maril, 2018, p. 7). Hence, misperceptions, particularly if they fit preexisting views, are often more comfortable to embrace (Strong, 2017, p. 140) and, from a communicator’s perspective, easier to sell. After all, it is reasonable to claim that “[p]opularity now competes with logic and evidence as an arbiter of truth”—that, of course, is inherent in the social aspect of social media (Hannan, 2018, pp. 220, 224). Indeed, social media in particular empowers individuals to “choose their own reality, where facts and objective evidence are trumped by existing beliefs and prejudices,” and the satisfaction granted by a particular truth-claim can easily become a buck-stopping argument in and of itself (Lewandowsky et al., 2017, p. 361). Moreover, it seems only natural that nonfactual information is more effective in reaching and convincing target audiences because it is specifically designed to satisfy our desires and needs and in doing so is unconstrained by facts and objective circumstances (Ball, 2017, p. 242). And to complement such predilections, online (and particularly social) media enables an abundance of news and fake news content that allows elements of the information environment to be picked and mixed—seemingly as one pleases but more likely as one is deemed by an algorithm most likely to be pleased.
If, then, we are truly entering a period of post-truth—one in which the distinction between veracity and falsehood has become irrelevant—one must simply admit that truth has become “simply a matter of assertion” (Suiter, 2016, p. 27). That assertion, of course, has to be effective in terms of successfully amassing public support—it is only then that a post-truth-claim starts getting traction and becoming true through its own effects (as people begin believing in and acting on it, the results become detached from the veracity of the claim); as a result, the effectiveness of assertion becomes the key criterion for truthfulness. And that effectiveness, in turn, is a matter of establishing meaning that subsequently becomes the default point of reference for the target audience, whatever the actual basis of the claim is: as Baron (2018) asserts, “what matters [. . .] is not evidence (i.e. facts) but meaning” (p. 73). Moreover, once hooked on a particular mode of assertion, we seem to desire more of the same as an “emotional and affective circuit of relationality between human and information in computer-mediated environments” is formed, in which the experience of “[e]xpression-response-satisfaction alternates with expression-response-dissatisfaction, to ensure a resurgent desire for more” (Boler & Davis, 2018, p. 83). In other words, once we have experienced how good a particular post-truth-claim feels, that is, once it has effectively (and affectively) imprinted itself on us, we will come back to the same source looking for more, expecting to be satisfied at least as much as last time; if not this time, then perhaps the next. Attention to post-truth and nonfactual information can, thus, become habit-forming.
Why we Need Pleasurable Affects
The advent of social media has changed communication and interaction patterns: whereas previously we were confined to our physical location, now we can be immersed in a number of digital worlds and interact with a number of individuals regardless of distance or physical location (Couldry & Hepp, 2017, p. 90). This has, in turn, opened the floodgates for affects, sentiments, and emotions (or “affect-laden practices”) to flow relatively unconstrained, forming discursive and emotive patterns not confined by the divisions of the physical world but characterized more appropriately as “displays and flows of varied intentionality and intensity” (Giaxoglou & Döveling, 2018, p. 2). As a result, new pressures have arisen as the self has become expected “to be available for interaction on digital media platforms” and simultaneously pressured to “represent itself on these platforms” (Couldry & Hepp, 2017, p. 145). Second, the self is increasingly functioning as an interactive digital effigy that can be shaped and molded as necessary to maximize interactive (i.e., affective) capacity. And that equally applies to lay citizens and to politicians, further rendering the veracity of uttered truth-claims subservient to the maximization of affective capacity and thus creating perfect conditions for post-truth. Third, this onslaught of affects and incessant competition for attention by affective capacity-maximizing agents has meant that only the catchiest of messages are capable of standing out and attracting attention (for a discussion, see, for example, Citton, 2017; Vaidhyanathan, 2018), once again conferring a competitive advantage on post-truth utterances that are unconstrained by any objective conditions.
As understood for the purposes of this article, affect refers to “the capacities to act and be acted upon,” found in intensities and resonances “that pass body to body,” making it an inherently relational concept (Seigworth & Gregg, 2010, p. 1; see also Boler & Davis, 2018, p. 82). When analyzing affect, we have to engage with “feelings travelling alongside the usually more salient images of our minds,” such as consciously cognized thoughts and arguments (Damasio, 2018, p. 99). Affect, thus, underscores the relationality between bodies as well as between bodies and artifacts (physical as well as virtual), and through that relationality, brings humans and/or objects together into “affective assemblages” (Mulcahy, 2012). This relationality implies that human subjectivity itself should be understood as “a relentlessly constructed narrative” that “arises from the circumstances of organisms [. . .] as they interact with the world around, and the world of their past memories, and the world of their interior” (Damasio, 2018, p. 159). Hence, affective engagement is about what we encounter, how that fits within our experience, how that makes the body feel. Perhaps most importantly, those encounters do not have to be physical: they can equally be mental and digital, for example on social media, messaging services, or, increasingly, virtual reality. As something that, by definition, refers to an impact of one thing on another, affect is always true, not fake, even if its premises can be either.
Affective linkages are key to the human mode of existing within, as well as constitutive of, social formations: individuals become entangled within a “web” or “field” of affect (von Scheve, 2018, p. 54). As Papacharissi (2015, p. 22) notes, perhaps the most easily conceivable example of affect is found in our reaction to music, as in unconscious humming or tapping of the foot—we do react, and do so viscerally, but that reaction precedes not only cognition but also feeling (this understanding of affect can equally be extrapolated to engagement with any other stimuli, including news content). However, that same example also points to affect being nonuniform, as people are affected by the same music in different ways (or not affected at all) depending on their backgrounds and, therefore, tastes; hence, while affect in a general sense is universal, the reactions to stimuli are not. Instead, our present affective flows and the capacity to affect and be affected in particular situations depend on the history of our previous immersions in affective flows, thus rendering the ability to predict and cause impactful affects a highly sought-after asset.
Unsurprisingly, we tend to prioritize pleasurable affects and make every effort to maximize them, particularly as we “learn, by experimentation or by instruction, that some ways afford greater pleasure,” thus teaching ourselves to act in the new, pleasure-maximizing ways (Matthen, 2017, pp. 24, 10). That maximization of pleasure can also be seen as a constitutive part of the development of consumer culture during the second part of the 20th century, characterized by “emotional enrichment from commercial goods and experiences”—a utopia of easily accessible pleasure (Horowitz, 2012, p. 2). And once one learns to maximize pleasure (and once manufacturers or retailers or suppliers learn to enhance consumer satisfaction) with regard to goods and services, it would be strange if the same attitude were not applied to media and politics. Crucially, as the same hedonic systems of the human brain appear to be involved when indulging in all things from drugs and sex to art to social media (Nadal & Skov, 2018), such transference is merely part of the pleasure continuum.
Notably, pleasure-seeking should not be seen as a manifestation of some kind of decadent hedonism. Instead, there exists a normative basis for it. One must come to acknowledge that feelings and emotions are not independent fabrications of our nervous systems but, instead, “the result of a cooperative partnership of body and brain” (Damasio, 2018, p. 12). Here one must turn to the concept of homeostasis, referring to the capacity to ensure that “life is regulated within a range that is not just compatible with survival but also conducive to flourishing, to a projection of life into the future of an organism or a species” (Damasio, 2018, p. 25). Feelings play a crucial role here as they not only reflect but also affect the homeostatic condition; that is, positive emotions, such as pleasure, are capable of improving (and negative emotions of worsening) the homeostatic condition of an individual’s existence (Damasio, 2018, p. 108). Essentially, the aim is always to persevere in existence by engaging with affects that improve the physical configuration of the body. Damasio (2018, pp. 108-109) likewise turns to music as an explanatory aid: the delight of music transforms the organism not only in an emotional but also in a physical way—release of pleasure hormones; changes in muscle, blood vessel, metabolic, respiratory, and cardiac parameters; a boost to the immune system; and so on—all leading to a more comfortable and sustainable functioning of the body. Hence, there is a physical need for pleasure and satisfaction. Likewise, ensuring an at least perceived mental quality of life simultaneously improves the bodily one, presumably even if that involves embracing post-truth (hence, in an important way, post-truth is good for you).
Of course, the maximization of pleasure through embracing post-truth takes place on the premise that it is better to obtain an immediate reward for following post-truth than to wait for the benefits of truthfulness to materialize (even if the latter, due to the accuracy of their premises, are likely to be greater and more durable). This behavior is a clear manifestation of delay discounting, whereby “the individual shows preference for smaller and sooner rewards in lieu of the prospect of obtaining larger but later rewards” (Negash, Sheppard, Lambert, & Fincham, 2016, p. 690). At first sight, such behavior might well seem irrational. However, satisfaction and pleasurable rewards can become addictive (Negash et al., 2016, p. 697): if rewards are plentiful and easily available on demand, achievement of satisfaction must also become constant. In other words, why wait for something big when bits and pieces of enjoyment are permanently available, especially if everybody else is enjoying themselves nonstop. Hence, one must look at the supply side as well: the more plentiful satisfaction (including the ever-present ability to opt for opinion-congruent information) is, the more of a must it becomes.
Capitalizing on Pleasurable Experiences
It is becoming ever more evident that we are passing from the Information Age to the Experience Age (Riccio, 2017; Wadhera, 2016), whereby “the density of the web environment in the contemporary media landscape results in an intense and incessant competition for attention” (Dahlgren & Alvares, 2013, p. 54). Since today’s media environment is characterized by abundance, interactivity, and mobility (Mazzoleni, 2017, pp. 140-141) as well as “high velocity and dizzying excess” (Dahlgren, 2018, p. 26), no time is left for verifying and thinking. Due to the onslaught of information and plentiful opportunities for gratification offered by competing media content, audiences have become impatient, especially when faced with “suspiciously long-reasoned arguments for truth-claims”; instead, a much more effective option is “an instant gratification of feeling as knowing” (Harsin, 2017, p. 517). As we need to “drastically select from the environment with which we must interact in order to make it more manageable” (Couldry & Hepp, 2017, p. 113), emotional connection with content steps in as a convenient mental shortcut. As human attention is a limited and increasingly scarce resource, it is natural that the current environment can be categorized as an “attention economy,” in which the consumer’s time and focus are the primary locus of competition, with the aim of turning that attention into revenue (see, for example, Williams, 2018). That leaves news and politics content as only one of many competing options, attention to which has to be won and then retained against competition from every direction (Stroud, 2017, pp. 479-480).
In the above context, experience comes to the fore as consumers (including of politics) “want to be immersed in the story” and “live” it, being entertained in the same vein (Newman, 2016). It is a key characteristic of the provision of any good or service (including of information) in the Experience Age that it is “designed to engage customers, enabling connection with the service in a personal, memorable way” (Klapztein & Cipolla, 2016, p. 568), and necessarily making consumers feel “uniquely understood and important” (Wladawsky-Berger, 2018). Other authors stress “ludification,” or the rapid growth in importance of “playfulness as an attitude” as well as of “playful designs in our everyday reality” (Mäyrä, 2017, p. 47) that, due to the spread of mobile devices, permeates every aspect of life and the environment, “embedding playfulness in the interstices of everyday life wherever we happen to be” (Hjorth, 2018, p. 8). Approaching the environment in a game-like manner allows for immersion and losing yourself. If that is a recurrent audience expectation—and, indeed, a conditio sine qua non of effective assertion of a truth-claim—then it is natural that actors, both business and political, aim to capitalize on providing as pleasurable information consumption experiences as possible while consumers, being spoilt for choice, opt for alternatives that are high on positive affective impact. Non-pleasure-maximizing criteria, such as truthfulness, recede to a secondary position.
Indeed, in most sectors, consumer experience is now the key driver and generator of value (Wladawsky-Berger, 2018) and must manifest itself not only with regard to the product and its consumption but also during acquisition and postconsumption: all three stages must be as easy, comfortable, and pleasurable and seem as seamless and tailored as possible—an expectation familiar to almost any consumer of today (Williams, 2018). After all, at least in the more developed societies, we can get the product (a physical, information, political, or any other good) anyway—so we buy experience. And since pleasure can be learned—particularly when it comes to acquiring new ways of doing something in a pleasure-maximizing way (see, for example, Matthen, 2017, p. 10)—consumers can be expected to quickly and effectively determine and follow the ways that give them the greatest pleasure on offer, including siding with post-truth-claims. Consequently, aiming to create the most pleasurable experience possible, the providers (of goods, services, information, ideas, or political programs) have no option but to strive to ensure that each and every consumer of their offering receives the highest degree of personal(ized) satisfaction possible (see, for example, Williams, 2018). That constant striving on the supply side, in turn, leads to even further growth of consumer expectation of satisfaction and immediate gratification with whatever they are engaging in; that, in turn, conditions the suppliers to go to even greater lengths in satisfying their consumers—a trend likely to be strengthened even further through the use of machine learning to “continually refine and personalise customer experiences” (Colvin & Kingston, 2017).
With this in mind, one could label the present political offering as politics-as-a-service (hereafter abbreviated PolaaS to differentiate from platform-as-a-service, or PaaS) with regard to it being a tool for consumers to assert the self and, perhaps, even attain a coveted badge (such as public affirmation and acceptance of belonging to a tribe or a community of those “in the know”), provided by political actors on an on-demand basis.
PolaaS is obviously conceived to relate to one of the features of digital technology—the provision of software-as-a-service (SaaS). The latter typically refers to a model whereby instead of selling packages of software for a set price, a vendor allows the online use of centrally located software on a pay-as-you-go basis (see, for example, DeMuro, 2018; Krancher, Luther, & Jost, 2018). In a similar manner, instead of stable political identities and affiliations (ideological and/or institutional), PolaaS proposes no-strings-attached immediate gratification services based on big data-sourced audience preferences. Moreover, PolaaS providers typically offer a customized narrative frontend that takes away the necessity of making active choices from the human actor—instead, the building blocks have always already been algorithmically sourced and arranged, thereby substantiating the posthumanist thesis that follows. SaaS itself builds on smaller sets of services, PaaS and infrastructure-as-a-service (IaaS) (see, for example, Amazon, n.d.; Microsoft, n.d.; Tolosana-Calasanz, Bañares, & Comom, 2018). Drawing the parallel between SaaS and PolaaS further, IaaS, or the hardware (such as servers and storage), can be likened to the basis of PolaaS, that is, the emotional and the visceral components on which the ensuing service is built and run; PaaS, or the framework software used to manage the underlying hardware, can be likened to the addition of key issues and ideas onto the underlying basis; finally, SaaS, or the service complete with end-user applications, relates to the narrativized and audience-tailored product of PolaaS.
As SaaS providers typically offer the full package of “hardware, middleware, application software, and security,” there are obvious cost, speed, and scalability benefits for the clients (Oracle, n.d.). SaaS providers themselves are keen to associate their service with disruption of traditional businesses and their models by emergent cloud-based startups, the capacity of small and agile businesses to be first to the market with new product offerings, the necessity for powerful analytics, and the need to deal with ever-present and easily expressible consumer discontent (Oracle, n.d.). Likewise, a similar story can be told about PolaaS, which allows political outsiders lacking institutional and media support to mobilize a significant following nevertheless, unseating the traditional behemoths with new, innovative, pleasure-oriented services that can be easily scaled with the addition of extra technological infrastructure, and an immediately appealing analytics-based (and, therefore, consumer-friendly) offering.
With PolaaS, the focus is on competitive provision of ways to materialize and experience one’s own enjoyment. In other words, politics turns into competition over who is to provide an experience- and enjoyment-maximizing service to the citizen-consumer based on the analysis of available data. Likewise, referring to politics as a struggle to decontest essentially contested concepts (see Freeden, 2013), PolaaS would refer to a shift away from finality, decisiveness, and monopolization of meanings (Freeden, 2013, p. 73) to monopolization of enjoyment in which finality is replaced with service providers following the trail of audience wishes and expectations, that is, nonfinality par excellence. That in itself might be seen as a self-reinforcing strategy whereby, faced with a kaleidoscope of meanings resulting from such nonfinality, consumers of political product offerings simply opt for meanings that make emotional and visceral sense (for lack of a better option), thus creating even riper conditions for PolaaS providers.
Seeing politics as competitive provision of services also helps to better understand why both political actors and their audiences simply select arguments, messages, and ideas that work best to maximize pleasure. Notably, personal emotion, sentiment, and affect, originating in and transmitted through the personal use of media, “leads the way into locating one’s own place in a converged sphere of activity where socio-cultural, economic, and political tendencies and tensions are collapsed,” enabling connection and attachment of people that would otherwise be strangers but happen to populate the same affect-worlds (Papacharissi, 2015, pp. 116-117; see also Bennett & Segerberg, 2012, p. 743). Therefore, instead of political concern driving the person to action, the personal springs the political into being. The preceding thereby implies that personal affects also become the truth criterion when deciding on claims, political programs, and (different versions of) news.
Ultimately, the preceding has to be conceived in light of the posthuman turn in conceptualizing the role and position of our species. Traditionally, human exceptionalism has been the norm, implying that humans possess “unique and exceptional qualities” and, therefore, “deserve a standard of care that exceeds that of other beings,” legitimizing the instrumental use of the living and nonliving environment in pursuit of ever-increasing human well-being (Srinivasan & Kasturirangan, 2016, p. 127). Typically, this was done through emphasizing the human person as the only being capable of probing and cognizing the world and, therefore, the only true subject of ethical obligations (Ferrante & Sartori, 2016, p. 177). In contrast to such anthropocentric views, posthumanism challenges any dualisms between humans and their environment by unseating human beings from their privileged positions (Margulies & Bersaglio, 2018, p. 103; see also Braidotti, 2013, p. 67). Increasingly, that is being done through stressing the inability to draw clear and sharp distinctions between humans and their digital environment.
In today’s world, human and nonhuman actors (primarily computer code and hardware) agglomerate into assemblages, collectively shaping the visible and the sayable (Beer, 2018, p. 466). In the same vein, courtesy of such entanglements, the constitution and maintenance of identities become impossible without their digital counterparts, thereby rendering human self-sufficiency impossible (Pötzsch, 2018, p. 3316). In this way, technological posthumanism breaks with the previous tradition of seeing technological artifacts as “a mere extension of human creativity and invention” as well as “tools that help humans express intention”—that has changed with the advent of automated technology and machine learning algorithms that operate largely independently to structure our environments (Zatarain, 2017, p. 91). In this way, human claims to (self-)mastery are significantly undermined: it is no longer true that only humans act and the world reacts—instead, the focus now has to be on co-action (Choat, 2018, p. 1030). Hence, “human” is reformulated into a question, the answer to which is becoming progressively less clear (Herbrechter, 2013, p. 38), thereby destabilizing commonplace thinking about human politics, which can no longer be conceived in terms of human ideas and actions alone (Braidotti, 2013, p. 2). The latter problem is the key issue in the sections that follow.
Algorithmic Aggregation of Publics
As already indicated earlier, online circulations of affects and emotions progressively serve as the substance that creates and maintains groups and their identities, however fleeting and ephemeral such agglomerations of online personae may be, simultaneously providing “artificial meeting points” by turning emotion into “a relational resource used for alignment”, particularly since in the digital realm, more than anywhere else, “participation and orientation are guided by emotional interaction chains” that arise from the sharing-based nature of social platforms (Döveling, Harju, & Sommer, 2018, p. 4). In this manner, “affective gestures” bring networked publics together through, among other things, construction of “electronic elsewheres,” that is, “social spaces sustained through digitally enabled affective structures” (Papacharissi, 2015, p. 24). Such affective publics can be defined as “public formations that are mobilized and connected or disconnected through expressions of sentiment” (Papacharissi, 2015, p. 125). However, such affective relational practices, despite their seemingly free-floating nature, are not entirely spontaneous or egalitarian. Instead, “digital affect cultures are inherently normative and infused with relations of power”: consequently, “some emotional scenarios are normalized at the expense of others” (Döveling et al., 2018, p. 2). Moreover, in a (political) economy based on attracting and retaining audience attention, the media’s capacity for “sustaining and transmitting affect that might lead to the cultivation of subsequent feelings, emotions, thoughts, attitudes, and behaviours” (Papacharissi, 2015, p. 23) is a crucial strategic resource in political struggle.
The precognitive manner in which the above agglomeration takes place implies that affective publics are united by having learned to savor the same kind of pleasure. The pleasure itself is a rather peculiar one and must be understood in terms of having one’s opinions, preconceptions, stereotypes, and so on confirmed through overlapping affective flows, even if the image of the world thereby affirmed is not necessarily a conventionally pleasurable one. The ensuing deliberate stimulation by providers of PolaaS, coupled with the speed at which new information is encountered and the necessity to react (and interact) quickly, naturally leads to delay discounting and a striving for immediate gratification, which can easily become addictive (Negash et al., 2016, p. 697). This only serves to further reinforce our information acquisition habits, tilting them ever more toward the here and now and making irrelevant the consideration of whether that which maximizes our present satisfaction is truthful or not. This drive toward instant gratification and agglomeration through shared affects (i.e., shared gratification) also means that competition over audience attention effectively turns into competition over the maximization of pleasure and consumer satisfaction: in other words, politics-as-a-service becomes a satisfaction-maximizing service. And the algorithmic remolding of politics at the heart of such maximization leads, in turn, to a reconceptualization of politics as posthuman.
A key premise for thinking about today’s digital politics as algorithmic and, therefore, posthuman lies in “the lowering costs and easy availability and processing of vast amounts of trace data,” leading to a society “dominated by the logic of quantification,” in which “most objects [are] being tagged, locations identified, people’s attributes marked, behaviour traced, and interactions mapped,” leading in turn to the availability of a digital representation of “all aspects of working and living” (Faraj, Pachidi, & Sayegh, 2018, p. 64). Indeed, the more conveniences are introduced, from new devices and services to personal assistants, and the more seamlessly—indeed, intuitively—they seem to work, the higher the price we pay in our data (Tiku, 2018).
The present world is, crucially, one in which “merely existing [. . .] means you are structured into the technologies and systems that structure most of social life today” (Caplan & boyd, 2018, p. 4). As data are gleaned from a rich substratum of online traces, from social media use to purchases to geolocation data to interactions with the Internet-of-Things, they promise to reveal the concrete essence of life in granular detail, providing “unmediated access to the real, multiple and complex world” (Chandler, 2015, p. 847). Potential is opened up for the prediction not only of people’s actions but also of their likes and thoughts (Newell & Marabelli, 2015, p. 4), leading to the “digitization and automation of public- and private-sector decision-making” (Kemper & Kolkman, 2018, p. 1). Even though such grandiose promises are to be taken with a pinch of salt, it is, nevertheless, clear that big data have an acute capacity for tapping into the minds and affects of target audiences in real time.
The above datafication relies on computer code, usually referred to as an algorithm, in both its acquisition and interpretation stages as well as in the employment of the results of such datafied processes. The term “algorithm” is increasingly acquiring a very broad meaning as “evocative shorthand for power and potential of calculative systems that can think more quickly, more comprehensively and more accurately than humans” (Beer, 2018, p. 11). Here, however, algorithm is to be understood in a slightly narrower sense, as “a computational formula that autonomously makes decisions based on statistical models or decision rules without explicit human intervention,” thereby reflecting “the recent advancement of autonomous decision-making capabilities of algorithms from artificial intelligence and machine learning” (Lee, 2018, p. 3). In this vein, then, algorithmic analysis and selection should be conceived as the “automated assignment of relevance to certain selected pieces of information” (Just & Latzer, 2017, p. 239). Because of their ranking, sorting, and value-assigning function, algorithms “can be seen as governance mechanisms, as instruments increasingly used to exert power” (Just & Latzer, 2017, p. 245), shaping the inputs of decision-making processes.
Moreover, the potential for algorithmic governance does not end with the inputs: as a corollary to their data collection and analysis functions, algorithms can also be used to influence the decision-making process itself, enabling algorithm-wielding actors to “model, anticipate, and pre-emptively affect and govern possible behaviours” (Williamson, 2017, p. 271). Hence, algorithms may also wield an even more subtle power that further reduces human agency: managing not consciously perceived information but sentiments and affects, particularly pleasure. After all, even though facts may possess an objective component of their own, subjective reactions to them, that is, “opinions, sentiments, appraisals, attitudes, and emotions,” are no less, if not more, important (Serrano-Guerrero, Olivas, Romero, & Herrera-Viedma, 2015, p. 18). Such analysis, by enabling the understanding of the “preferential signalling” of users, has become “a vital component of the ‘Like economy’ that underpins the business models of all major social media companies” (Puschmann & Powell, 2018, p. 5). Likewise, it has become essential in a variety of applications, from “enhancing sales and improving a company’s marketing strategies (by tracking customer reviews and survey responses)” to “identifying ideological shifts and analysing trends in political strategy planning” (Giatsoglou et al., 2017, p. 214). The latter application is of particular relevance for the purpose of this article.
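The kind of sentiment analysis and relevance assignment described above can be illustrated with a deliberately minimal sketch. A hypothetical valence lexicon assigns each message an affective score, and messages are then ranked by affective intensity rather than by veracity. Every name, word weight, and message below is invented for illustration; production systems rely on trained models over large-scale behavioural data rather than a hand-written lexicon.

```python
# Toy lexicon-based sentiment scorer: a minimal sketch of "opinion mining."
# The lexicon, weights, and messages are all hypothetical.

AFFECT_LEXICON = {  # invented word -> valence weights
    "outrage": -2.0, "betrayal": -2.0, "hope": 1.5,
    "win": 1.0, "disaster": -1.5, "proud": 1.5,
}

def affect_score(message):
    """Sum the valence of lexicon words occurring in the message."""
    return sum(AFFECT_LEXICON.get(w, 0.0) for w in message.lower().split())

def rank_by_engagement(messages):
    """Rank messages by absolute affective intensity: strong feeling in
    either direction is treated as more engaging than neutrality."""
    return sorted(messages, key=lambda m: abs(affect_score(m)), reverse=True)

messages = [
    "committee publishes routine budget report",
    "outrage as betrayal of voters sparks disaster",
    "proud moment brings hope after big win",
]
print(rank_by_engagement(messages)[0])
# -> "outrage as betrayal of voters sparks disaster"
```

The design choice worth noticing is the ranking key: it rewards the absolute strength of affect, so strongly negative content can outrank mildly positive or neutral content, mirroring the attention-economy logic discussed above.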
The algorithmic re-rendering of politics has also had a deep impact on the creation and mobilization of human groupings. While the default understanding of human groupings and communities has been one in which some substantive characteristic brings people together, be it shared physical presence, shared ideologies or interests, or, at the very least, common partaking in affective flows, in the data-centric environment “[p]ublics emerge when technologies create associations by aggregating people” (Ananny, 2016, p. 100), particularly through revealing a potential for shared pleasure in given utterances. In other words, diverse, disparate, and dispersed agglomerations are first algorithmically probed for target characteristics and/or potentialities that are otherwise not clear or similar enough to consider the relevant individuals as members of the same public, and only subsequently are the individuals in question turned from nonpublics into new publics. The latter is achieved through algorithmically coordinated nudges as well as affective and informational signaling by way of truth-claims deemed capable of agglomerating people through shared partaking in pleasure. Effectively, users tend to be sorted and herded into predetermined information enclaves rather than choosing to immerse themselves in them (see, for example, Webster, 2017, p. 356). Hence, instead of intentionally putting forward an object of concern, one gives way to personalized induction of satisfaction through algorithmically determined (probably not-(yet)-conscious) triggers. Notably, it would be incorrect to assume the initial nonpublics to be not-yet-publics, because the latter would imply that they are somehow bound, sooner or later, to agglomerate.
Instead, the new publics would likely never have come into being if not for intentional algorithmic identification, nudging, and herding, with the meanings and affects used for such agglomeration processes being derived not from some form of manifestly shared leanings but from big data analysis of underlying trends and affective potentialities.
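The aggregation of nonpublics into publics described above can be caricatured in a few lines: individuals are represented as vectors of latent affective responses, and a crude nearest-centroid pass groups otherwise unconnected users around algorithmically probed trigger points. All users, vectors, and centroids below are hypothetical; real systems would cluster large-scale behavioural trace data with far richer methods.

```python
# Toy "aggregation of publics": group users by proximity of their latent
# affective-response vectors to candidate trigger points (centroids).
from math import dist

profiles = {  # hypothetical user -> latent affective-response vector
    "u1": (0.9, 0.1), "u2": (0.8, 0.2), "u3": (0.1, 0.9), "u4": (0.2, 0.8),
}
centroids = {  # hypothetical algorithmically probed trigger points
    "public_A": (1.0, 0.0), "public_B": (0.0, 1.0),
}

def assign_publics(profiles, centroids):
    """Assign each user to the nearest centroid, turning a dispersed
    nonpublic into addressable publics."""
    publics = {name: [] for name in centroids}
    for user, vec in profiles.items():
        nearest = min(centroids, key=lambda c: dist(vec, centroids[c]))
        publics[nearest].append(user)
    return publics

print(assign_publics(profiles, centroids))
# -> {'public_A': ['u1', 'u2'], 'public_B': ['u3', 'u4']}
```

The point of the sketch is that u1 through u4 share no manifest tie to one another; only the algorithmic comparison of their latent vectors constitutes them as members of the same public.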
The above developments clearly signal a shift away from human-centered politics toward an algorithmic process, which is, as argued in the section that follows, posthuman. That signals a move beyond political actors merely testing different strategies and narratives to find out what is going to work or not, canvassing target audiences, and engaging with them on the most appropriate channels, including online platforms—something that has definitely been done before (even the term “post-truth” had already been applied to, for example, the 2012 Obama-Romney Presidential race—see Parmar, 2012). The focus is, instead, on political actors’ use of both platform affordances and independently acquired data to supply an all-in-one enjoyment-maximizing service that attracts and locks attention by focusing not on interests and identities as standalone factors but on the flows of affect. That, of course, has already been implied in the definition of PolaaS. This service, though, is reliant on algorithmic management in three major ways that reduce human agency, leaving humans as merely one of the elements within the data ecosystem (see, for example, Coleman, 2018): first, algorithms determine what data are collected and how, thus shaping the input stage; second, they determine how the data are arranged and analyzed, thus shaping the output; third, they determine who is to see the output and how, as in content management systems. This role, which bears more than a passing resemblance to Lessig’s (1999, 2006) canonical dictum that “code is law,” makes it possible to create the conditions for individuals to be nudged toward predefined (political) choices in truly posthuman ways. Indeed, as the digital components of today’s agglomerations “make the world appear in certain ways rather than others,” political realities “are never given but brought into being and actualized in and through algorithmic systems” (Bucher, 2018, p. 3), that is, without humans at the center.
It is also that political reality of algorithmically structured pleasure that PolaaS providers offer to their consumers.
The Posthuman Pleasure of Nudge
As the process of algorithmic public formation indicates, the current data-centric environment manifests “a shift from a disciplinary logic that aims to structure individuals around norms, to a control-based logic that opens and closes possibilities” (Carah, 2017, p. 396). People no longer have to be made to act and associate in particular ways, either through explicit external pressure or, more likely, through internalized “normal” modes and routines of action. In the digital environment, which is constantly probed and simultaneously remolded algorithmically, a person can be left free to choose whatever they want or desire. The caveat is that the choice environment and the affective spectrum are recursively recalibrated to nudge and herd the newly formed publics toward a predefined result by putting up signposts for pleasure and, if necessary, inducing affective triggers, so that the preferred options become impossible to miss or ignore.
The ability to alter the choice environment should not come as a surprise: being malleable by definition, the online environment presupposes “a shifting ground upon which users must constantly renegotiate their positionalities, rights and abilities to interact with one another” (Iliadis, 2018, pp. 219-220). However, there is, again, a clear caveat: even when users do indeed negotiate their digital existence, supposedly manifesting agency of their own, they do not do so under conditions of their own choosing but merely in reaction to the strategically predesigned digital environmental changes that they are faced with. Here, once again, the “nimble, unobtrusive and highly potent” (Yeung, 2017, p. 122) nature of data-centric algorithmic governance manifests itself: no coercion, regulation, or even persuasion has to be used, as governance of affective flows suffices; after all, “much individual decision-making occurs subconsciously, passively, and unreflectively rather than through active, conscious deliberation” (Yeung, 2017, p. 120). As a result, once the communicating actor is in control of the psychological data necessary to “predict, target, and change [. . .] emotions and behaviours,” triggering particular affective reactions and the ensuing behavioral change becomes an easy way forward (Williamson, 2017, p. 271), altering the digital landscape in a way that leads users to seemingly freely choose the needed outcome.
For algorithmic affect management to operate, a “tripartite regulatory cycle” must be permanently repeated. This cycle consists of, first, refining the target’s choice environment so that, in light of the available data, the desired choice becomes likely; second, data feedback that evaluates the effect of the previous conditioning and, if necessary, determines the characteristics of further conditioning in light of the expanded data; and third, additional modification of the choice environment (Yeung, 2017, p. 122). Such nudging, driven by big data, is also referred to as “hypernudging,” that is, “a design-based instrument of control,” whereby “algorithmic analysis of data patterns dynamically configures the targeted individual’s choice environment in highly personalised ways, affecting individual user’s behaviour and perceptions [. . .] in directions preferred by the choice architect” in ways that are extremely difficult to resist due to their covert nature (Yeung, 2017, p. 130). Political agency is, therefore, perhaps the first victim of algorithmic politics: even though we may think that we possess agency and make choices on our own and of our own volition, the choice environment is likely to be rigged in such a way as to predetermine us to choose in a particular way. In other words, data-rich political actors will know in advance what desires and expectations are to be satisfied, what messages are to be uttered, and what affective glue is necessary to bring and bond a necessary public together. And as audiences are offered affectively engaging content that they are predisposed to like, questions of the adequacy and veracity of the utterances in question are unlikely to arise (Hildebrandt, 2016).
After all, the alteration of choice environment happens through pleasure, produced by providers of PolaaS: it is through production of affects that are known in advance to lead to pleasure (see, for example, Williamson, 2017) that hypernudging becomes so irresistible.
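Yeung’s tripartite cycle can be rendered as a toy simulation, purely to make its loop structure explicit: the choice environment (here, a simple weight vector) is configured, the target’s choice is observed as feedback, and the environment is recalibrated accordingly. The user model, option names, and step size are all invented; this is a structural sketch, not a description of any real system.

```python
# Toy model of the "tripartite regulatory cycle": configure the choice
# environment, observe feedback, recalibrate, and repeat.
import random

random.seed(0)

def simulated_user(options, weights):
    """Stand-in for a real audience: picks an option with probability
    proportional to how prominently the architect has weighted it."""
    return random.choices(options, weights=weights)[0]

def hypernudge(options, target, rounds=50, step=0.5):
    weights = [1.0] * len(options)  # (1) initially neutral environment
    for _ in range(rounds):
        choice = simulated_user(options, weights)  # (2) feedback data
        idx = options.index(choice)
        if choice == target:
            weights[idx] += step  # (3) reinforce the preferred framing
        else:
            weights[idx] = max(0.1, weights[idx] - step)  # demote rivals
    return weights

options = ["preferred_outcome", "alternative"]
weights = hypernudge(options, target="preferred_outcome")
print(weights[0] > weights[1])  # -> True: the environment is now tilted
```

Note the covertness the sketch makes visible: the simulated user always “freely” chooses among the same options, yet every round of feedback tilts the environment further toward the architect’s target.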
Nevertheless, it would be overly simplistic to reduce algorithmic politics to the subjection of individuals by data-rich political actors. In fact, the loss of agency must be seen as running both ways. In PolaaS, political actors are reduced to service providers, maximizers of pleasure and consumer satisfaction. Instead of being active agents shaping strategy, PolaaS providers become restricted to voicing the results of algorithmic analysis of their target audiences. Once again, actual and predictive affective dynamics determined through algorithms limit the choice environment, this time for PolaaS providers. Hence, a truly posthuman picture of algorithmic politics is taking shape: governance through the collection and analysis of data and the ensuing design of affective flows that disempowers both political actors and their audiences or, at the very least, drives them away from their privileged positions.
Ultimately, agency becomes dissolved across multiple layers of information infrastructures, software innovations, and agglomerations of data, enabling management and control that, although based on patterns of human action, emotion, and affect, is machinic, beyond the human (Iliadis, 2018). Indeed, as algorithms increasingly become “autonomous actors with power to further political and economic interests on the individual but also on the public/collective level,” essentially themselves becoming “actors and policy-makers” and affecting how the world is perceived both individually and socially as well as the ways in which individuals act (Just & Latzer, 2017, pp. 245, 254), politics is entering a posthuman phase. As the role played by humans shrinks, agency is relocated to “an assemblage of human actors, code, software and algorithms” (Beer, 2018, p. 476) rather than remaining the exclusive domain of Aristotelian humans as political animals. The result is a seemingly paradoxical situation whereby, on the one hand, as shown earlier, engagement with politics has become me-centric due to the audience expectation of satisfaction and the fulfillment of one’s expectations and desires while, on the other hand, the human capacity to act is diminished, thus seemingly decentring that same “me.” This paradox is, nevertheless, mediated through PolaaS, whereby providers offer readymade pleasure maximization services on a competitive basis. Such competition, as well as the unconstrained (as discussed earlier) nature of consumption, allows for me-centricity, while the premade (data-informed and algorithmically generated) nature of the offering nevertheless underscores the posthuman nature of such politics. At the same time, PolaaS providers themselves cannot be seen as bucking the posthuman trend, as they are not active makers of their offering but mere purveyors of it.
The above developments play into the broader turn toward posthumanism that signals the end of understandings of reality that are top–down and manifest clear unilinear causal chains; instead, the proposed alternative involves rich, fuzzy, and nonlinear causation in which multiple causes and multiple effects are fused together through recursive feedback loops (Chandler, 2015). Crucially, it also manifests a strongly decentred understanding of the self—one detached from specific bodies but existing, instead, as “an assemblage,” that is “a distributed, networked self that constantly emerges at various intersections between humans, non-humans, objects, materials and energy flows” through a fundamentally algorithmically driven process of identity construction (Pötzsch, 2018, pp. 3314, 3316). Such deprivileging takes place as individuals become immersed in “complex socio-technical networks,” rendering identities, individual and collective alike, into “fluid, hybrid and constantly evolving” ones, always merely partial results of the aforementioned sociotechnical networks (Pötzsch, 2018, p. 3314). As a result, instances of something being uniquely and specifically human, independent of the digital and the machinic, are increasingly difficult to come by (see, for example, Herbrechter, 2013; Mahon, 2017). Political processes, from information acquisition to political behavior proper, appear to be no exception, with the human part characterized by audiences seeking to maximize their pleasure and PolaaS providers seeking to maximize consumer satisfaction, both sides depending on algorithms to achieve their ends.
Conclusion
In today’s largely data-centric environment, truth-claims can be manufactured specifically to maximize the pleasure experienced by target audiences, affirming their desires, hopes, and fears or suggesting aspirational visions derived from preexisting opinion. As striving for pleasure is a natural feature of human life, and individuals are capable of learning to maximize their satisfaction by repeating and emulating previously successful behaviors (e.g., siding with opinion-congruent fake news), it is only logical that positive affective impact replaces truthfulness as the main criterion for news selection.
The onslaught of diverse media content, caused primarily by the rapid spread of mobile devices and social media, has meant that there is usually no time even to cognize and consider the truthfulness of a message: decisions on consumption have to be made instantly, usually by means of an emotional click with particular content (or the lack thereof). Simultaneously, this onslaught has caused immense competition over human attention, thereby forcing media content providers and political actors alike to compete by offering immediate gratification to their prospective audiences. All in all, then, the power of post-truth lies in pleasure. Moreover, it is also this pleasure or, more precisely, the ability to determine the preconditions of pleasure and then to induce it that gives rise to posthuman politics.
Technological developments related to data collection and analysis have enabled political actors to offer immediate gratification and pleasurable affects with great precision. This, however, has raised important questions regarding human agency. If humans are associated by algorithms instead of associating themselves and are affectively nudged into making predefined political decisions, it is clear that independent human agency is significantly reduced. At the same time, political actors are reduced to mouthpieces for algorithmically selected content. As a result, algorithmic politics can be claimed to have introduced a posthuman era in politics—one in which human agency must be shared with (and is likely exceeded by) algorithms and other technological elements. In this context, political processes are transformed into the competitive provision of politics-as-a-service, in which political entrepreneurs aim to offer pleasure-maximizing services to their consumers based on the already known characteristics of the latter.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
