Abstract
From a socio-theoretical and media-theoretical perspective, this article analyses exemplary practices and structural characteristics of contemporary digital political campaigning to illustrate a transformation of the public sphere through the platform economy. The article first examines Cambridge Analytica and reconstructs its operational procedure, which, far from involving exceptionally new digital campaign practices, turns out to be quite standard. It then evaluates the role of Facebook as an enabling ‘affective infrastructure’, technologically orchestrating processes of political opinion-formation. Of special concern are various tactics of ‘feedback propaganda’ and algorithmic-based user engagement that reflect, at a more theoretical level, the merging of surveillance-capitalist commercialization with a cybernetic logic of communication. The article proposes that this techno-economic dynamic reflects a continuation of the structural transformation of the public sphere. What Jürgen Habermas had analysed in terms of an economic fabrication of the public sphere in the 1960s is now advancing in a more radical form, and on a more programmatic basis, through the algorithmic architecture of social media. As the authors argue, this process will eventually lead to a new form of ‘infrastructural power’.
Introduction: On Facebook and ‘snake oil salespeople’ 1
On 30 December 2019, Andrew Bosworth – former Facebook vice president (VP) of Ads and Business and today VP of AR/VR – wrote a memo to his staff on his internal Facebook page, sharing some ‘Thoughts for 2020’. In his post, ‘Boz’ – who had personally been responsible for political ads during the 2016 US presidential campaign – reflected on recent years and noted, with regard to Donald Trump’s election victory: ‘He got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser’. He also referred to a company that has gained extensive notoriety since: Cambridge Analytica (CA).
Bosworth dedicated a substantial portion of his statement to CA. On the one hand, he established, in sober analytical terms: “The company […] started by running surveys on Facebook to get information about people. It later pivoted to be an advertising company, part of our Facebook Marketing Partner program, who other companies could hire to run their ads. Their claim to fame was psychographic targeting. […]; their ads performed no better than any other marketing partner.”
On the other hand, his choice of language to refer to this company, which produced the greatest data scandal in Facebook’s company history and forced Mark Zuckerberg to appear before all kinds of special inquiry committees, was in some instances a little more emotional: “In practical terms, Cambridge Analytica is a total non-event. They were snake oil salespeople. The tools they used didn’t work, and the scale they used them at wasn’t meaningful. Every claim they have made about themselves is garbage.”
Bosworth’s memo, which was widely liked, shared and commented upon, provided an intriguing insight into Facebook’s internal corporate communications. Not only did it show, in terms of its content, that some form of internal accountability actually existed: as ‘Boz’ admitted, ‘we were late […] on data security, misinformation, and foreign interference’. The downplaying of CA’s role also drew attention to another factor, namely, the significance of Facebook’s own platform. ‘So was Facebook responsible for Donald Trump getting elected? I think the answer is yes’. 2
This statement already contains the hint that the social network is more than a mere information broker or intermediary of individual communication. The ‘advertising platform’ 3 increasingly functions as an infrastructure 4 of public opinion-formation and as a meta-medium for the digitally transformed ‘industry of political marketing’, 5 whose programmatic preconditions and effects are subjected to more detailed reflection in what follows. To start off, the case of CA is reconstructed from a media-technical and media-theoretical perspective – not as an attempt to add a further discursive facet to this company’s controversial psychographic microtargeting, 6 but in order to investigate CA’s established ‘standard campaign practices’ 7 and thus the mechanisms of the Facebook platform’s operation. The aim is to illustrate, at a more fundamental level, how Facebook controls an affective infrastructure that combines ‘surveillance-capitalist’ 8 commercialization with a cybernetic logic of control. Not only does this fusion transform political campaigning; the network also induces a techno-economic dynamic that can be interpreted as the continuation or intensification of the (infra-)structural transformation of the public sphere through the mechanisms of the platform economy. What Jürgen Habermas analysed as the imbuing with power [Vermachtung] and manufacturing of the public sphere as early as the 1960s is continued here at the upstream level of a cybernetically oriented platform, threatening to eventually manifest itself as a ‘programmed absence of alternatives’.
The (standard) case of Cambridge Analytica
In March 2018, the Guardian and the New York Times published investigative reports on a ‘data leak’ at Facebook. Together with whistleblower Christopher Wylie, they turned the global public spotlight on the company CA – a spin-off of the Strategic Communication Laboratories Group (SCL). 9 It transpired that CA had gained access to the personal data of 87 million Facebook users via Facebook’s ‘Friends API’ and the quiz app thisisyourdigitallife, developed by behavioural psychologist Aleksandr Kogan, and had then used this information to compile personality profiles and deploy them during the Trump campaign in 2016. Even though the use of data provided by third parties such as CA had constituted a violation of Facebook’s terms of use since 2015, the company was able, until it was exposed, to present itself as a supposed specialist in psychographic microtargeting (the targeted appeal to individuals based on psychological criteria) – all the while advertising its key role in the MAGA campaign and claiming to operate databases that had accumulated some 5,000 data points (DP) on each of 230 million Americans. 10 Facebook was made aware of CA’s violation of its rules as early as 2015, but neglected its supervisory duties. As a result, a global scandal would gradually unfold, which again turned the spotlight – alongside questions of commercial user profiling – on the issue of data privacy and data protection in social media. 11
In this context, CA’s practices have become the focal point of rather binary debates, 12 in which one side identifies the company as the ultimate incarnation of digital voter manipulation 13 and the other – as in the case of Bosworth – considers it a hyped-up marketing firm whose methods amount to no more than ‘snake oil’ or ‘bullshit’. 14 In light of the current state of facts as well as the investigation of the British Information Commissioner’s Office (ICO), a more differentiated picture emerges: contrary to information provided by CA itself, it is rather implausible that the company conducted systematic psychographic microtargeting, even though it possessed ‘psychographic inventories’ 15 of around ten DP for 30 million individuals. It is thus unlikely that the company engaged in any large-scale modelling; instead, in the run-up to the 2016 election, it conducted somewhat crude experiments on the responsive segmentation of target groups. 16 Further figures also required correcting: the bulk of consumer and voter data used by CA was commercially available, and its extent and data-point accumulation had to be rated well below the advertised figures – the number of individuals recorded was 160 million at most, while the number of DP apparently never exceeded 3,000.
What is remarkable about the ICO report is that, although the datasets may have been quite substantial, they reveal hardly any exceptional practice at all. Instead, it is emphasized that CA almost exclusively, and certainly extensively, applied ‘commonly available technology’ and ‘widely used algorithms for data visualization, analysis and predictive modelling’ 17 – and, if anything, may have supplemented its data with those from Facebook (570 DP on 30 million individuals, accumulated via Kogan’s app). Correspondingly, CA must largely be regarded as the norm of digital campaigning. Yet this casts the established practices of political marketing in an even poorer light, and ultimately highlights how public opinion-forming and electoral campaigning are being rearranged (including technologically) via the meta-medium Facebook. Although – contrary to the firm’s own self-portrayal – CA was anything but the secret mastermind behind the MAGA campaign, the company did constitute a vital media-technical link and – even apart from data analysis and modelling – was involved in the orchestration of election ads, as discussed in detail in the following section.
Digital campaigning and informative engagement
As is well-known, Facebook has developed a surveillance-capitalist infrastructure that allows marketing companies like CA to identify, bundle and target users according to ‘relevant information’ 18 (gender, place of residence, preferences, etc.). For the Trump election campaign, CA developed an additional dashboard named ‘Siphon’ which recorded in real time any specific campaign ad’s performance, impressions and interactions (web page visits etc.) both according to segmentations (ranging from ‘persuaded’ to ‘persuadables’, from ‘Hispanic’ to ‘African American’) and in terms of costs. 19 This way, the analytical tool enabled the MAGA team to conduct a data-based fine-tuning of the campaign, and the hierarchization and selection of individual ad cycles. This opened up a certain scope for strategic corrections: as soon as a message was seen to have insufficient impact, failing to generate a response, additional ad slots could be purchased via Siphon so as to accelerate the dissemination of an ad or extend its reach, including beyond public visibility, with Facebook offering what it then called ‘dark posts’: individualized, non-public messages shown only to preselected users. 20
This segmented, algorithm-based and – in the words of Habermas – ‘temporarily manufactured political public sphere’ 21 primarily followed economic parameters: the costs of these campaigns ranged from $10,000 to $100,000 and totalled about $44 million in 2016 alone (while the Clinton campaign invested some $28 million). Through this, multiple ad variants (differing in terms of colour design, slogans etc.) were circulated that focussed especially on ‘engagement’. 22
As ex-CA staff member Brittany Kaiser writes: “For example, if the campaign put out a video […] it could put money behind a few different versions of the ad and watch its performance in real time to determine how many people were watching, whether they paused the video, and whether they finished watching the video. Did they click through links attached to it […]? Did they share the content with others?” 23
The focus on engagement rates reflects the influence of Facebook, which has turned political advertising – just like commercial advertising – into a business of activation, of maintaining ‘economies of action’. 24 Facebook does not sell advertisers mere ad slots, as in print media, in order to address a mass audience unidirectionally and uniformly; rather, it sells a certain group-specific output, orchestrated via the newsfeed algorithm – for example, web page visits or likes. This process itself is both simple and automated: once an advertiser has identified or segmented a ‘key target group’, Facebook scans for profiles that show similar preferences, using tools such as ‘Custom’ or ‘Lookalike Audience’. It thus generates correlations and seeks to initiate – through platform products like sponsored or (in the past) dark posts, or via surveys or lotteries – a process that generates clicks, shares, comments and thus a cycle of constant information-harvesting. In the case of the MAGA campaign, this might have included ads for a rally, an ‘Official Approval Poll’, or a raffle to win a MAGA cap, whereby anyone interested would enter their phone number and email or street address. 25 This way, the team accumulated data that enabled a more personal appeal and an improved coordination of funds – with greater investments promising a higher degree of activity.
For the most part, ads were created that would display – as is not uncommon in a digital ‘culture between overproduction and recombination’ 26 – only slightly differing but in each case individualized content. This was done in order to prompt a lasting phatic engagement among Trump followers. Both the design of the ads and the issues they addressed relied more on machine work than on editorial skills; they were based on user profiles, preferences and characteristics, as a result of which the ads were placed in the respective newsfeeds in a fragmented but nonetheless standardized manner. On the whole, the Trump campaign ran around 5.9 million ad variants on Facebook in 2016 – compared to the Clinton campaign’s 66,000. At the same time, 84 per cent of the ads solicited voters for some form of activity (e.g. donations), whereas only about half of Clinton’s ads focussed on such requests. 27 The bulk of CA’s work consisted of rather conventional digital campaigning: the company did not have to conduct a psychographic screening of the electorate in order to communicate campaign messages. Indeed, CA largely drew on the ad mechanisms of platforms like Facebook, their modes of demographic segmentation and services that served to create a target group-specific kind of permanent background noise. 28 Besides these informative feedback loops, which aimed at ‘a mood of conformity’, 29 CA also focused on feeding a programmatic polarization premised on the massaging of affects.
Antagonistic network affects: Decontextualization and demobilization
The striking aspect of CA’s campaigning activities was that it took advantage of algorithm-based advertising placement on Facebook, which privileges not the highest bidder but the ad that will – most likely – provoke more attention, that is, feedback. With this functional logic of an attention – or engagement – economy operating in the background, the data company created election ads guided less by the ideals of rational discourse than by the ‘affective politics of digital media’. 30 The aim, at all times, was to orchestrate ‘network affects’, 31 and thus to generate emotional feedback, which was primarily expressed via images, videos and memes – Facebook’s algorithm prioritizes audio-visual content over text – and virally shared via the connective, constant renewal of information flows. In the process, CA largely appealed to an aggressive friend-or-foe mindset that served to distinguish the Trump campaign or set it into contrast, relying on decontextualization when it came to image-text compositions (fake news etc.). It thus deployed a logic of ‘filter clash’, 32 that is, an intentional collision of political positions.
One of CA’s particularly catchy ads was a heavily edited video of a campaign appearance by Michelle Obama: in the original, from her husband’s campaign in 2007, she had emphasized that even though the family was currently immersed in the race to the White House, the two daughters’ education still had priority, because, after all: ‘If you can’t run your own house, you certainly can’t run the White House’. CA took these remarks out of context, recombined them, directed them against Hillary Clinton, and publicized them under the title ‘Can’t Run Her Own House’. Apart from insinuating that Clinton’s family values were tarnished, the company was also deploying a certain sexism – as Bill Clinton’s Lewinsky affair was hinted at – thereby conveying the impression that it was not only ‘Democrat-against-Democrat’ but also ‘woman-against-woman’. 33 The aim of this framing was to increase the likelihood that Democrats who prioritized conservative family values over their dislike of Trump would not identify with Clinton and thus refrain from voting for her. 34 Similar strategies were pursued through videos in which Clinton referred to Trump supporters as a ‘basket of deplorables’ or to members of youth gangs as ‘super predators’ (1996). 35 Such ads were specifically placed in the newsfeeds of African Americans in swing states, intended to serve a firm ‘deterrence’ 36 or even voter suppression. They were correlated, as Samuel Woolley and Douglas Guilbeault note, with ‘less conventional goals: to sow confusion, to give a false impression of online support, to attack and defame the opposition, and to spread illegitimate news reports’. 37
While Barack Obama had already used platforms like Facebook to mobilize voters, 38 and his campaign had always sought to harness a ‘positive culture of affects’, 39 CA’s ads frequently focused on the exact opposite, namely, inducing ‘negative affects’. 40 Affects like disgust, fear or anger were specifically targeted – negative feedback is easier to generate and thus less costly, and it spreads more quickly and more widely in social networks. 41 Another focus was on efforts to set spirals of indignation in motion, to concentrate ‘irritabilities’ 42 on social media: after all, the ads placed were supposed to have both an inward effect – boosting the activation of the friend, the Republican voter base – and an outward one – the deactivation of the enemy, the demobilization of the opposition. 43 From this perspective, the aim was less to turn Democratic voters into Republican ones – as has often been insinuated in the public debate 44 – than to apply antagonistic strategies heightening the ‘us vs. them’ division and undermining the legitimacy of opponents’ demands. 45
In this sense, the campaigns underscore that CA used the variable differentiation of communication in terms of image, text, video or meme not only for information extraction but also for prompting affects, in order to feed polarization. They thus mainly served the programmatic development of a populism that relies on group dynamics rather than mass dynamics. On the one hand, network processes were pursued that aimed at the formation and engagement of communities of affect, that is, ‘particular collectives’ and, connected to this, ‘the promise of an immediacy of political participation’. 46 On the other hand, CA also pursued the exact opposite, that is, democratic de-participation, a kind of calculated de-politicization. This ambivalent correlation itself reveals how the company transposed Facebook’s commercial, affective forms of communication to the political realm and how it deployed the platform-economy mechanisms of ‘rating’ and ‘interaction’ for the purposes of heightening antagonisms. In short: ‘Facebook has built an algorithmic ad-buying system with a mercenary drive toward results, and Trump’s campaign exploits it tirelessly’. 47
The affective infrastructure: Facebook’s interactive platform experimentalism
That CA’s advertising strategies, as outlined above, have long since become business as usual is underscored by the fact that the company was working on ‘affect heuristics’ 48 – that is, the production of affect in specific Facebook groups – as early as 2014. At a more basic level, the significance of Facebook’s infrastructure is also reflected here, as it is precisely the programmed objective of the newsfeed algorithm that allows for polarizing selected target groups, that is, for deploying the platform as a ‘radicalization machine’. 49 As a case in point, Wylie describes the significant change in the newsfeed that a like for the ‘Proud Boys’ prompted, in contrast to liking sites such as Walmart, because ‘liking an extreme group […] marks the user as distinct from others in such a way that a recommendation engine will prioritize these topics for personalization’ and ‘start to funnel the user similar stories and pages—all to increase engagement’. 50
Alongside harnessing such modes of personalization, the infrastructurally immanent ‘affective feedback loops’ 51 of the newsfeed were also used for activation: that is, mechanisms ‘by which affect circulates from the user, into the algorithmically determined product, which returns “desired” content back to the user’. 52 Wylie, too, explains these ‘ludic loops’ as ‘“variable reinforcement schedules” […] that create anticipation, but where the end reward is too unpredictable […] to plan around’ – that is, methods that create a personalized, informative ecosystem and provoke a ‘self-reinforcing cycle of uncertainty, anticipation and feedback’. As a result, Wylie states, the platform operates with the ‘randomness of a slot machine’. 53 User attention is additionally absorbed through flashing messages, live videos and photos, as well as by interface features like the ‘pull-to-refresh’ mechanism (mediated via the app’s buffering or loading symbol). 54 Content is made to have the effect of rewards, and the ‘infinite scroll’ down the endless newsfeed caters to the oft-cited ‘fear of missing out’ (FOMO). 55
Facebook’s founding president Sean Parker already encapsulated the main objective of his company’s ‘social infrastructure’ (Zuckerberg) in the question, ‘How do we consume as much of your time and conscious attention as possible?’, construing the ‘like’ button as ‘a social-validation feedback loop’, emphasizing the ‘dopamine hits’ it triggered and defining it as a mechanism which ‘[exploits] a vulnerability in human psychology’. 56 The network’s affective structure follows a behaviourist and behavioural economics-based rationale. Through its interface design it seeks to increase the likelihood of a certain behaviour and to pre-structure spaces of action and reaction in a machine-readable form (from the ‘wow’ emoticon to its angrier variants). Indeed, tools such as push messages and ‘poking’ are quite literally reminiscent of ‘nudging’, the push ‘in the right direction’, 57 whereby the desired behaviour – which, incidentally, must be constantly elicited anew – is that which generates information. 58 Facebook makes those platform instruments available to a wide range of political actors; the corporation likes to cite this fact in order to portray itself as politically unbiased – correspondingly, in 2016, Zuckerberg denied any interference with the US election, insisting that Facebook is a neutral ‘tech company’, as opposed to a content-based ‘media company’. 59 The more nuanced insight that technology could theoretically be conceived of as neutral, but that Facebook in particular never acts merely as a neutral technological medium operating in isolation from specific interests or logics, is substantiated not only by the platform’s fundamentally profit-oriented nature but especially by its programmatic-interactive experiments.
For example, in-house research was conducted to find ways of influencing user behaviour by redesigning the interface and/or newsfeed. In the context of the ‘61-Million-Person Experiment in Social Influence and Political Mobilization’ during the 2010 congressional elections, a group of Facebook users were not only provided with the option of clicking an ‘I voted’ button on their screen but also shown information on whether their friends had voted. According to Facebook’s researchers, the effect of this ‘social message’ was that the probability of users seeking information about their polling stations rose by 0.26 per cent and the likelihood of users casting their vote increased by 0.39 per cent – which translated into an additional 340,000 votes. 60 In another experiment, on ‘emotional contagion’, 61 Facebook was able to show in 2014 that users produced more negative posts as soon as their newsfeed contained fewer positive posts, thereby demonstrating that it could influence user behaviour through the algorithmic curation of content. In retrospect, these experimental designs reflect an interactive ‘platform experimentalism’, 62 which seeks to make the company’s own – scientifically founded – tools available to both businesses and political interest groups. 63 As Wylie writes, Facebook was ‘frequently a supporter of this psychological research’, granted academic researchers like Kogan ‘privileged’ access to its users’ private data – certainly not simply for altruistic reasons – and had already in 2012 filed for a US patent for ‘Determining user personality characteristics from social networking system communications and characteristics’. 64
In the scholarly literature on political communication, platforms are often conceptualized merely as neutral distribution channels. 65 Yet experiments in strategic advertising within the engagement economy highlight that Facebook, with its profit motive, contradicts such interpretations. The platform is anything but a mere medium between sender and receiver. Instead, it forms its own mode of – both channelled and channelling – communication, converts network effects into network affects, and forcefully applies its commercial mechanisms, often in combination with cybernetic ones.
Cybernetic communication and the logic of feedback propaganda
Looking at the infrastructure of Facebook, we can identify the stimulation of ‘social contagion effects’ 66 through its feedback logic, effects that translate digital communities into ‘nervous systems’, and we may thus discern, more generally, the merging of commercialization with cybernetic logics of control. 67 From this angle, the social network denotes a place where news – regardless of whether it is true or fake – can be scaled (indeed, escalated) at high speed, thereby prompting a productive dynamic that is essential for the (information) system Facebook: for the latter, the value added of an affective polarization (see CA) arises not so much from supposedly pluralistic disagreement. The objective is, rather, the classic cybernetic quest for exploitable information and communication, the channelling and acceleration of its flow rate (likes, shares, etc.) – a constant circulation that is perpetually readjusted through a controlling, automated curation of the newsfeed. What develops is ‘communication-breeding communication’, 68 independently reproducing itself and intensifying in a virtually infinite recursive loop.
Actors such as CA represent, at best, variables in the function of the platform, the expression of a cybernetic-capitalist infrastructure that provides the programmatic basis for feedback propaganda. This latter includes the placing of interaction-enticing ads, whose reach (see Siphon dashboard) is tested and controlled in real time in experimental feedback settings, and which are geared towards variability and virality. At the same time, it harnesses users as forwarding hubs for polarizing election ads. 69 Gary Coby, Digital Director of the Trump campaign in 2016, explains what he calls ‘A/B testing on steroids’: ‘They [Facebook] have an advantage of a platform that has users that are conditioned to click and engage and give you feedback. […] On any given day… the campaign was running 40,000 to 50,000 variants of its ads, testing how they performed in different formats’. 70 The cybernetic logic thus encompasses an ‘experimental epistemology’ (McCulloch), it is recursive work in progress, constantly creating new outputs, which are fed into the system as inputs and thereby create new outputs – entirely along the lines of the homoeostatic system which relies on constant activity and commotion for productivity, that is, for its survival.
This dynamic not only reflects Norbert Wiener’s early definition of feedback – ‘the control of a system by reinserting into the system the results of its performance’ 71 – but also points to a cybernetic concept of communication: after all, for Facebook, the value of a message lies not so much in its actual substance but rather in its formal informational content. The platform thus updates Claude Shannon’s mathematical communication model, which emphasized the process character and frictionless manageability of information flows. In the foreword to Shannon’s The Mathematical Theory of Communication, his colleague Warren Weaver underscores this focus on calculus and operativity rather than sense or semantics by pointing out that it is irrelevant from the perspective of information technology whether a message comprises the entire text of the Bible or simply the word ‘yes’. 72 Neither was CA’s choice of ad variants based on the quality of their content: all that mattered was their affective and activating potential, that is, whether or not they generated information and follow-up communication. The selection of any particular message was made not on the basis of truth content but valued in the currency of engagement. This reduction of communication to mathematical parameters ultimately not only explains the cybernetic machinery’s disinterest in regulating content but turns it into an ideal interface for polarizing practices – indeed, in the context of fluid newsfeeds, one that even privileges such practices.
Cybernetic governmentality
Apart from the cybernetization of communicative action, the case of CA also illustrates the cybernetization of subjects, addressed here in terms of the ‘best possible conductor of social communication, the locus of an infinite feedback loop’. 73 In fact, early cybernetics proceeded from an image of subjects not as reflective but as adaptively reacting actors that adjust to changing environments. As early as the 1940s and ‘50s, both Wiener and W. Ross Ashby turned their attention to the ‘adaptive behaviour’ 74 of individuals and systems, albeit with differing emphases. They did so some time before the scholar of cybernetic politics Eberhard Lang asserted individuals’ existential political ‘need for instructions’, which ‘behavioural research’ 75 helped to accommodate. The subject thus appeared as a behaviouristically primed black box, which mattered not in terms of intrinsic motivation or existential psychology but, if at all, with regard to registrable and quantifiable inputs and outputs. This, decades before behavioural researchers like Kogan enlightened us about the potentials of social networks through their data-based experiments. 76
Likewise, Facebook does not burden itself with speculations on the potential intentionality of an action, but concerns itself with correlations and measurable patterns, a kind of ‘psychology without the psyche’ (Hans Jonas) that makes it possible to infer character traits, preferences or expectable voting behaviour. The aim is therefore simply to valorize quantifiable behaviour and, if need be, to adjust the stimuli in such a way that they have an affective impact and, by being fed into a communicative circuit, enable feedback. Facebook’s transformation of communication into an algorithmically readable, mathematically formulated operation essentially puts into concrete terms what Tiqqun describes as the ‘cybernetic hypothesis’: 77 the claim that modes of social behaviour can be shaped, or modelled via feedback loops. In this context, Tiqqun identifies a subject deprived of all substance whose actions can be adjusted by shaping their environments. The Facebook platform features similar programmes of attempted ‘behavior modification’ 78 as it decodes and analyses its users’ behaviour patterns and structures them in accordance with emotional cybernetics – and therefore, in a way, governs them.
If we define government along the lines of Michel Foucault, as ‘a total structure of actions brought to bear upon possible actions’, as a setting which ‘incites, […] induces, […] seduces’, 79 then Facebook’s platform marks the establishment of a new art of government sui generis – a cybernetic governmentality. It may not pursue any definitive course in terms of political partisanship, but it is geared towards generating feedback, the communication flows so vital for its business model, and towards the sovereign control of the channels. That said, it should be emphasized that Facebook’s cybernetically structured communication by no means creates the digital machine à gouverner that early critics of cybernetization feared, one which moves the masses at the push of a button – or, as is often suggested, even ‘hacks’ voters. 80 Today’s control reflexes are more subtle, rarely unidirectional, and focus – given their reach – on systematic productivity under the banner of a constant feedback dynamic. It is therefore only logical for Facebook to have no interest whatsoever in the hierarchical registers of suppressing communication so typical of classic propaganda. Instead, the codes of communicative evolvement are orchestrated via ‘information control’. 81
Hence, the cybernetic art of government relies on the constant expansion of communication channels, indeed of the platform logic itself. It crucially depends on the infrastructure incessantly supplying ever-more affective stimuli, an ever-greater variety of options and nudges, so that even when a problem – a disruption or, in cybernetic terms, an instance of noise – arises, substantial restrictions in the form of excommunications (as with Trump in January 2021) seldom occur. The objective, at all times, is the circulation of information and its (at best homoeostatically stabilizing) self-regulation, as a result of which any new crisis – or increasing demands with regard to content regulation – is met merely by devising new tools aimed at generating further responsive networking (even the incorporation of users into content moderation, via the opportunity to ‘flag’ content, can be read as an instance of information generation). In this system of objectives, the diktat of informative connectivity has always been paralleled by the affirmation of a specific form of communication, which construes, designs and structures individual and public opinion-forming itself as a sequence of machine-readable, commercializable signals and choices.
The platform economy’s infrastructural transformation of the public sphere
The ‘social infrastructure’ called Facebook potentiates what Habermas refers to as the ‘transformation of the public sphere’s political function’ that accompanies the manufacturing of the public sphere and its being imbued with power [Vermachtung]. 82 The philosopher famously diagnoses ‘a kind of refeudalization’ 83 of the public sphere with a view to its commercialization (including in the context of the emergent ‘opinion-molding services’ 84 and the increasingly dominant field of public relations), and traces how, from the 19th century onwards, this public sphere was increasingly shaped by private business interests via the business press. With regard to the role of platforms such as Facebook and marketing firms like CA, we can observe a continuation, or even an intensification, of this ‘structural transformation of the public sphere’, following the logic of the platform economy. Social networks long managed to portray themselves as the incarnation of a radically democratic pluralism, as trailblazers of ‘equal access’ and a digital ‘culture of participation’ – one in which matters of public interest could, it was claimed, be freely discussed without any forms of domination. Yet, today there is no question – despite all the ‘emancipatory potential’ that Habermas concedes even to the new digital public sphere that has been freed of ‘admission requirements’ 85 – that these intermediaries not only accelerate the commercialization of the public sphere, but also concentrate power structures at a programmatic level.
From a media-technical perspective, the platform economy’s infrastructural transformation crucially rests on cybernetic mechanisms which establish a dialectic of engagement and control in order to thereby enable a fragmentation of the public sphere, as illustrated here with a view to Facebook and CA. In the process, the platform, firmly rooted in the feedback logic as it is, controls a highly responsive infrastructure which, in contrast to unidirectionally emitting mass media like print or TV, curates specific content – adhering to the concept of fluid inputs and outputs – via reactivity and performance, and constantly readjusts and optimizes information flows. The shifts in the digital manufacturing of the public sphere thus become apparent at the level of the algorithmic sorting of news content by the gatekeeper Facebook, 86 and in the replacement of classic media’s filter function by automated practices of rating and selection such as the newsfeed algorithm. 87 Consequently, Facebook acts as a pre-existing, preselecting program architecture that decides which content and which public sphere is shown to whom, and when. At the same time, given the proprietary shield – Facebook’s newsfeed algorithm remains a business secret – and the constant programmatic revision of the code, neither are the selection mechanisms publicly transparent nor do content moderators’ decisions always follow consistent or transparent criteria. 88
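The replacement of the editorial filter by automated rating and selection can be rendered schematically. Since the actual newsfeed algorithm remains, as noted, a business secret, the sketch below is a hypothetical construction – all field names, weights and items are invented – whose sole purpose is to show the structural point: items are ranked by predicted engagement, not by any notion of social relevance:

```python
# Hypothetical sketch of engagement-based curation. The weights and signals
# are invented for illustration; no real ranking system is reproduced here.
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    predicted_clicks: float   # behavioural predictions stand in for relevance
    predicted_shares: float
    affinity: float           # fit with the individual user's profile

def engagement_score(item: Item) -> float:
    # Arbitrary illustrative weighting: shares and affinity count for more.
    return item.predicted_clicks + 2.0 * item.predicted_shares + 1.5 * item.affinity

def curate(feed: list[Item], slots: int) -> list[Item]:
    """Preselect which 'public sphere' a user sees: the top-N by engagement."""
    return sorted(feed, key=engagement_score, reverse=True)[:slots]

feed = [
    Item("Parliamentary budget report", 0.1, 0.05, 0.2),
    Item("Outrage-bait partisan clip", 0.9, 0.8, 0.7),
    Item("Local fact-check", 0.3, 0.1, 0.4),
]
for item in curate(feed, slots=2):
    print(item.headline)
```

Even in this toy form, the structural consequence discussed above is visible: whatever maximizes the engagement signal displaces whatever an editorial filter might have judged socially relevant.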
Yet, if news is increasingly programmed and sorted according to the patterns of personal interactions and preferences rather than social relevance or rationality, and users – or voters – are ‘attended to’, at best, as ‘key target groups’ or audiences, then there is a risk – and, more recently, Habermas has in fact conceded as much – that a collective frame of reference for political opinion-forming is lost 89 and replaced with situational affect. Beyond the normative horizon of a deliberative, transparent discourse, there also seems to be an immanent ‘establishment of a multiplicity of parallel public spheres’, which, ‘in extreme cases’, can erode ‘the common basis for a debate between subjects’. 90 This dynamic is growing in scope precisely with regard to Facebook – see also CA’s demobilization strategies.
The programmed lack of alternatives and infrastructural power
Against this backdrop, the Facebook platform constitutes a ‘persuasive technology’, an intrusive program architecture, which structures the (political) public sphere mainly along the principles of the attention or engagement economy and in-forms it via connective immediacy – understood here, in a somewhat reductionist manner, as click-based participation [Teilnahme], as opposed to participative inclusion in the sense of partaking [the broader dimension involved in Teilhabe, in the sense of having a share in something]. 91 Under the banner of cybernetic governmentality’s forms of interaction, it not only absolutizes and monetizes specific modes of information and communication, but also increasingly manifests as what can be described as a programmed lack of alternatives. After all, ‘the ultimate objective of Internet companies such as Facebook’, as William Davies summarizes the goals of the ‘gatekeeper platforms’ 92 from Silicon Valley, ‘is to provide the infrastructure through which humans encounter the world. […] According to this vision, when the mind wants to know something, it will go to Google; when it wants to communicate with someone, it will turn to Facebook’. 93
This same aspect has consequences for the political arena, too: if a party wishes to reach its voters and disseminate its positions and agenda in our digital present day, it is forced to adapt to the platform’s logics, codes and experiments – as is also suggested, incidentally, in the Bosworth memo. 94 Facebook is therefore anything but an unbiased company or neutral medium; rather, it acts as a para-democratic infrastructure 95 whose monopoly increasingly arises from an epistemic predominance, that is, from its technological know-how itself. After all, the platform’s programmed lack of alternatives also engulfs both the expertise of political or public opinion-formation and the structure of election campaigns, as there is one particular actor who, thanks to the most effective tools, is particularly well informed on just about everything: the platform architect. This is a scenario in which Facebook not only functions as a distribution channel but also acts as a consultant, and in which it appears plausible that digital campaigning teams will be made obsolete in the short to mid-term, because their work is outsourced entirely to the platforms. 96 It would therefore seem only logical, in an age of ‘technology-intensive campaigning’, 97 that the corporations’ experts actively advertise their products to political parties, that the campaigns collaborate closely with Facebook, and that ‘symbiotic relationship[s]’ 98 may even develop between political actors and Silicon Valley. Correspondingly, Kreiss and McGregor concluded – not too long after Trump’s election in 2016 – that we are witnessing a rising dominance of ‘click-bait campaign ads, where the technological ability and incentive to monetize engagement by both firms and campaigns leads to increasingly sensationalized and targeted political communication’. 99
This tendency was demonstrated and indeed carried further by the modes of communication that dominated the 2020 US presidential campaign – owing not least to the rise of a long-since globally active ‘influence industry’. 100 Even Facebook’s concession that it would allow users to deactivate election ads reflects a cybernetic-capitalist objective, as each choice represents an additional, lucrative piece of information and feeds into user profiles. Apart from a general suspension of calculated targeting – and even the 7-day ban on political advertising right before the US election marks an exception that proves the rule – everything is supposed to remain within the platform’s control loop, and so the focus is always on systemic self-regulation rather than external legislation. Even the ‘deplatforming’ of Trump in the wake of the attack on the Capitol, which is as understandable as it appeared necessary at the time, underscores this logic. Facebook has carried the privatization of the public sphere to a point at which the platform, as a meta medium (not to be confused with the company’s new name), develops its own infrastructural power, which sets (community) standards, that is, the constitutive frame of its various publics, 101 and in exceptional cases decides, direct-technocratically, on a user’s exclusion. 102 In this way, the platform not only sorts relevance in accordance with engagement performance and prepares the proprietary, algorithmic space for the ‘arcane policies of special interests’, 103 as already addressed by Habermas, but it in fact determines, as the ‘gatekeeper platform’ that it is, what is displayed to users, when, how and by whom, while controlling overall access. 104 In other words: sovereign is he who decides on the communicative norm.
The platform economy’s infrastructural transformation of the public sphere thus also prompts an intensification of the Habermasian diagnosis, in the sense that even beyond the public sphere’s economization and its ‘transformation […] into a medium of advertising’ 105 the contours of Facebook’s monopoly as a meta medium become apparent. 106 From this angle, the (standard) case of CA casts light not only on particular developments in political campaigning. Rather, it represents a company-specific, programmatic power configuration, which closely intermeshes economic interests and cybernetic logics of communication. It thereby subjects the (political) public sphere to its code, indeed with an increasingly lasting effect, at a prior, infrastructural level, and centralizes it in accordance with the platform logic.
In their prudent pursuit of self-interest, the producers of the platform long managed to propagate a public image of themselves as neutral tech companies – companies which in fact model the public sphere on what Habermas calls the ‘cybernetic dream of a virtually instinctive self-stabilization’ 107 and re-program communicative exchange accordingly. However, the portrayal of social media as ‘learning organisms’ (Zuckerberg) with a non-partisan mission can hardly gloss over the fact that, in the surveillance-capitalist application of cybernetic logics to public opinion-forming, ‘the value system would [contract] […] to a set of rules for the maximization of power and comfort’. 108 Correspondingly – not least with a view to the platform operators’ botched attempts to regain control over antagonistic network affects through algorithmic tools – what Habermas established as early as the 1960s is even more valid today: ‘This challenge of technology cannot be met with technology alone’. 109
ORCID iD
Anna-Verena Nosthoff https://orcid.org/0000-0003-0890-2511
