Abstract
This response to the article ‘Conceptualising the “Algorithmic Public Opinion”: Public Opinion Formation in the Digital Age’ by Alessandro Gandini, Silvia Keeling, and Urbano Reviglio considers the concept of algorithmic public opinion and highlights that while the algorithms of digital and social media platforms and associated information and communication infrastructures now exert a substantial influence on how public opinion formation unfolds (the process) and what public opinion patterns emerge as a result (the product), these algorithms are by no means the sole determinant of process and product. It argues instead that the ordinary users of these platforms – and they are active users of these communicative spaces, not just passive ‘audiences’ for the communicative efforts of others – retain significantly more agency than this article's discussion affords them.
Introduction: Algorithmic or algorithmically informed?
To avoid any misunderstandings, let me say at the outset of this brief commentary: this article makes a very useful contribution, which is sure to generate further constructive debate. The idea of ‘algorithmic public opinion’ pushes us to further consider and conceptualise exactly what role algorithms have come to play in public (and indeed, personal) opinion formation, and to keep updating these concepts as the communicative environment continues to change around us. The idea also remains, as Alessandro Gandini, Silvia Keeling, and Urbano Reviglio acknowledge themselves, ‘inevitably partial and incomplete’ (Gandini et al., 2025, this issue, p. 12), however – it necessarily foreshortens a complex interplay of factors and forces into a single three-word term. We must take care to avoid oversimplification as we reflect on and operationalise this term, therefore, and note where it skips over points that require further problematisation. This is my aim for this commentary.
Most centrally, perhaps, the very term ‘algorithmic public opinion’ could be understood at first glance as implying that public opinion itself is now wholly determined by algorithms – but that is not the authors’ intention. A more nuanced but also more unwieldy formulation of the process and product that they describe in their article might be algorithmically informed public opinion formation: This highlights that the algorithms of digital and social media platforms and associated information and communication infrastructures now exert a substantial influence on how public opinion formation unfolds (the process) and what public opinion patterns emerge as a result (the product), but that these algorithms are by no means the sole determinant of process and product. As the authors highlight prominently, benign as well as nefarious actors – including ordinary, everyday users of news and information – draw on their imperfect but imaginative understandings of how platform algorithms work to variously exploit, enrol, or resist those algorithms in shaping information flows to their own ends. Platform operators and their algorithm designers, in turn, respond to those attempted interventions for diverse economic, political, or regulatory reasons, resulting in a constant, oblique tug-of-war between platforms and users.
Let us be cautious about ascribing too much irresistible power to these algorithms, then: They may serve to channel and shape the processes of public opinion formation, but they do not absolutely determine them to the exclusion of all other factors. Indeed, it is highly likely that, the more overt and heavy-handed algorithmic shaping of news and information flows on social media becomes, the more users will push back against such obvious interference in the free flow of information. The recent exodus of users from X under Elon Musk to alternative platforms, for example, is arguably driven at least in part by user backlash against the deliberate ‘enshittification’ (Doctorow, 2022) of its content recommendation and moderation algorithms in favour of the far-right ideologies now embraced by the platform's proprietor.
Put another way, when the authors of this article write of ‘the central role of automated decision-making processes in the circulation of informational content’, and suggest that these algorithms are ‘decisively shaping which issues of public interest gain prominence, which are suppressed, and what news sources and interpretations circulate around them’ (p. 11), our first questions must be: how central? how decisive? They seek ‘to avoid a broadly techno-deterministic perspective’ (p. 12), but come close here to embracing it nonetheless: I would argue instead that the ordinary users of these platforms – and they are active users of these communicative spaces, not just passive ‘audiences’ for the communicative efforts of others – retain significantly more agency than this article's discussion affords them.
Whose news? Which gates?
This becomes especially evident when we consider the processes of news and information circulation on social media platforms which provide the necessary foundations for algorithmic (or algorithmically informed) public opinion formation. The authors centrally draw on the concept of gatekeeping here, and cite Shoemaker et al.'s definition as ‘the process of selecting, writing, editing, positioning, scheduling, repeating and otherwise massaging information to become news’ (2008, p. 73) – but as a term that is so thoroughly associated with the routines of news production, gatekeeping has always struck me as particularly ill-suited to the discussion of news on social media. In its original meaning, news outlets control the gates of the news publication process by choosing which stories and events become news – by selecting ‘all the news that's fit to print’, in the New York Times’ famous slogan, and (by extension) rejecting all the news that's not.
Yet on social media platforms, news is not produced, but circulated, discussed, and otherwise engaged with: the gatekeeping has already happened, further upstream. For the past twenty years, I have therefore championed the term gatewatching as a description of what happens on these platforms (Bruns, 2005, 2018): instead of producing news, participants – from institutional actors through politicians, journalists, and activists to ordinary users – share and amplify the news content they have encountered elsewhere (by watching the gates of news publications). Other, alternative ways to describe aspects of the same process are news sharing or news curation (cf. Kümpel et al., 2015; Oeldorf-Hirsch and Sundar, 2015; Thorson and Wells, 2015); with an emphasis on different stages of the process, together they capture the full workflow that social media users follow as they engage with news content: discovery (gatewatching), dissemination (news sharing), and compilation and contextualisation (news curation).
I argue that it is crucial to consider the full trajectory of news from the gatekeeping of its production through the processes of gatewatching, news sharing, news curation, and indeed the user-driven as well as algorithmic amplification of news content, because it points us strongly to key limitations of the algorithms that influence public opinion formation: Generally, and certainly as described in the present article, they do not produce and publish news. Rather, much like the human participants in the process, they too engage in gatewatching, news sharing, and news curation. Indeed, even their gatewatching is somewhat underdeveloped and second-order: The algorithms employed by Facebook, X, TikTok, and other prominent social media platforms do not themselves post news content to these platforms, but merely act upon the content posted by users – including news organisations themselves, journalists, politicians, activists, experts, influencers, influence operators, bots, as well as the vast multitude of ordinary users.
This is not to deny that, when these algorithms do act on the news content shared by users, they can have an outsized influence on the further visibility and availability of that content: as the table on p. 9 helpfully illustrates, platform algorithms are commonly endowed with a variety of direct and indirect mechanisms for curtailing or extending the circulation of content. Again, though, while these are described in the table as direct and indirect gatekeeping, only the removal of content and the restriction and suspension of accounts actually qualify as a genuine gate that is opened or closed to posts and accounts – the remainder of these interventions enhance, shape, or reduce the circulation of news content, but do not stop it altogether. In reality, they are forms of targeted gatewatching, news sharing, and news curation, not gatekeeping.
This distinction matters, because as long as the gate is not closed altogether it remains possible for interested actors – from the nefarious to the legitimate, and from organised groups to loosely affiliated masses – to push back against such interventions, especially in concert. Indeed, the approaches identified in the table as ‘indirect gatekeeping’ are in fact all techniques employed by non-algorithmic actors to counter or circumvent the shaping of information flows effected by platform algorithms; some of them even seek to work around outright gatekeeping (that is, take-downs and deplatforming). Such attempts may ultimately prove futile, of course: The operators of social media platforms will always have greater control over their communicative infrastructures than users do, and can exercise that control to comprehensively remove unwanted content and accounts if they so choose. But – as recent experience with X under Musk demonstrates – such heavy-handed interventions do produce considerable user backlash, and therefore remain an option of last resort. More subtle approaches to massaging information flows by targeted algorithmic amplification or deamplification are therefore considerably more common, and remain open to counteraction by platform users.
Reaffirming the agency of users
I have emphasised the distinction between gatekeeping and gatewatching (and associated activities) here because only the latter recognises the considerable agency that platform users of all types retain even as platform algorithms inform and interfere with public opinion formation. I acknowledge that Gandini, Keeling, and Reviglio do recognise this when they write that ‘public opinion materialises as a product of interactions between users and algorithmically organised informational content, compelling users to navigate and negotiate their agency within this ecosystem’ (Gandini et al., 2025, this issue, p. 3), but that emphasis on user agency seems to fade somewhat from subsequent discussion in the article. If we maintain a dual focus both on how algorithms shape news and information flows and on how users engage with and push back against this algorithmic curation, and indeed on the constant interplay and contest between both these processes, I would suggest that the authors’ claims about the ‘decisive’ contribution of platform algorithms to public opinion formation processes must be tempered – arguably, individual and collective user agency is just as decisive for these processes and their results.
But if this is the case, then how far can and should we push the claim made by the authors that as the product of these processes ‘a new form of public opinion arises’ (p. 5)? It is, of course, undeniable that platform algorithms have now gained significant influence on news and information flows, and that that influence intertwines and competes with that of news organisations as gatekeepers, of online and offline opinion leaders as gatewatchers and news curators, of motivated individuals and groups as activists and agitators, and of the great mass of ordinary users as deliberately or stochastically sharing the news and thereby crowdsourcing certain information to greater prominence (cf. Meraz and Papacharissi, 2013). Yet does the emergence of algorithms as a newly powerful factor in this field of competing forces constitute a genuinely epochal change, or merely a rebalancing of power – reducing the influence of legacy media and other established actors, and increasing that of digital media stakeholders? Again, this should not be misunderstood as denying the validity of many of the observations made in this article: Platform algorithms are powerful and deserve more intense critical scrutiny – but the question is whether they facilitate the emergence of a genuinely new form of public opinion, or merely (but still importantly) affect the operation of established processes of public opinion formation in new ways.
Such caution is warranted also because – as the authors note (pp. 10f.) – great swathes of the population barely care about the news at all, and assume that any genuinely important news will find them eventually, through the combined efforts of algorithmic and user-driven gatewatching and news sharing (Gil de Zúñiga et al., 2017). Although this might appear to lend further weight to the idea that platform algorithms will decisively affect how these uninterested users form their individual and collective opinions (since serendipitous encounters with news content on social media are their only source of information), the opposite argument can also be made: Their very lack of interest in the news is also likely to prevent them from forming strong opinions (since they simply do not care enough to become meaningfully engaged). Algorithmic curation that privileges one side of an issue over the other may be lost on these users – and indeed, the more prominent way of catering to the tastes of this uninterested majority of social media users appears to be Meta's approach of reducing the visibility of any news content on its platforms (Australian Financial Review, 2024). This could be seen as constituting an algorithmic retardation of public opinion formation, perhaps.
Some may see this critique as splitting hairs: it is centrally about the terminology used in the present article – ‘algorithmic public opinion’ vs. ‘algorithmically informed public opinion formation’; ‘gatekeeping’ vs. ‘gatewatching’; ‘a new form of public opinion’ vs. ‘a new influence on public opinion formation’. I do not contest the authors’ emphasis on the importance of studying the impact of algorithmic systems on news and information flows in social media as such. But terminology matters, as it directs us towards a specific conceptualisation of the phenomena it describes, and encourages a particular operationalisation of these concepts in subsequent empirical research. It is for this reason that I urge us to cautiously evaluate any claims of a fundamental state change that the term ‘algorithmic public opinion’ would imply.
Acknowledgment
This work was supported by the Australian Research Council through the Australian Laureate Fellowship project Determining the Drivers and Dynamics of Partisanship and Polarisation in Online Public Debate, and the ARC Centre of Excellence for Automated Decision-Making and Society.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Australian Research Council (grant numbers: FL210100051, CE200100005).
