Abstract
Social media companies and their owners offer these tools to control epistemic frameworks across different communities and networks. We must assume that they use them for their own benefit. This means that we need to somehow reframe ‘The Algorithm’ from being a free-floating, data- and profit-driven, but otherwise inert agent, into a tool which is used by its masters and their clients to control our symbolic spaces. The interplay, in contrast to what Gandini, Keeling, and Reviglio are saying, is not between the ‘algorithmic systems and users’, but between those who design, operate, and use these algorithms, and those who are controlled by them.
In their article, Gandini et al. (2025) capture and give a name to the process and product of public opinion formation as mediated by algorithmic social media systems. They list the direct and indirect ways such systems gatekeep access to information and describe some basic characteristics of the audiences thus construed. They invent a new name for such publics: ‘Algorithmic Public Opinion’.
Gandini, Keeling, and Reviglio know a lot about algorithmic systems, how they work, and what kind of impact they may have on various domains of society. Their research has been trailblazing in giving new, highly informative insights into the intricacies of the platformization of identities (Rama et al., 2022), subjectivity (Gandini et al., 2023), work (Gandini and Garavaglia, 2026), collaboration (Arcidiacono et al., 2018), and information dissemination and discovery (Kermer et al., 2024; Reviglio, 2023; Reviglio and Santoni, 2023). Most researchers eventually arrive at the point where they need a shorthand to name the object of their study. While I understand the sentiment, ‘algorithmic public opinion’ is not a fortunate choice, I’m afraid.
Research on data-driven, large-scale digital media services that rely on a hybrid of humans and software to organize the social, economic, intimate, political, and cultural interactions of millions of users has been plagued by oversimplifying, highly misleading labels invented by entrepreneurial researchers and parroted by everyone else. Indeed, I cannot agree more with Bogost (2015) (explicitly referenced in the article) that concepts like algorithms – and I’d like to add platforms, personalization, and artificial intelligence to his list – are ‘sloppy shorthand, slang terms’, which at best are inaccurate, and at worst focus hundreds of researchers’ attention on the wrong object. I’m afraid that if the act of naming proposed by Gandini et al. is successful, a similar fate would befall the term algorithmic public opinion. This is not because the phenomenon the authors point out is not relevant, but because it hinges its fate upon the term.
Algorithms, the term and object this article centres on, are deeply ingrained in, yet somehow floating above, both the sphere of public discourse and the social media companies that use them. Algorithms are conceptualized, in line with the literature, as data- and profit-driven artefacts, which organize the world around them by obeying their own internal logics. Their main goal is conceptualized as turning data into profits by way of maximizing user engagement. What they are seen to do is to connect and interact with users; this is where they are situated, this is their function. They are somehow both inscrutable (they are a black box) and also clearly understood (their goal is to maximize engagement). Their agency is both mysterious and algorithmic: impenetrable, yet calculable.
Yet algorithms, I’d argue, are not free agents of data- and rules-driven engagement optimization, but a complex hybrid of software systems, data, and the humans who design, develop, operate, and own them. They are a means of production in – among others – social media systems (Buissink, 2023; Gnisa, 2022). They are a tool that has no (semi-)autonomous, algorithmic agency. They may be interacting with users, but they are not serving them, at least not any more than a mechanical press is serving the worker. So, instead of focusing on algorithms, their affordances and stand-alone agency, we need to look at the goals and interests of the companies which develop and operate them, and of those who own and lead those companies (Cohen, 2025), and ask the question: what purpose does this resource serve in the arsenal of social media companies? If algorithms are the mechanisms to control the production of interpersonal and social relations, information flows, and symbolic exchange, to what end are they used? In other words, instead of speaking about the term, the role, and the agency/affordances of ‘The Algorithm’, maybe we should centre the discussion on the role of X and Musk, and of Meta and Zuckerberg, in mediated communication and in public opinion formation. Otherwise, we may fall into the trap of discussing the conveyor belt rather than the role and impact of Ford and Fordism in the history of industrial production. This is not to say that the production line is not a foundational, transformative technology in the history of labour, but in and by itself, it tells us little about the social, economic, and political transformations which industrialization brought about. In the same vein, the true relevance of the algorithm lies in how it is used by its masters to control the mediated construction of our reality (Couldry and Hepp, 2017).
Public opinion is one particularly interesting segment of that reality. It is a fundamentally political object (Minar, 1960), typically defined against the background of, and in relation to, political power. In that context, the media, which facilitate public discourse, from coffee houses, watercoolers, public spaces, via radio and television, to the internet and social media, gain political relevance to the extent they can transmute the discourses they mediate/facilitate into politically relevant ones. The relationship of media and public opinion is in some sense the history of the relationship between political power and the power of media products and owners, from the press barons of past and present (Arsenault and Castells, 2008; Procter, 1998), via state-owned and/or state-aligned propaganda machineries in authoritarian regimes (Welch, 2004), to the tacit alignment between media and political powers where private media conglomerates are happy to manufacture consent for large-scale political projects (Herman and Chomsky, 2002). Online social media and communication services, and their owners, are part of that history. Not necessarily because they bring something new to the table, but exactly because, as owners of media and communications companies, they play the same role. Their capacity to give political relevance to the discourses they facilitate, amplify, or suppress is not a function of the algorithm (the tool) itself, but stems from the fact that they control, parametrize, fine-tune, and optimize the algorithmic systems according to their own political objectives and goals.
By now, it is clear that the owners and operators of algorithmic media and communication services not only wield unprecedented amounts of economic power, but also use that economic power as political power (Cohen, 2025). The now-infamous photo of all the US tech oligarchs at US President Trump's second inauguration on 20 January 2025 (Helmore, 2025) is the most obvious, but also a slightly misleading example. The political power of social media companies is present even if they don’t line up behind authoritarian political figures.
Gandini, Keeling, and Reviglio identify the profit-driven, commercial nature of these media companies as the main concern for public opinion formation, as it may restrict free speech and suppress diversity. What they leave unsaid is that in this business model, the circulation of political speech, and the answer to the question of ‘Who gets heard?’, is inherently subjected to the interests of economic powers. Social media don’t have newsrooms separated from the ad departments. They are all ad department. The product they offer is not a bundle of news produced along professional journalistic standards and ads written by marketers; it is all targeted communication, some of which has not yet been monetized. The distinction between organic (i.e. produced by ‘The Algorithm’) and paid audiences is somewhat illusory. These companies have a clear and well-documented incentive to monetize access to audiences, at the expense of organic reach (Barron, 2020; Duke, 2025).1 The constant changes in the otherwise opaque algorithmic rules force senders to treat organic reach as a highly specialized marketing activity. This setup turns any speech that would even remotely qualify as political into a function of the economic power of the speaker. To put it differently, those who have more financial resources have a better chance of shaping public opinion. The same logic applies to access to data. Though some argue that the data-driven approach to public opinion research is misguided as it only produces speculative bubbles around what the public thinks (Csigó, 2016), data still drives this industry. Despite European efforts to ease access to social media data (van Drunen and Noroozian, 2024), data-driven insight into public opinion is the privilege of those who pay, while the rest of the public remains oblivious to the dynamics they are part of.
But the main issue, in my opinion, is not how the profit-driven business models of digital media companies shape public opinion. The focus on the commercially driven algorithm masks the fact that these algorithms are, first and foremost, tools in the hands of the media companies.
Lastly, the term algorithm is often packaged together with other concepts that may have passed their theoretical shelf life. Case in point is personalization, which seems now inseparable from the algorithmic approach to communication. In some sense, the assumption that all of our feeds look different seems to be warranted: our habitus, our life-worlds are indeed different, and why would that difference not be visible to algorithmic tools, and reflected in our algorithmically constructed feeds? Yet positing personalization as a goal, or at least as an inevitable end-product of algorithmically controlled media exposure, has less merit. In the digital sphere, mass production is still the fundamental logic; it is just called a meme, a blockbuster, or a populist candidate rather than a Model T. Fundamental economic logics still favour the production of one and the same product for millions over the production of a million different products tailored to individual preferences. This means that despite all the chatter about personalization, the algorithmic control of communication still needs to produce large homogeneous consumer blocks out of heterogeneous consumer masses. Algorithmic tools, in this sense, must still be optimized to manufacture consent, coordinate expectations, harmonize epistemologies, world-views, beliefs, and tastes. Personalization may be the way to respond to individual sensibilities and differences, but it is hard to imagine that a hyper-individualized consumer mass and a hyper-fragmented body politic would be aligned with the interests of the political–social media complex.
Beniger's (1986) book on the control revolution ended with the development of the scientific, conceptual, and institutional infrastructures of marketing, public relations, and public opinion research. He conceptualized these as the infrastructures to control market demand for mass-produced goods, including political candidates, parties, and/or ideas. Social media companies and other digital intermediaries are not an exception to this history, but the next chapter, yet to be fully written. They are in the position to control all critical swaths of public discourse, offering the highly customizable tools of algorithmic targeting to market and sell mass-produced ideas, wrapped in individual preferences and personalized prices.
Social media companies and their owners offer these tools to control epistemic frameworks across different communities and networks. We must assume that they use them first and foremost for their own benefit. This means that we need to somehow reframe ‘The Algorithm’ from being a free-floating, data- and profit-driven, but otherwise inert agent, into a tool which is used by its masters and their clients to control our symbolic spaces. The interplay, in contrast to what Gandini, Keeling, and Reviglio are saying, is not between the ‘algorithmic systems and users’, but between those who design, operate, and use these algorithms, and those who are controlled by them.
Footnotes
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Balazs Bodo received funding from the Trust in the Digital Society Research Priority Area Grant from the University of Amsterdam.
