Abstract
Popular and scholarly critiques of the epistemic power of contemporary generative text AI tools raise interesting questions regarding Gandini et al.'s heuristic of algorithmic public opinion. This commentary therefore asks: what is the relationship between algorithmic public opinion and AI-generated text? Given the staggeringly fast uptake of generative AI tools, will social media platforms remain key players in the formation and circulation of algorithmic public opinion, or does generative AI necessitate critical attention to a new kind of public opinion – one that is shaped, constituted and generated by AI?
In their heuristic of public opinion formation on social media, Gandini et al. (2025, this issue) join a wealth of scholars who have deployed the word ‘algorithmic’ as a kind of conceptual prefix to longstanding subjects of social critique and public debate. Digital media scholars as well as cultural and critical theorists have coupled the term ‘algorithmic’ with concepts such as ‘thinking’ (Beer, 2022), ‘identity’ (Cheney-Lippold, 2017; Kant, 2020), ‘condition’ (Berry, 2025) and ‘power’ (Bucher, 2018), to name a few. Such critical configurations describe how adaptive and decision-making algorithms, deployed to make sense of datafied society and culture, present moments of force-relation that shape the interactions, conditions and states of being which lie at the heart of social and cultural critique. However, as Bucher (2025) notes, since the launch of OpenAI's ChatGPT in November 2022, scholars and the public alike are now embedded in an ‘era of extended AI hype’ (2025: 81). In this age of generative AI, the term ‘algorithmic’ is ‘moving into the background’ whilst ‘popular and academic discourse seems replete with mentions of “AI”’ (Bucher, 2025: 81).
As Bucher argues, in some ways this discursive shift masks the fact that these apparently new and transformative AI tools ‘at their “core” are still fundamentally driven by algorithms’ (2025: 81). However, generative AI comes with an affordance that other algorithmic actors lack: as the name suggests, these systems literally generate user-readable, popularly shareable texts in ways that other adaptive and decision-making algorithms cannot. This affordance raises interesting questions regarding Gandini et al.'s (2025, this issue) analysis of algorithmic public opinion: what is the relationship between algorithmic public opinion and AI-generated text? Given the staggeringly fast uptake of generative AI tools, will social media platforms remain key players in the formation and circulation of algorithmic public opinion, or does generative AI necessitate critical attention to a new kind of public opinion – one that is shaped, constituted and generated by AI?
In some senses, generative AI systems function as ‘algorithmic gatekeepers’ in online information circulation and, in doing so, share similarities with other adaptive and decision-making algorithms that Gandini et al. (2025, this issue) assert play a pivotal role in contemporary online public opinion formation. Gandini et al. note that in social media-based public opinion formations, social media platforms themselves employ ‘direct’ forms of algorithmic gatekeeping that ‘actively prioritise, remove, reduce, label, notify, and manage content to shape its reach and visibility’ (2025, this issue). If such functionalities are applied to the algorithmic operations of generative AI platforms, direct algorithmic gatekeeping is similarly at work when a chatbot such as ChatGPT generates outputs relating to any kind of social problem or public debate. That is, generative text AI algorithmically labels, prioritises, reduces and manages the information it presents to users in much the same way that social media platforms gatekeep the information constitutive of algorithmic public opinion.
However, there are some distinctions between social media and generative text AI platforms that mark significant departures in how algorithmic power is made operational at the level of the user interface. As Fazi (2024), amongst others, points out, generative AI tools such as ChatGPT are built on large language models (LLMs) and are therefore fundamentally different in their operational ontologies when compared to other algorithmic processes that facilitate online knowledge production, such as those made possible through search engines or social media infrastructures. Despite these differences, ChatGPT is not only fast becoming a major competitor to Google Search (Marr, 2023), it is also changing the nature of online information circulation by concretising ‘zero-click’ searches – wherein a user's query is answered directly on the page, without the need to click through to any website – as an epistemic norm. This form of algorithmic gatekeeping fundamentally undermines the income generation models of news outlets and cultural producers who rely on click-throughs in order to sell advertising space on their websites. In regard to public opinion formation, then, the free (at the point of access) and public circulation of information – long recognised as a foundational component in shaping and informing public opinion – is fundamentally challenged by generative AI's ‘zero-click’ forms of knowledge production. Indeed, news outlets’ efforts to challenge this zero-click model are creating complex problems for algorithmic public opinion formation, as I explore below.
Another striking distinction between social media information circulation and AI-generated knowledge production is that in generative AI ecosystems, the distinction between news producer and gatekeeper theoretically collapses altogether: the act of ‘gatekeeping’ happens at the moment of the production of information itself. This is because AI-generated news and other cultural forms can be considered ‘synthetic media’ (Berry, 2025; Fazi, 2024; Meikle, 2023), which for Fazi (2024) are not simply contrived or ‘inauthentic’ but constitute a form of synthesis, wherein generative AI deploys unique human-computational relations that create wholly new cultural texts and products. Berry (2025) argues that synthetic media are currently flooding the flows of information capitalism, creating a cultural moment of ‘algorithmic inversion’ wherein established human-algorithmic relations are subverted and AI-generated cultural forms come to supersede and indeed reshape existing modes of ‘authentically’ human-generated content.
What does this moment of ‘algorithmic inversion’ mean for the constitution of public opinion? This depends on the significance of social media platforms in constituting and representing public opinion. Generative text AI platforms such as ChatGPT do not, under any agreed definition, constitute any kind of social media network or platform: for one thing, they do not allow users to connect to other users as with more classical definitions of social networks (boyd and Ellison, 2007). Nor do they facilitate the production or circulation of ‘user generated content’ between social subjects or actors. On generative AI interfaces such as ChatGPT's, there are no many-to-many or algorithmically gatekept interactions between users or accounts; there is only interaction between an individual user and AI output. In fact, under Fazi and Berry's definitions, the material outputs produced by LLMs are fundamentally synthetic – uniquely synthesised cultural forms constituted through AI-human relations. As such, AI-generated text or images cannot be considered ‘user generated’ by the public, even when they are created in response to a member of the public's prompts and then recirculated under the ownership of that individual. In this sense, the constitution of public opinion via generative AI is an oxymoron: it is not possible to formulate public opinion within a generative AI system, because there is no ‘public’ present, only algorithmic system, writer, gatekeeper.
However, taken within a wider political-economic context, the line between public and AI opinion becomes blurred. Tools such as ChatGPT, Google's Gemini and Meta's AI rely heavily on data scraped from Reddit, LinkedIn, Facebook and Instagram posts, among other social media platforms (McMahon, 2024; Niemeyer, 2025). In this way, generative AI outputs are very much informed by the same social media data that for Gandini et al. (2025, this issue) also constitute and shape algorithmic public opinion. Generative AI systems do not just repurpose social media data to generate new forms of knowledge – users are also increasingly employing generative AI tools to create texts and images that are then shared widely on social media platforms (Corsi et al., 2024; Pan, 2025). Generative AI therefore can, and indeed should, be considered capable of intervening in and even constituting algorithmic public opinion, both as ‘process’ (Gandini et al., 2025, this issue) in the form of training data and as ‘product’ (Gandini et al., 2025, this issue) in the form of texts and images shared on social media as user generated content.
In Gandini et al.'s analysis (2025, this issue), ‘traditional’ news outlets still have a part to play in algorithmic public opinion formation: though they are no longer the most powerful ‘direct’ gatekeepers in algorithmic public opinion formation (this title is reserved for social media algorithms), they continue to act as ‘second-level’ or ‘indirect’ algorithmic gatekeepers because they engage in ‘the strategic actions through which other actors, aside from algorithms, attempt to influence content circulation by way of algorithmic forms’ (2025, this issue). News outlets remain key players in algorithmic public opinion formation by maintaining widely subscribed-to public social media accounts, sharing news, posts and viral videos, as well as performing indirect gatekeeping processes such as ‘social media optimisation’ (Gandini et al., 2025, this issue) or ‘algorithmic gaming’ (Gandini et al., 2025, this issue) that maximise post visibility.
Conversely, news outlets’ roles in generative text AI systems present a strikingly different picture: whereas traditional gatekeeping news outlets have sought to maintain power in social media public opinion circulation, these same gatekeepers are intentionally writing themselves out of information production on the most popular generative text AI systems. News outlets such as the BBC, the Guardian, the New York Times, ABC and Al Jazeera have all reportedly attempted to block one or more popular generative AI web crawlers from scraping their content, in an effort to prevent their news reportage from informing generative text AI outputs (Maher, 2024). The question for me becomes: what happens to public opinion if news gatekeepers are written out of public opinion formation entirely? Under current generative AI models, information production becomes increasingly reliant on cheap social media and online data. By cheap I do not mean ‘unnewsworthy’ forms of knowledge such as celebrity gossip or ‘feel-good’ news stories – notions of ‘worthy’ news are inextricable from value judgements bound up in cultural capital and are therefore often unhelpful. I refer instead to an increasing reliance on LLMs to generate content based on: (a) unreliable, unstructured internet and social media data that carries a greater risk of output bias and ‘hallucinations’ (Gautam, 2025); (b) data sourced via crawling infrastructures that recirculate journalistic content without proper attribution (Kuai, 2024); and (c) data extracted from pirated text libraries (Reisner, 2025). I am increasingly concerned about the kinds of information that will be prioritised as ‘truth’ by generative AI systems if traditional public opinion gatekeepers continue to opt out of LLMs in ways that they haven’t opted out of social media.
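For readers unfamiliar with the opt-out mechanism referenced above: in practice, this blocking is commonly implemented through a site's robots.txt file, which declares which crawler user-agents may access its pages. The following is a minimal, hypothetical sketch – ‘GPTBot’ (OpenAI's crawler) and ‘CCBot’ (Common Crawl's crawler) are real user-agent strings, but the rules and domain shown are illustrative assumptions rather than the actual policy of any outlet named above.

```python
import urllib.robotparser

# Hypothetical robots.txt mirroring the opt-out pattern many news
# outlets have reportedly adopted: AI training crawlers are refused
# while conventional search crawlers remain permitted.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Check how each crawler is treated under these illustrative rules
# ("news.example" is a placeholder domain).
for agent in ("GPTBot", "CCBot", "Googlebot"):
    verdict = "allowed" if parser.can_fetch(agent, "https://news.example/") else "blocked"
    print(f"{agent}: {verdict}")
# Expected output:
#   GPTBot: blocked
#   CCBot: blocked
#   Googlebot: allowed
```

It is worth noting that robots.txt is a voluntary convention: it expresses a publisher's wishes but does not technically enforce them, which is partly why these opt-outs remain contested.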
I therefore join Gandini et al. in their concern that ‘political-economic forces’ (2025, this issue) – especially advertising – play a fundamental role in algorithmic public opinion formation, both on social media and within generative AI systems. At present, generative text tools such as ChatGPT are premised on a ‘zero-click’ model that threatens to fundamentally disrupt the established monetisation models that underpin both news production and public opinion formation. Given the commodification of news that advertising arguably creates, challenges to existing news monetisation models may be no bad thing. However, it remains to be seen how advertising will be integrated into ChatGPT and its competitors’ models, and given that most big tech companies are still struggling to make profits from generative AI, it seems naive to assume that advertising will not play a pivotal role in these systems’ monetisation strategies. Unless news creators, as well as social media creators, are acknowledged, valued and compensated for their data – exploited as it is by generative AI developers – it seems that cheap and pirated forms of knowledge production will come to supersede news journalism in the production of algorithmic public opinion – if not directly through generative AI gatekeeping, then indirectly through the presence of AI-generated content on social media platforms.
