Abstract
Do public controversies about AI matter? Can they really make a difference to how AI is taken up or imposed on society? This introduction to the Special Issue on Analysing Artificial Intelligence Controversies? discusses how the contributing articles and commentaries address this question through original research and critical reflection. We argue that AI presents a watershed moment for the analysis of public controversies about science and technology insofar as AI controversies today often serve as occasions for the demonstration of techno-scientific authority and the reassertion of hierarchies between expert and lay, which challenges the framing of controversy as a force of democratisation in Science & Technology Studies. We discuss how this technopolitical context compels us to investigate anew the relations between AI controversies and the situations – the underlying tensions, disputes and frictions in society – to which these controversies are relevant, or alleged to be relevant. We identify four different modalities in which AI controversies operate on social and political situations: (1) The controversy conceals the situation. (2) The controversy articulates the situation. (3) The situation articulates the controversy. (4) The controversy and the situation are mutually irrelevant. We draw on the contributions to the Special Issue to develop a typology of these four modalities and discuss how these contributions themselves address the wider challenge of controversy as a force of ‘authoritarianization’ by reworking the concepts and methods of Controversy Analysis and by critically investigating, evaluating and reflecting on AI controversies of the last 10 years.
This article is a part of special theme on Analysing Artificial Intelligence Controversies. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/collections/analysingartificialintelligencecontroversies
Introduction to the Special Issue on Analysing Artificial Intelligence Controversies? (Big Data & Society)
Do public controversies about artificial intelligence (AI) matter? Can they really make a difference to how AI is taken up in society? Expert and public debates about AI have been driven by metaphysical imaginaries of the coming supremacy of machine intelligence since the field's inception in the 1950s. As such, AI has always courted controversy and thrived on the idealisation of disruption and the radical transformation of society. In the wake of the recent boom in generative AI, however, the AI industry and AI scientists not only insist on AI's innate capacity to revolutionise science, the economy, and humanity itself; they equally dominate expert, media, and policy debates about the risks and harms that AI poses to society, culture and democracy. As we will discuss in this Special Issue introduction, long-standing assumptions in science and technology studies (STS) about public controversy as a driving force in the democratisation of science and innovation are put to a critical test in this context. Can controversies about AI in research, policy, and media really make a difference, or do they instead present pseudo-events: artificial interventions staged by powerful actors and designed to set the terms and occupy the channels of public debate?
In this introductory essay, we approach this question by centring the relation between controversies about AI and the situations – that is, the underlying tensions, fundamental disputes, and situated frictions in society (Barry, 2021; Clarke et al., 2016; Mannheim, 1936) – to which these controversies are relevant, or alleged to be relevant. In public debates as well as in scholarly work, it is often assumed that controversies articulate wider situations out there in the world. But this is not necessarily the case. When it comes to AI, industry discourse and technology enthusiasts increasingly occupy even the spaces of AI critique by sparking controversies on so-called ‘existential risks’, in ways that may conceal rather than elucidate the actual social and political situations in which we find ourselves, as AI is imposed on seemingly all domains and sectors of society. The controversy is not the situation.
This realisation takes us back to earlier approaches to controversy analysis, such as the work of Steven Lukes (1974) on the role of power and hegemony in public issue formation. It also requires us to reconsider the capacities of controversy analysis as a methodology for examining the complicated relations between controversies and situations. Does controversy analysis provide a way to surface these relations in our changed political context, and if so, how? With regard to AI, we argue that we need to look more closely at the different types of controversiality that are activated in public and expert debates about AI. For example, the controversiality of AI may be deployed in a spectacular mode, as a way to heighten its public visibility and general allure, to distract from the push for regulation and to conceal wider societal frictions. Or, by contrast, it may be mobilised to situate AI as a specific technoscientific object of concern and resistance for particular communities in society, as in the case of controversies about the use of facial recognition by the police in Berlin, London, Cardiff, Marseille and Paris. In these two cases, public controversy about AI clearly operates in very different ways on the social and political situations it pertains to. But how exactly, and how can we make sense of the very different modalities, capacities and consequences of AI controversy in and for society?
This overarching question connects the different contributions to the Special Issue on Analysing AI Controversies?, to which this article serves as the introduction. The Special Issue brings together social and cultural researchers of science, technology and media who engage with the challenges of contemporary AI by critically investigating, evaluating and reflecting on AI controversies of the last 10 years. 1 Research articles show how expert debates about neural networks, deep learning and large language models since the mid-2010s prepared the ground for the rise to prominence of an industry-driven discourse of tech boom and gloom across research, media and policy, but also sowed the seeds for its problematisation by critical actors across activism, policy, media, science and scholarship. Some of these contributions evaluate these latter efforts to reclaim techno-scientific controversy as a vector for democratisation in an era defined by the outsized power of big tech. They show how, during the last ten years or so, independent agencies intervened in the framing and governance of AI by shifting attention from controversies to frictions (Meunier et al., 2021): by focusing public attention on the disruptions, failures and harms caused by AI-based socio-technical systems in everyday social life, they attempted to unsettle the unquestioned authority bestowed on technoscience by ‘AI’. Other contributions, however, offer reflections that question the very capacity of AI controversies to shift the path of AI's seemingly inevitable take-over of the economy, government, science and society. Below we will pull together and explicate these main threads of inquiry, questioning and critique that connect the contributions.
AI as ‘watershed’ moment: Controversy as engine of authority
‘He pioneered AI, now he's warning the world. Godfather of AI Geoffrey Hinton breaks his silence on the deadly dangers of AI no one is prepared for’.
This formula can be considered symptomatic of the relations between science, technology and society in our current moment, for several reasons that are explored in this Special Issue on AI controversies. In the case of AI, it tends to be industry insiders, rather than outsiders – as used to be the norm for public debates about controversial technoscience, such as nuclear power and genetic technologies – who appear in news media to warn society about the dangers posed by the invention in question. Several years before industry ‘godfathers’ like Hinton decided to go public with their concerns, similar warnings had already been made by experts, activists, and citizens, but at the time these did not achieve the same level of ‘cut-through’ as the more recent warnings by the godfathers.
Another striking fact about recent AI controversies is the proliferation – alongside the general excitement, concern and warnings about new forms of machine learning that are now commonly referred to as ‘AI’ – of specific and situated controversies about AI applications in particular settings and social domains, such as the use of facial recognition by the police or automated scoring services in sectors like recruitment, which were shown to be biased and to discriminate against particular social groups (Fussey et al., 2021; Selbst et al., 2019). The 10-year period 2012–2022 was marked by extensive mobilisation by activists, experts, and citizens who produced public demonstrations of societal harms induced by the deployment of AI-based applications in specific contexts, from law enforcement to defence, insurance, recruitment and medical diagnostics. Besides this, there were also careful expert discussions about the lack of provisions for trustworthiness or explainability in AI, and about the impacts of AI on jobs and the future of work, as well as notable scientific disagreements about the revolutionary capacities and limits of neural-network-based AI (Cardon et al., 2018) and about the difference between automation and AI – between ‘stochastic parrots’ (Bender et al., 2021) and machines capable of meaning (Bunz, 2019). But these efforts and interventions seemed to be trivialised and at risk of being sidelined by the deus ex machina of godfathers labelling AI the source of extinction-level danger.
Notwithstanding – or precisely because of? – the hyperbole involved in the labelling of AI as a potential cause of civilisational collapse, an odd sense of inconsequentiality has surrounded the hyped pronouncements about the catastrophic risks posed by AI. Even as such sensational warnings did the rounds of media circuits, most governments decided to adopt a ‘pro-AI’ agenda, often grounded in geopolitical innovation arguments about an AI arms race (Nguyen and Hekman, 2022). Two countries in which AI industries are based, the United States and the United Kingdom, are currently without dedicated regulatory frameworks for AI. In these countries, public warnings by industry insiders go hand-in-hand with efforts by other industry insiders, or sometimes the same ones, to create a moratorium on AI regulation (Rogers and Morris, 2025). Apparently, it is not a very big step from the call for a moratorium on AI, made by industry and scientific experts in 2023 (Future of Life, 2023), to the call for a moratorium on meaningful AI governance. More to the point, a probably intentional consequence of such inversions in public discourse on AI risk and harm is that they diminish the overall credibility of this type of discourse, or even decouple it from social reality. It can feel as if public controversies about science and technology are being devoured from the inside out through the assertion, mobilisation and escalation of AI's controversiality.
The above and other features of the AI controversy landscape are examined in detail in the research articles and commentaries in this Special Issue. Several of the contributions address the striking contrast between the contemporary state of AI controversy and prominent conceptions of the role of public controversy about science and technology in democracy that were formulated at the start of this century (Dandurand et al., 2023; Marres et al., 2024; Sloane, 2024; all in this issue). In this earlier period, controversies about complex issues like climate change, nuclear waste and genetically modified foods were identified as key sites and vectors for the democratization of governance in contemporary societies (Latour, 2010; Whatmore, 2009). Taking their cue from these theories, influential figures in government and research at the time adopted this framing of public debates about technoscientific issues as an opportunity to finally overcome the great divides between science and society, and put innovation at the centre of democracy (for a discussion see Callon et al., 2011; Stilgoe et al., 2014).
What has happened? It is tempting to conclude that ‘AI’ is the shore on which early 21st century visions of technological democracy have come to die. Far from serving as ‘invitations to the table’, today's pronouncements on the controversiality of AI by leading scientists and engineers seem designed to repatriate authority to tech industry allies and to close down the space for inclusive debate (Phan et al., 2022; see also Stirling, 2008). Public assertions of AI's controversiality such as those by Hinton mentioned above do not translate into broad and diverse – multiple rather than binary – disagreements between experts, advocates and affected citizens. They do not on the whole enable the identification of common points of contention and shared points of reference across diverse societal groupings. Rather, they supplant the critical and situated proximity with technologies (again multiple) in everyday life with distant, broad-brush endorsements of the epochal significance of general ‘AI’. Instead of activating a diversifying dynamic, which enables different voices to be heard and compels institutional authorities to expand the frame on what and who matters to the governance of science and innovation, many recent AI controversies have had the opposite effect: they first and foremost provide demonstrations of technoscientific power, as mediatised assertions of technological risk and danger by AI's proponents activate speculative logics of valuation – in which, for example, a track record in the tech industry can be converted into public relevance (McGoey, 2019) – with the effect of consolidating hierarchies between insiders and outsiders. 3 Here, the affirmation of controversiality – with its connotations of disruption – paradoxically enables the consolidation of the ‘thingness’ of AI (Suchman, 2023): acts of objectification designed to render AI unquestionable as a social reality.
AI controversies have thus become aligned with a wider dynamic in which the signifier ‘AI’ acts as a force in the dismantling of democracy (Neff, 2024), and the replacement of an inclusive knowledge society by an unequal innovation economy. However, as indicated above, it is crucial that we recognise that this is not the whole picture. While generalist pronouncements regarding existential risk and harm may seem to overshadow, subsume or even suck the life out of situated controversies about AI that were and are very much happening in a multitude of different social settings, media, online communities and knowledge spheres, it is equally the case that the generalist pronouncements occur in the wake of and alongside specific controversies (see on this point, Poletti et al., ms). As various contributions to this Special Issue suggest (Gourlet et al., 2024; Liebig et al., 2024; Marres et al., 2024; Munk et al., 2024), while it did not dominate media and government agendas at the time, the period 2016–2021 was a formative one for public controversy about AI-based science and technology, as during this period key applications, normative frames and socio-technical issues were first defined and brought into relation through research, reporting and mobilisation by activists, journalists, lawyers, policy makers, artists, citizens as well as scholars and scientists. It very much remains necessary, then, to affirm the multiplicity of AI controversies, and to distinguish between different forms of controversiality that in recent years have arisen from and been articulated in relation to AI.
The controversy is not the situation
What are the consequences of the observations above for how we go about the analysis of contemporary controversies about AI and related issues at the interface of science, technology and society? For it was not only the phenomenon of public controversy that played such an important role in envisioning the knowledge society and technological democracy in previous decades; it equally informed the development of methodologies for the study of science and innovation in society. Interest in the expansion of narrow expert disagreements into broad and inclusive public debate about technoscience – in different places, with more diverse actors – for example, was operationalised in strategies of network mapping, which focused on tracing the formation of new relations among differently positioned actors and how this enabled the reframing of issues (Marres, 2015; Venturini and Munk, 2021). However, such an empiricist approach can seem naive today. Methodologically speaking, in the case of AI merely ‘following’ the actors or the issues is likely to draw the analyst into publicity bubbles, as ‘flooding the zone’ of news media with spectacular assertions of technoscience's controversiality is an increasingly dominant strategy in this and adjacent topic areas, as discussed above. In this context, to neutrally document the prevalence of certain controversy terms in the media is to risk merely reproducing hierarchies between expert and lay understandings of AI in our studies.
However, while this changed political situation of controversy about technoscience is challenging on several fronts, many of the contributions to this Special Issue find that it does not render the concepts and methods of controversy analysis useless. Indeed, with some adjustments, controversy analysis can be deployed to recover the multiplicity of AI controversies. Thus, several contributions attempt to cut through the ‘tsunami’ of promotional AI publicity (Roberge and Castelle, 2020) by combining established methods of controversy analysis such as data mapping with methods borrowed from related fields such as design research. Several turn to design-based methodologies of elicitation, taking up creative and participatory techniques, such as prototyping and annotation, as a way to render AI controversies explorable and evaluable from the specific, minoritarian standpoints of users, professionals, scientists and activists (Gourlet et al., 2024; Marres et al., 2024). In doing so, these contributions reconnect with older, critical strands in Science & Technology Studies (STS), such as the sociology of knowledge formulated by Karl Mannheim in his classic work Ideology and Utopia (1936). In this book, which can be regarded as the founding text of the social study of science, Mannheim formulated what he called a ‘political situationalism’, which aims to understand how public disagreements about specific knowledge propositions – that is, controversy – provide expressions of much deeper and broader, underlying societal conflicts and tensions between differently positioned social groups.
As several contributions in this Special Issue emphasise (Gourlet et al., 2024; Munk et al., 2024), the controversy is not the situation. Controversies about science and technology, particularly those reported in news media, cannot automatically be equated with the wider political situation at hand: the fundamental disputes, disagreements, and frictions in society for which debates about ‘AI’ may turn out to provide only a trigger point or lightning rod (Barry, 2021). It is an empirical question how controversy and situation relate to each other, and it is not a given that a controversy articulates the situation. Indeed, this for us is a key proposition for the social study of AI controversies today. The relation between a given knowledge proposition pertaining to AI, regarding, for example, artificial general intelligence or the revolutionary capacities of AI-based predictive medical diagnostics, and the wider situation to which this proposition is relevant, is not fixed, but can take many different forms. The controversy can serve to explicate the situation, or indeed, it can serve to disarticulate it, or to disavow alternative framings. Assertions of controversiality can serve to distract, paralyse, freeze out (Dandurand et al., 2023), or conversely to problematise or to render explorable a situation (Marres et al., 2024). The relations between controversies and situations are manifold, and may involve operations of explicitation, implicitation, disavowal, disarticulation, problematisation, distraction or paralysis, among others.
We discern the following four possible relations between controversy and situation: (1) The controversy conceals the situation. (2) The controversy articulates the situation. (3) The situation articulates the controversy. (4) The controversy and the situation are mutually irrelevant. The AI controversy analyses assembled in this Special Issue discuss empirical instances of each of these different configurations of the relation between controversy and situation. The Twitter cropping controversy analysed by Shaffer Shane (2023) offers an instance of (2), as the specific controversy about algorithmic cropping on Twitter helped to explicate and demonstrate an underlying – diffuse, more general – political situation: algorithmic systems reproduce and amplify the structural phenomenon of racism. A case of (3) is put forward by Munk et al. (2024), when they discuss scientific papers on deepfakes. The problematic situation of deepfakes confers societal relevance on AI controversies, with AI as the technological dispositif that significantly aggravates the pre-existing problematic situation of mis- and disinformation, while at the same time being put forward as a solution to the problem, primarily in the form of technologies of detection. A potential case of (4) was identified during the Shaping AI workshop discussed in Marres et al. (2024). Here the public controversy about racial biases in the scoring software COMPAS that was used in U.S. courts was identified as a distraction from endemic forms of racism as an enduring societal phenomenon, in the form of racialised inequality inscribed in the U.S. prison system. A case of (1) is identified by Sloane (2024), when she draws attention to the fact that public debates about the need for ‘participatory AI’ disavow the actual situation of AI development.
As she points out, the appeal to such normative visions can easily be used to conceal the fact that popular generative AI applications are already profoundly participatory, as their development relies on training data extracted from user generated content, and on beta-testing in the form of user-led prompting.
Importantly, to inquire into the relation between controversy and situation enables a critical evaluation of how controversy operates upon wider social and political relations. But it can also help to shed light on a felt concern with the contrived nature, ‘fakeness’ or arte-factuality of public controversies about science and technology themselves. Indeed, this is why we chose ‘Artificial Intelligence Controversies?’ – with a question mark – as the working title for this Special Issue. During our conversations about AI controversy over the past years, interlocutors frequently reported a sense that contemporary so-called AI controversies were ultimately not really about AI, but rather involved attempts to intervene in wider political situations through or with AI-like technologies in various ways. This can lead us to further multiply the notion of AI controversiality.
We can, for example, distinguish between AI controversiality in situations where some machine learning technique is the cause of concern and those situations where such techniques are mobilised in relation to an originally non-AI concern. In the latter case, AI is often implicated by being proposed as the solution to a problem that does not originate in AI but still constitutes a situation. We see this in health care, for example, where financial and administrative pressures to make treatment and diagnosis more efficient have enabled the adoption of various predictive systems in beta, or in the green transition, where the distribution of power from unstable renewable sources to the grid is today widely framed as requiring machine learning. In these cases, AI technologies, with all their associated issues of unreliability, bias, and explainability, become embroiled in much wider political situations where the question is really something else: how to sustain the health care system, or mitigate climate change, under conditions of financial-market-induced drives to reduce public sector spending?
We can also distinguish between situations where the agency of AI is actually observed and those where it is merely imagined or simply declared. Much of controversy analysis in STS, controversy mapping included, has historically begun from the precept that the fruitful situations to study are those in which actors have something concrete enough at stake to challenge the status quo, so that established procedures can no longer be relied on and the black boxes of science and technology are temporarily unboxed (and the hierarchical playing field between lay and expert therefore temporarily levelled). But many AI controversies are not concrete in this way. Here, there is no ‘stuff’ of politics, as Bruce Braun and Sarah Whatmore (2010) call it. In contrast to situations where public engagement happens when the socio-technical agency of AI and its consequences become observable in practice (generative AI in education, predictive algorithms in banking or insurance, racial bias in image recognition systems, the list is long), these are occasions in which future consequences are imagined, often in quite speculative ways, by scientific and industry experts, and concern is thus produced without a present situation that invites public mobilisation.
This further challenges long-held assumptions in STS about the productive role of controversy in techno-scientific democracy. This challenge extends to situations where regulatory frameworks co-produce concern almost by declaration. While AI-related interventions are creating real havoc in society in direct and indirect ways, not least by rendering precarious jobs more precarious in sectors from logistics to translation, and by causing environmental degradation, as in the case of the construction of the massive data centres required to provide the computing power needed for AI's development, there seems to be a growing preference in government and industry to focus policy engagements on speculative future scenarios in which AI becomes problematic by definition, and the inquiry into how and why by necessity focuses on hypotheticals and can only unfold outside the realm of lived experience.
Re-working controversy analysis for a changed political situation
The contributions to this Special Issue address this situation by reworking concepts, methods and strategies of controversy analysis to grasp and evaluate the manifold relations between AI controversies and their implicated situations in delineated empirical areas. Some of these articles emerged out of a major international collaborative project, ‘Shaping 21st Century AI: Controversies and Closure in Media, Policy, and Research’. 4 This project was designed to investigate how AI as a socio-technical phenomenon was taken up and imposed on contemporary societies during the formative decade from 2012 to 2021, and the role of controversies in shaping this process, in four countries: Germany, France, the United Kingdom and Canada. 5 Several contributions report on empirical research that was conducted as part of this project: Dandurand et al. (2023) on media controversies in Canada, Liebig et al. (2024) on AI policy-making in Germany, AI research as seen from the United Kingdom (Marres et al., 2024), and an experiment in participatory inquiry into AI in France (Gourlet et al., 2024). Beyond the project, other research articles and commentaries in this issue address further relevant sites of AI controversy, such as AI science as inscribed in scientific databases and an experiment in decolonising AI in Chile. While the empirical settings are thus diverse, the contributions align in articulating several common themes in the wake of the watershed moment of AI controversy: the critical exploration of the deployment of AI's controversiality to strengthen the state's and industry's hold over public discourse; creative deployments of situated forms of participation to recover controversy's redistributive potential; and reflexive exploration of core concepts at the intersection of science, technology and society, such as participation and objectivity.
Dandurand et al. (2023) put forward the notion of the ‘freezing out’ of controversiality as a key political effect of AI media reporting. Focusing on legacy media, they show how controversiality becomes disarticulated as news reporting about AI is put in the service of the promotion of a national innovation ecosystem. Taking up Callon's (1998) notion of ‘hot’ and ‘cold’ controversy, they contrast the ‘hot’ situations of the early 2000s, when techno-scientific controversy proliferated, with our current cold situation, in which strategically positioned actors align their efforts to diffuse potential societal conflict in relation to AI. This cooling down is a tenuous accomplishment. As they put it, ‘the results are as thin as the ice flows on the St Lawrence River today in our increasingly hot world’ (Dandurand et al., 2023).
Liebig et al.'s (2024) study of AI policy in Germany demonstrates a similar logic of de-escalation, in which the controversiality of AI is converted into a promotional drive to configure a national innovation stakeholder system. They suggest that the function and relevance of the policy sphere is precisely to operate this conversion: societal concerns are relegated to secondary – auxiliary – concerns within this logic, whereby addressing themes of inequality, discrimination or political economy serves the instrumental purpose of strengthening the rationale for ecosystem creation. Contrary to the observation of Dandurand et al. (2023) in Canada, however, in Germany more familiar logics of stabilisation and routinisation – i.e. de-controversialisation – seem to be at work. As Liebig et al. (2024) posit: ‘German policy is evading controversies by normalising artificial intelligence both with regard to taking artificial intelligence integration in all sectors of society for granted as a policy objective, as well as by accommodating artificial intelligence issues in the routines and institutions of German policy’.
Gourlet et al. (2024) demonstrate how the ‘sensationalisation’ of AI controversy provides an opportunity for the re-grounding of the study of AI controversiality, by exploring what they call a ‘situated problem space’. Activating a distinctive interdisciplinary strand in controversy mapping, they draw on work that combines STS and design research in order to re-situate controversy as a socio-material force of articulation. Outlining an ‘inventive’ strategy (Lury and Wakeford, 2012) of participatory inquiry into AI, they put forward the notion of nuanced, hesitant ‘soucis’ (‘worries’) formulated by implicated professionals and affected citizens, as providing an invaluable source for re-situating AI controversiality and its study. In so doing, the authors not only identify a way to redeploy participatory research as a pathway towards the ‘problematisation’ (Callon, 1980) of AI, they also contribute to re-figuring agendas within controversy analysis, where digital methods may contribute to developing situated approaches to the infrastructuring of AI's publics (Le Dantec and DiSalvo, 2013).
Marres et al. (2024) advocate a similar approach of ‘inventive’ controversy mapping. Moving beyond the descriptive approaches to controversy analysis advocated by Latour and colleagues in the early 2000s, they outline a creative strategy of controversy elicitation: the active selection of controversies for further analysis based on their capacity to problematise AI across the science/non-science divide. Like Gourlet et al. (2024), Marres et al. identify visual and material approaches to participatory research developed in design research as critical to this task. Using a call-and-response methodology, supported by social media analysis, they identify and evaluate relevant AI controversies with an extended community of experts, and trace a distinctive strategy for the problematisation of AI championed by tech and society activists: the coupling of situated AI frictions (e.g. automated vehicle crashes) with technical propositions (e.g. inscribed biases in machine vision) and normative concepts (‘data justice’; ‘corporate power’). Based on this analysis, they argue that AI presents a watershed moment for public controversy about science and technology because of the invention of a generalising strategy of problematisation by civil society actors: by connecting specific concerns and technical issues with broad, normative concepts, these actors establish the significance of AI to technological democracy as a ‘super-controversy’.
Munk et al. (2024) investigate the role of the scientific literature in the ‘issuefication’ of AI, highlighting the manifold ways in which issue formation operates on AI situations reported in the scientific literature. Reporting on a semantic analysis of a corpus of more than a million scientific publications about AI, algorithms and machine learning to investigate AI-problem couplings, they trace ‘how AI matters to a broader range of problems, and therefore also implicates a bigger hinterland of non-AI issues’ (Munk et al., 2024). The study also finds that AI and related technologies are typically not staged as controversial in the scientific literature. Querying the literature for known AI controversies around, for example, misinformation, racial bias or surveillance, they find that machine learning is predominantly configured as a solution, and mostly to non-AI problems. In this way AI is attributed agency in relation to broad societal phenomena, such as the climate crisis, health care, urban mobility, or cyber security. In contrast to Gourlet et al. (2024) and Marres et al. (2024), this contribution advocates a more symmetrical take on the objective of re-situating AI controversy. For Munk et al. (2024), the objective instead is to attend equally to situations in which ‘AI is rather uncontroversial, yet important to society and our lives’.
Sloane (2024) reminds us that AI is always already participatory, and that this throws up new challenges for the social study of public controversy about science and technology. It invites a broader examination of the modes and discursive contexts of participation itself. For one, the current default mode of participation in AI – the user-generated content and data that make AI possible in the first place – is inherently extractive, which undermines rather than enhances the achievement of participatory goals like shared decision-making. In addition, current participation is cemented through dominant AI narratives that position AI as inevitable and do ‘not make room for diversity, multiplicity, unpredictability, and everything that “participation” might bring to the table’ (Sloane, 2024). For Sloane, the problem with controversies about AI and participation is that they often draw attention to problems only after they occur, circumscribing the scope of problematisation to existing AI systems while all too rarely extending it to their ‘bureaucratic’ purposes and designs – the problematisation of which, however, is precisely what is necessary for participation in AI to realise aspirations of inclusion and democracy.
Suchman (2023) urges scholars to problematise the category of AI itself lest they participate in stabilising essentialist definitions assumed and propagated by its proponents. By accepting the very notion that ‘AI’ presents a given object of controversy, scholars become complicit in the inscription of AI as a self-evident, unquestioned technoscientific phenomenon, ‘closing debate regarding its ontological status and the bases for its agency’ (Suchman, 2023), and leaving the definition of reality and decisions about it to those who are bound to benefit from the presumption of the objective existence of AI and the ‘strategic vagueness’ it enables. To dismantle this ‘uncontroversial thingness’ of AI, Suchman argues, critical scholarship should adopt the intellectually and methodologically ambitious task of interrogating its very genealogies, materialities, and political epistemologies. For only when the category of AI is itself exposed as not inherently revolutionary, magical or inevitable does a public understanding of the related technological, economic and social implications become possible.
Shaffer Shane (2023) shows how particular AI incidents can spark controversies that surface and articulate broader societal issues. His account of a public controversy about an image cropping algorithm on social media identifies ‘networked reactions’ as a distinctive mode of participation in technological democracy. Drawing on Sara Ahmed's (2017) concept of troublemaking, he shows how ‘networked trouble’ involves demonstrating the political agency of invisible algorithmic structures, creating shared orientations, and showcasing the production of societal harm that extends beyond its incidental occurrence. By enacting controversy in this way, networked publics can compel technology companies to (re-)act. As such, AI incidents enable a re-constructive technological politics, as ‘AI may be an ally in its own contestation or refusal, as actors are increasingly coordinating with harmful algorithms outside of institutionally managed exercises’ (Shaffer Shane, 2023).
Tironi and Albornoz (2025) reflect on an experimental workshop designed to elicit non-Western imaginaries of AI, but which did not achieve this goal. Taking up the notion of epistemic failure, they revisit this event to reflexively explore the relation between participation and problematisation in and of technology from a decolonial perspective. Decolonising AI calls for a deconstruction of colonial assumptions, be they embedded in AI technologies, in design, or in social research practices themselves. This understanding brings into view new problems with participation in technological innovation processes, as advocated by some in STS and related fields: participation in processes that merely validate previously defined problems risks reproducing hegemonic AI imaginaries. Integrating pluralistic and local knowledges in collective problem-making instead makes it possible to generate productive frictions ‘that open up alternative perspectives on the subject’ (Tironi and Albornoz, 2025).
Conclusion: Special Issue contribution
While the change in ‘political atmosphere’ in science/society interactions in the wake of the AI tsunami has been dramatic, the reworking of conceptual approaches and methodological strategies in controversy analysis has proceeded steadily. As the new AI – including its controversiality – is being used as an instrument for the (re-)assertion of institutional forms of authority in science, industry, and state, and for the (re-)inscription of societal hierarchies between experts and laypeople, it clearly no longer makes sense today to assume that public controversy about science and technology provides a site for knowledge democracy: the public staging of disagreement about techno-scientific issues does not, in itself, advance democratisation, and may equally serve the demonstration of techno-scientific authority.
Acknowledgements
We would like to thank the contributors to this Special Issue and all the members of the Shaping AI research team for inspiring conversations and exchanges, as well as the participants in the triple session on AI Controversies that we co-organised with Torben Elgaard Jensen at the EASST Conference in Madrid in July 2022 and the ‘Shifting AI Controversies’ Conference that took place at the Berlin Social Science Center (WZB) in January 2024.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the UK's Economic and Social Research Council (Grant No. ES/V013599/1).
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
