Abstract
This article introduces the concept of the Undersphere – a networked community brought together via creative exchange – to highlight how the increasing proliferation of Generative AI poses risks not yet acknowledged by policymakers within emerging AI regulatory frameworks. Employing a single case study methodology – namely, exploring exchanges made on r/StableDiffusion, a prominent subreddit – it illustrates the conceptual parameters of the Undersphere, outlines the potential for creative harm within the GenAI space, and weighs these elements against the regulatory approach of the EU AI Act. It concludes that a risk management framework offering a more fluid approach to addressing risks, such as those found in governance frameworks aimed at mitigating climate change, would be better positioned to address insecurities emerging from the GenAI space.
Introduction
On 7 September 2022, artist, musician, and artificial intelligence (AI) researcher Steph Maj Swanson, known on X as @supercomposite, encountered a phenomenon she described as ‘horror’ while interacting with a Generative Artificial Intelligence (GenAI) text-to-image model. Utilising negative prompt weights – a technique designed to steer the model away from, rather than towards, the concepts described in a text prompt – Swanson evoked an unsettling image she named LOAB. The artist characterised LOAB as an ‘off-putting’ representation of ‘a devastated looking older woman with defined triangles of rosacea on her cheeks’ (Swanson, 2022). Throughout the creative process, LOAB continued to infiltrate the artist’s images, with or without explicit prompts, exhibiting progressively macabre qualities.
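For readers unfamiliar with the mechanics, negative prompting is exposed directly in today’s open-source tooling. The brief sketch below is a minimal illustration, assuming the Hugging Face diffusers library and a widely circulated Stable Diffusion checkpoint; the model identifier and prompts are illustrative assumptions only, as Swanson’s exact model and prompt weights remain undisclosed.

```python
# A minimal, illustrative sketch of negative prompting with an
# open-source text-to-image pipeline (Hugging Face diffusers).
# The checkpoint and prompts are assumptions; Swanson's setup is not public.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed, widely used checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="portrait of a woman, photorealistic",
    negative_prompt="smiling, bright colours",  # steers sampling away from these concepts
    guidance_scale=7.5,  # strength of prompt conditioning (positive and negative)
    num_inference_steps=30,
).images[0]
image.save("output.png")
```

The negative_prompt argument is the closest mainstream equivalent of the negatively weighted prompts Swanson describes: rather than guiding generation towards a concept, it pushes the sampling process away from it – precisely the kind of off-label steering that produced LOAB.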
Swanson’s invocation of LOAB catalysed a small community of GenAI users who endeavoured to replicate her results. Engaging on Reddit and Mastodon, two favoured exchange forums among AI creatives, users shared their approaches and built on existing attempts. Although they were unable to summon LOAB, their exchanges reveal two elements within the broader context of growing GenAI diffusion. First, these interactions illustrate the capacity for communities to forge around creative pursuits. Second, they underscore that such pursuits frequently transcend the intended applications of GenAI programs, presenting inherent risks. In this article, we conceptualise these creative collectives as the Undersphere. Distinct from both normative and counterpublic spheres, and evolving from our understanding of networked and refracted societies (Abidin, 2021), the Undersphere may be defined as a networked community of practice, forged through the innovative utilisation of digital technologies, and motivated by a culture of produsage – or ‘the collaborative and continuous building and extending of existing content in pursuit of further improvement’ (Bruns, 2007: 101). However, unlike outputs that solely aim to reimagine technologies, many outputs emerging from the Undersphere carry both intentional and unintentional societal ramifications, thereby giving rise to new and evolving risks. Specific to this article are risks pertaining to an important tenet of liberal democratic societies – namely, the rights of the individual – and how this fundamental principle is challenged by the proliferation of deepfake porn, which raises privacy issues and ethical concerns alongside psychological and professional harms. Attempts to invoke LOAB can be perceived as unsuccessful creative endeavours within the Undersphere; however, deviations from conventional GenAI applications may result in considerably more adverse consequences, particularly for individual rights. We argue that these unknown risks are best categorised as complex, highlighting a need to rethink approaches to mitigating insecurities beyond high- and low-risk regulatory frameworks. More specifically, governance frameworks developed to address complex and unforeseen risks – such as climate change frameworks – are better suited to the emerging GenAI insecurities forged within Undersphere communities.
To conceptualise the notion of creative Underspheres, this article first delineates the concept of the Undersphere in relation to established frameworks such as the Public Sphere (Habermas, 1991), Counterpublics (Fraser, 1990), Networked Publics (boyd, 2010), and Refracted Publics (Abidin, 2021). Following this, the article defines the potential risks associated with GenAI technologies by employing the lens of climate risk governance frameworks, which are designed to tackle the complexities of unknown and emerging risks. Analysing GenAI systems, and the unpredictable, emerging threats they may entail, as complex systems offers a more adaptive and applicable approach to addressing these risks. The article then critically examines the implications of GenAI-derived deepfake pornography and its misuse in Underspheric communities – via a case study exploration of r/StableDiffusion – to highlight the limitations of rigid regulatory frameworks, such as those outlined in the EU AI Act, in addressing the complex and evolving risks associated with AI technologies.
Conceptualising the Undersphere
The Undersphere and its interactions with both emergent and established technologies necessitate an initial discussion of the public sphere, which serves as its conceptual framework. In this context, the public sphere functions as a normative reference point, highlighting the characteristics of what the Undersphere is not. The concept of the public sphere was envisioned as an egalitarian domain of public discourse, characterised by the imperative that ‘everyone had to participate’ (Habermas, 1991: 37). It provided a platform for discussing contemporary issues and scrutinising governance, thereby enabling accountability of governmental authorities. Habermas (1991: 27) highlights that the convergence of private citizens as a public to engage in deliberation concerning matters of collective interest is essential for a well-functioning society. The defining parameters of the public sphere have often faced criticism for their conceptual ambiguity (Warner, 2021), idealistic tendencies (Fuchs, 2015), and exclusionary practices (Dahlberg, 2014). Nevertheless, the concept of the public sphere provides a directional framework for articulating the mechanisms that underpin a healthy society. Specifically, the public sphere is designed to unite individuals and furnish a forum for the airing of grievances, thereby facilitating the oversight of governing elites and maintaining societal functionality. In a parallel vein, Arendt (1998: 7) articulates the essence of the ‘public’ as a space forged through shared experiences or commonality: ‘To live together in the world … is between those who have it in common’ (Arendt, 1998: 7). Furthermore, this idea of the public can be construed as an interaction between governance and citizens who adhere to a normative code of ‘good manners’. From a Kantian perspective, Arendt further addresses this relationship concerning adherence to ‘norms of justice’ (Arendt, 1998: 14). If we conceive commonality to be the essence of the public sphere, the Undersphere – while unified by a collective aspiration for the appropriation and creative enhancement of technologies – stands in contrast to these parameters and does not represent a space characterised by homogeneity. Its constituents are not necessarily aligned in their efforts to promote societal betterment; many individual grievances remain undisclosed or relegated to a secondary status within such spheres – it is the task, rather than relational ties, that bonds members of the Undersphere. Moreover, the intended outcomes of their creative endeavours are not uniformly aligned with the overarching goal of societal improvement. Instead, Underspheric outcomes may be multifaceted, with some serving benign artistic or informative purposes, while others may pose risks and negatively impact individuals and societies.
In contrast, the formation of counterpublics, or subaltern counterpublics – as coined by feminist revisionist Nancy Fraser (1990) – describes groups and spaces for public discourse generated by those outside the normative public sphere: alternative spheres that contest ‘the exclusionary norms of the bourgeois public, elaborating styles of political behavior and alternative norms of public speech’ (Fraser, 1990: 61). Subaltern counterpublics function as a form of ‘regroupment and training grounds for agitation’ for those experiencing dominance and subordination by the ruling classes (p. 68) – for example, women, LGBTQIA+ communities, ‘plebeian’ men, and people of colour (Fraser, 1990: 64–67). Progressing Fraser’s interpretation, Asen and Brouwer (2001: 8) explore the ontology of the ‘counter’ in counterpublics and settle on a definition that includes all persons who articulate oppositional discourse. Applying queer theory, Renninger (2015: 2) positions counterpublics as circles of likeness forged by those who identify with, or are allied to, groups outside of normative culture. Across these accounts, counterpublics stand in direct opposition to the normative values of a public sphere that furnishes discursive platforms for some groups within democratic societies and not others; opposition to the normative public sphere is their defining element. With Underspheres, however – a concept that moves beyond both the normative public and the counterpublic sphere – opposition is not forged through a socio-political identity. Rather, it develops through the creative use of digital technologies without regard for their normative uses.
The idea of ‘community’ has long coexisted with technologies, and its operation as a ‘practice’ has foregrounded many technologically motivated movements. Rheingold (2000 [1994]) recalls the WELL as a community of counterpublic individuals formed around a technology. The WELL established how members communicated, the social norms of practice, and how the technologies enabled certain behaviours and communication practices. It also resembled a community of practice in that expertise was exchanged among its members, similar to the relationship between masters and apprentices. Lave and Wenger (1991: 22) highlight that pedagogical legitimation is often more important than the teaching itself – indeed, in a specialised occupation, the master sponsors the apprentice within the community. The knowledge generated from this relationship is collectively informed by the cultures surrounding it and not solely by the technology or community alone (Du Gay et al., 2013 [1997]). With these characteristics as a backdrop, the relationship between technology and a community of practice forms the basis of a networked public that integrates social media technologies or platform media (Papacharissi and de Fatima Oliveira, 2012). Bruns (2023) articulates how such community formations – networked publics, public spheres, platform publics, personal publics, and so on – move beyond groups of individuals communicating across a comprehensive communication infrastructure, a trajectory the Undersphere extends in the context of GenAI systems.
The Undersphere is grounded in the foundational theories of networked publics (boyd, 2010) and refracted publics (Abidin, 2021). boyd (2010) posits that networked publics are fundamentally transformed by networked technologies: ‘As such, they are simultaneously (1) the space constructed through networked technologies and (2) the imagined collective that emerges as a result of the intersection of people, technology, and practice’ (p. 39). Consequently, these networks are shaped by technological affordances, which condition user behaviours. However, Underspheres cannot be conceptualised merely as a byproduct of networked technologies. They have evolved over time through sustained engagement across technological platforms, via interactions deeply embedded within these digital contexts. Furthermore, while certain Underspheric outputs may generate ramifications in the physical realm, it is not evident that members of the Undersphere carry their physical realities or interpersonal relationships into their creative realms. Nor do they function as a space defined by the formation of friend lists or the construction of profiles and personas. The Undersphere distinguishes itself from mainstream, counter, and networked publics by focusing exclusively on creativity and the innovative manipulation of technology for personal expression.
Abidin (2021) revisits boyd’s foundational concept of networked publics, subsequently introducing refracted publics. Refracted publics emerge from the disruptive effects of digitised insecurities – exemplified by data breaches – and the pervasive threat of information warfare, including disinformation and the proliferation of fake news. Refracted publics, characterised by dimensions of transience, discoverability, decodability, and silosociality, enable communities of shared interests to navigate the complexities of visibility and invisibility. Members can strategically manoeuvre to ‘evade detection on a radar, register within inconspicuous pockets while still being partially observable, or present a distorted representation on the radar altogether’ (Abidin, 2021). In parallel, the tendency to prioritise product over presence in Underspheric realities diminishes the emphasis on discoverability, allowing segments of these networks to maintain a degree of visibility on the clear web and invisibility when needed. Furthermore, while platforms for exchange – such as various subreddits – are often familiar to specific groups, the individual contributors typically remain less significant within the collective. This dynamic yields a decentralised and leaderless assembly of creatives who engage primarily through the value of sharing and collaboration.
In examining the conceptual geographies of public, counter, networked, and refracted publics, the Undersphere emerges as a domain that exists beneath mainstream contexts, remaining largely indifferent to the concerns articulated by counterpublics. It is within networked societies that the Undersphere evolves and may occasionally find residence among refracted publics. The relationships formed among its members are primarily a consequence of their engagement in the pursuit of creative outputs. As such, the Undersphere functions as an apolitical undercurrent relative to its politicised counterparts. However, while such characteristics may suggest that the Undersphere lacks a political dimension, this assertion does not imply that the individuals constituting it are devoid of political agency. Rather, the Undersphere signifies a collective that eschews specific ideological or political affiliations. It is a space that possesses the capacity to harness digital technologies either for creative advancement or for more detrimental purposes. In this context, individuals may coalesce around creative endeavours, yet simultaneously embody values opposed to mainstream ideologies or counterpublics. Consequently, their engagement with and (re)appropriation of digital technologies can stem from a desire to inflict harm, establishing the Undersphere as an optimal environment for acquiring the skills necessary for such activities. While the Undersphere can be construed as an apolitical entity, it harbours the potential to engender political outcomes, influencing the integrity of well-functioning democracies.
Generative AI and defining risk
The scope of how GenAI will be used creatively, for social good or peril, remains undetermined. This ambiguity begins with a lack of consensus in defining AI (Friedrich et al., 2022: 824) or GenAI, which shapes how regulation is formed and what, in particular, falls within its remit. As Wang (2019) asserts, ‘Without a clear definition of the term, it is difficult for policymakers to assess what AI systems will be able to do in the near future and how the field may get there’ (p. 2). Currently, definitions range from machine learning systems with the capability to produce new content (Aydın and Karaarslan, 2023: 3), to a powerful technology popularised by large language models such as ChatGPT (Lim et al., 2023), to systems dependent on human intervention (Peres et al., 2023: 271). To avoid adding to the broader definitional debate, we draw on the work of Epstein et al. (2023: 3), who argue that defining GenAI systems as autonomous is misleading and that human intervention is required to engage with these systems; the notion of AI as ‘intelligent’ is thus superfluous. Instead, we consider AI technologies – and their GenAI subsets – as systems that mimic the production of intelligence and independent learning when prompted through human intervention. Further, we argue that much more work is needed to understand these systems, their affordances – which are necessarily contextual (Ronzhyn et al., 2023) – and, more generally, what occurs when humans and human systems interact with them (Epstein et al., 2023: 4). Relatedly, Paul (2023) argues that comprehending the possibilities and limitations of GenAI is crucial to developing effective policies aimed at regulating the technology. More specifically, it is essential to consider the fluidity of GenAI programs and their diverse applications once placed in the hands of human users.
However, despite the need for a deeper understanding of these technologies and their impacts, governments and governance bodies have moved towards regulation in a manner that could be described as swift, reactionary, and oftentimes ambiguous, with no real means of curtailing unforeseeable risk. Specifically, regulation often categorises AI and its usages in ways that do not distinguish between intentional and unintentional harms.
Risk, complexities and risk management frameworks
A relational theory of risk makes clear that it is not possible to assess risk objectively, as risk is negotiated in specific cultural and social contexts, which shape the relationship between the ‘risk object’ and the ‘object at risk’ (Boholm and Corvellec, 2011). In the case of GenAI, risk-based governance is challenged by an ‘indeterminate expansion of possible harms’ from these emerging technologies (Kusche, 2024: 2). Specifically, GenAI technologies lack both full ‘observability’, due to an inability to trace and understand how they work within the industrial and infrastructural ecosystem, and ‘inspectability’ (Ferrari et al., 2023). AI researchers are still investigating how to interpret and explain the behaviour of their GenAI models – that is, the ‘risk object’ – after they are released to the public. 1 This lack of transparency raises fundamental questions regarding the interpretability of AI-generated outputs and, therefore, the accountability of developers and providers. Beck (2009) theorised the different forms of ‘non-knowing’ that socio-technical systems present to risk assessors. For Beck (2009), non-knowing is not simply the absence of knowledge but a complex sociological phenomenon that is a product of modernisation itself. In this sense, assessors need to account for ‘provisional non-knowing’, which will ultimately be addressed through ongoing research and scientific advancement, as well as ‘organised’ and ‘intentional non-knowing’: developers of GenAI, for instance, are reluctant to disclose the specific training datasets employed in the development of their models. And if we choose not to place our trust in the capability of AI researchers to understand their own software, risk assessors are also left to contend with what Beck terms ‘manufactured non-knowledge’. This state is not merely transitory; rather, it is a fundamental aspect of the ‘world risk society’, wherein the continual advancement of knowledge and technology introduces additional layers of uncertainty. Consequently, institutions, political structures, and modern science are perpetually challenged to address and resolve these complexities (Beck, 2009). Furthermore, GenAI models continue to be re-appropriated, accidentally – as in the LOAB case – or on purpose by expert communities within the Undersphere. The static ‘risk object’ identified for assessment by regulators will thus constantly and unpredictably splinter into a swarm of risk objects.
The context in which a GenAI system is deployed and used – the setting that risk assessment must consider – is rarely isolated from other socio-technical systems, that is, from the ‘objects at risk’. Most likely, as in the case of deepfake porn, GenAI will be deployed in contexts that are dynamic components of multilayered networks of technological artefacts, people, and their socio-political arrangements (see Ferrara, 2024). That is, even if the local risks created by implementing GenAI applications in well-defined contexts can be mapped and predicted (something we doubt), the ramifications of integrating these environments into a social media ecosystem are inherently unforeseeable. Independently of their planned applications, GenAI risks are better classified as complex risks.
Critical to the definition of complex risk is the unforeseeability of features emerging from interactions across the boundaries of conceptual system components (Simpson et al., 2021). GenAI should be expected to interact across the broader social media ecosystem, its social, political and economic layers, and other technologies for content creation and distribution (i.e. social media platforms). 2 In this sense, measuring the risk of GenAI within the boundary of a single application and a single environment will not tell us how GenAI will interact with the broader social media ecosystem and its internal dynamics.
The concept of complex and emergent risk has been applied to describe and assess the impacts of climate change. The Intergovernmental Panel on Climate Change (see Field and Barros, 2014) defines a complex risk as a risk produced by the dynamic interaction of multiple components across complex systems. The diffusion of GenAI and its reconfiguration through the Undersphere can dynamically interact with other, distant parts of the system, creating risks that are difficult to foresee and, therefore, to control. For example, it is very difficult to assess and control the risks resulting from the interaction between evolving GenAI technology and the job market, the infrastructure of political communication, or the social interactions of children and teenagers. Instead of merely quantifying predictable or acceptable risk, a complex systems approach allows us to better qualify specific dimensions of the risks connected to the massification of GenAI and to devise responses and interventions on multiple levels involving multiple actors. Using a typology developed by climate science (Simpson et al., 2021), we can understand the risks implied by the deployment of GenAI into social media ecosystems and their Underspheres as follows:
Interconnected risks, as GenAI applications connect through the social media ecosystem;
Amplified risks, as content created by GenAI can be expected to be amplified by social media algorithms as well as by mainstream media platforms (indeed, the response of content distribution algorithms will likely be used in a feedback loop to fine-tune GenAI models to improve the reach of the content they create);
Cascading risks, as content created through GenAI – but also its pre-trained and fine-tuned models – is appropriated and re-appropriated by the Undersphere; and finally,
Systemic risks, as the production and dissemination of GenAI content can have systemic social effects, such as the reduction of trust in political processes or in the quality of political conversations.
According to a review conducted by Luther et al. (2023), an effective risk management framework for complex systems should be defined by five attributes: functional abstraction, to capture (unforeseen) emergent features and not only design failures; qualitative (rather than quantitative) risk assessment, as unrealised negative events cannot be estimated statistically; systems thinking, to capture interdependencies; a recursive epistemic approach, to ‘avoid the misconception of complete understanding’ (p. 7); and, finally, the capacity to communicate ‘uncertainty and nuance in complexity’ (p. 6). As with climate change, the impacts of the development of GenAI by corporations and the Undersphere will reverberate across all layers of our socio-political systems. Social media technologies facilitate the accumulation and diffusion of practical generative capacity, and the distribution of the generated artefacts. This is why we need a risk framework that can comprehensively embrace the scope of these impacts. We also predict that the most negative (and consequential) risks will not be caused in a market setting by corporate providers’ misuse or misdesign of the technology. Rather, most of the negative impacts are likely to emerge outside the boundaries of a regulated market and outside of its logic.
Methodology: case studies and regulatory frameworks
We use a case study to examine the impact of rigid regulatory measures – lacking the adaptability of complex risk frameworks – on the dynamic landscape of GenAI technologies, focusing on the widespread dissemination and manipulation of these technologies within informal or unregulated environments. This approach aims to investigate ‘a contemporary phenomenon (“the case”) in depth and within its real-world context’ (Yin, 2013: 37). The analysis employs a single case design (Levy, 1988) to qualitatively examine and portray a unique, yet increasingly prevalent, pattern in the proliferation of GenAI. Utilising the EU AI Act as its regulatory example, the study focuses on the subreddit r/StableDiffusion – an example of a forged Undersphere – to illustrate how the regulatory frameworks of current AI governance inadvertently overlook vulnerabilities manifesting within the Undersphere. We identify the emergence of this community and its dynamics as a ‘critical moment’, where the implicit and explicit norms overseeing the technology ‘break down’ (Hofmann et al., 2017) as the ‘risk object’ escapes the perimeter of regulation. In this respect, a more adaptive governance approach, akin to those found in climate change governance frameworks, is essential to address the evolving risks associated with the proliferation of GenAI technologies. While it is acknowledged that single case studies cannot offer the insights that multiple case studies may provide (John, 2024: 51), the case examined here serves as a critical benchmark for considering how the concept of the Undersphere can progress our understanding of the dynamics of technological appropriation and misuse, while accentuating the necessity for more flexible governance that can respond effectively to the complexities of this rapidly evolving landscape.
r/StableDiffusion, deepfake porn and insecurity
r/StableDiffusion 3 is a subreddit with over half a million members and a prime example of the Undersphere. Stable Diffusion itself is an open-source, deep-learning, text-to-image GenAI program. The subreddit is a platform where members can share their creative works, ask questions, and explore ways to modify the program to produce different results. The focus within r/StableDiffusion is transactional, revolving around sharing expertise and seeking recognition: the main incentive for participating, as in any Undersphere, is the acquisition, dissemination, and application of knowledge, rather than social interaction, community building, or uniting under a common belief system. Member relationships are minimal, often conducted anonymously or at a distance, and interactions are generally limited to specific questions, answers, and the exchange of information and data.
The content shared on r/StableDiffusion may seem harmless at first glance, featuring images of wizard cats, upgraded food photos for social media advertising, and some pictures of scantily clad women. The community standards 4 clearly outline what is and is not allowed, with a strict zero-tolerance policy for political discussions. Thus, r/StableDiffusion – much like any Undersphere – serves as a united collective for creative pursuits and a place for learning and elevating creativity. However, the very same community standards page – which attempts to maintain the community’s apolitical nature – promotes subreddits such as r/unstable_diffusion 5 and r/sdnsfw 6 – spaces for creating and sharing NSFW (not safe for work) AI images. While certain NSFW subreddit variants have been banned 7 for violating Reddit’s content policies on ‘consensual intimate media’, they are quickly replaced, some boasting 150,000 members.
In addition to encouraging splinter groups with less benign intentions, the discoveries, programming code, prompts, and generated content found on r/StableDiffusion are not confined to that subreddit alone. They are also stored on platforms like HuggingFace 8 and shared on mainstream platforms such as Discord, 9 as well as less mainstream ones such as 4chan (unavailable in some states but mentioned frequently on r/StableDiffusion), rentry.org 10 (for sharing prompts) and catbox.moe 11 (a crowd-funded service for sharing files). These code snippets and pre-trained models are often used for benign purposes. However, they have also been put to less benign uses, including creating NSFW and adult content without the consent of the individuals depicted. Disturbingly, Stable Diffusion has been reported to have been used for creating child abuse material (Crawford and Smith, 2023).
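The technical barrier to this circulation is strikingly low. As a hedged sketch of the mechanism, assuming the Hugging Face diffusers library (the repository name and filename below are hypothetical placeholders, not references to actual shared models), a checkpoint published to the Hub, or exported as a single weights file and mirrored on any file host, can be reloaded in a few lines:

```python
# A hedged sketch of how community-shared weights circulate.
# Repository ID and filename are hypothetical placeholders.
import torch
from diffusers import StableDiffusionPipeline

# Loading a (hypothetical) community fine-tune published on the Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "some-user/community-finetune",  # hypothetical repository ID
    torch_dtype=torch.float16,
).to("cuda")

# Alternatively, loading a single checkpoint file obtained from any
# file-sharing service (e.g. the kinds of hosts named above).
pipe = StableDiffusionPipeline.from_single_file(
    "downloaded_model.safetensors",  # hypothetical local file
    torch_dtype=torch.float16,
).to("cuda")
```

Once loaded, such a model behaves like any original pipeline; whatever behaviours a fine-tune encodes travel with the weights themselves, beyond the reach of any single platform’s moderation.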
Through r/StableDiffusion, we can observe the defining mechanisms of the Undersphere at work. The Undersphere is a collective of creatives united by the desire to exchange creative works, knowledge, and expertise. It is a group that does not seek to form communities or deeper relationships among its members, nor does it function as a counterpublic employing counterpublic logic: Underspheres provide a space for learning, though not necessarily a space for agitation. Instead, members explore the various ways in which creativity can repurpose technology. The organisation of additional threads leading to more graphic content – some lacking the consent of those featured – illustrates how Underspheres function beyond platform logics, how they spread to more obscure realms of the Internet, and how their outcomes can remain relatively unknown. While presenting as apolitical, the outcomes of the Undersphere’s creative pursuits can carry harmful intent, deviating from the values of well-functioning democracies (such as rights and respect for privacy) and creating insecurities within these political systems. However, regulatory measures regarding GenAI largely overlook the potential insecurities resulting from its widespread dissemination, as evidenced by the recently ratified European Union Artificial Intelligence Act (AI Act). 12
The EU AI Act: regulatory rigidity
The EU AI Act adopts a risk-based approach to ensure the trustworthiness of AI systems. Initially, the Act aimed to categorise AI applications as ‘unacceptable’, ‘high’ and ‘low’ risk (European Parliament, 2023a). For instance, ‘high-risk’ AI applications included various applied AI systems, as well as those used in toys, aviation, cars, medical devices, and lifts (European Commission, 2023: 4). GenAI was not initially categorised; according to earlier parliamentary notes, it was considered ‘low risk’ as long as the program ensured that users understood they were interacting with AI-generated content such as videos, images, and audio (European Parliament, 2023a). After extensive deliberation, the final revision presented a risk-based approach developed through the lens of probable uses and their impact, with regulatory interventions ranging from guidelines to outright bans (Paul, 2023: 2). With GenAI programs such as ChatGPT in view, the Act’s final version also included uses that could harm democracy and the rule of law. For example, Chapter II, Article 5 13 prohibits – among other usages, such as biometric categorisation systems, social scoring, and the broader use of remote biometric identification (outside law enforcement) – manipulative deployments that could ‘impair’ human behaviour and people’s ability to make informed decisions. The Act also regulates high-risk applications, such as those outlined in Chapter III, 14 including critical infrastructure (transport), essential private and public sector applications, migration and border control, and the administration of justice and democratic processes (e.g. court rulings). Within Recital 53, 15 a definition of limited risk is found, which covers applications aimed at enhancing or improving the result of a human activity, such as proofreading, copy editing, or, in specific contexts, decision-making. While a watershed in AI policymaking, the Act’s definitional risk parameters and risk-based approach have been widely critiqued through various lenses (see, for example, Leufer and Hidvegi, 2023; Ruschemeier, 2023). We argue, however, that the potential for substantial risk rests in the decision to frame risk itself as probable, and thus known. While Underspheres such as r/StableDiffusion illustrate the various ways in which deviation from intended use can occur, AI regulation, such as the recently adopted European Union variant, continues to regulate what is known, forsaking the risks of the unknown or less probable.
During the negotiation phases of the EU AI Act (see European Parliament, 2023b), members asserted that regulating AI protects ‘health, safety, fundamental rights and democracy from its harmful effects’. The EU’s aspirations are not an exception: most regulatory frameworks and principles seek to safeguard the tenets of liberal models of governance and well-functioning democracies, such as human-centricity, trustworthiness, and transparency. However, human-centricity, in the sense of protecting an individual’s rights as a tenet of liberal democracies, appears absent from regulatory measures aimed at protecting democracies in the wake of increased technological diffusion. As seen in the example of r/StableDiffusion, evolving technologies come with evolving risks, and heightened expertise in applying these technologies for intentional or unintentional harm is shared among ever wider audiences at ever greater speeds. With technologies paving the way for affordances that could be used for harm, a focus on probable-risk-based regulatory frameworks could leave individuals, and the democratic frameworks in which they reside, vulnerable to insecurities derived from largely unregulated AI applications. These lessons, applied to the realm of GenAI, can be evidenced through digital histories that illustrate how potential harms develop, such as those of deepfake technologies.
Deepfakes, a precursor
Deepfakes highlight the potential insecurities – namely, risks to privacy and rights – stemming from the appropriation and misuse of technologies within Underspheric spaces. They also act as a precedent: a means of highlighting the implications of developing AI regulation based on intended or probable use for technologies whose outcomes – once in the hands of the Undersphere – are unpredictable.
Deepfakes have attracted significant attention since the term started circulating in the 2010s. As is now well known, these technologies were intended to produce computer-generated videos that were both highly realistic and fake, or synthetic (Vaccari and Chadwick, 2020). They allowed users to portray a real or fictional person performing some action by replacing the face in an existing video or creating a new one from scratch based on a model. However, despite this intended use, a subsection of users deviated, employing the technology to produce pornographic videos and images, removing the faces of adult film performers and replacing them with the faces of other women: celebrities, singers, entertainers, and politicians.
The first user-generated deepfake porn was disseminated on another subreddit, r/deepfakes, in 2017, by the user ‘deepfakes’ (Gamage et al., 2022: 3). In these early stages of deviating applications, the content comprised seemingly harmless iterations of users’ fantasies. From these origins, deepfake porn moved beyond the parameters of subreddit communities and into mainstream use. By 2019, 14,678 deepfakes were visible online (Ajder et al., 2019). By 2023, an analysis of 95,820 deepfake videos (see Security Hero, 2023) found that the figure had swelled by 550%, with the lion’s share – 98% – classed as deepfake pornography videos.
Deepfake porn was quick to evolve, and the intent of its production presented heightened levels of harm. While porn can be considered a liberalising experience – relieving consumers of feelings of repression, opening minds to new sexual experiences, and increasing tolerance of other people’s sexuality (McKee, 2007: 88) – there are notable criticisms. According to critical feminist scholars, pornography can be seen as misogynistic propaganda, presenting women as submissive and existing largely for the pleasure of men (Van der Nagel, 2020: 425). The statistics bear this out: while men are also doctored into deepfake pornographic videos and imagery, 99% of deepfake-generated porn features women (Hurst, 2023). The proliferation of deepfake porn and its non-consensual nature leaves women feeling violated and helpless (Hao, 2021). Deepfake porn presents a risk to individual rights and the right to privacy, impinging on a fundamental principle of liberal democracies. Jaiman (2020: 96) argues that deepfake porn can inflict reputational harm that devastates an individual’s ability to maintain their livelihood, and poses a security risk, with cyberstalkers employing fake sex videos to torment their victims. Finally, deepfake porn can destabilise democratic processes by undermining candidates (Maddocks, 2020). As such, deepfake porn – like GenAI-produced porn more broadly – poses an inherent risk to several tenets of liberal democracies: the protection of individual rights, freedom from harm, the right to opportunity and enterprise, and security.
However, as with GenAI, policymakers did not initially address these appropriations of deepfake programs. As access to deepfake technology increased, regulators were slow to respond. It was not until 2019 – years after the emergence of deviance from intended use – that attempts to regulate began to emerge globally, albeit sporadically. Similar to the risk-based punitive measures evidenced in the EU AI Act, iterations of deepfake regulation – such as those developing in South Korea 16 – sought to criminalise or prohibit the dissemination of deepfakes. For example, attempts were made to strengthen criminal measures concerning deepfakes within the UK Online Safety Act 2023. 17 Yet deepfakes and their capabilities increase with each technological improvement. According to experts (Van der Sloot and Wagensveld, 2022), 90 per cent of all digital content will be manipulated within a few years. Regulatory measures, however, continue to display rigidity regarding which elements of deepfakes can be regulated, and responses remain fractured among states, impeding unified efforts to protect democracies from the impacts of deepfakes globally. With increases in AI technology, and its malleability among individuals who share their expertise in the chatrooms and domains of the Undersphere, rigid and slow-to-respond regulatory mechanisms could hamper any means of protecting against risk. As such, we propose a risk governance framework that would, as outlined above, provide the fluidity and flexibility needed to address the risks not yet known in GenAI.
Applying complex risk governance frameworks
As Hofmann et al. (2017) note, governance, understood as a form of social and reflexive coordination, is preferable to regulation, which primarily concerns itself with ‘intentional and goal-directed interventions into a policy domain with the aim of influencing others’ behavior’ (p. 1418). The case of deepfakes exemplifies the inadequacy of regulation in anticipating and addressing the emergence of ‘critical moments’ wherein existing practices and norms may disintegrate, necessitating a revision of rules by the involved actors (Hofmann et al., 2017). Given the intricate and multifaceted risks emerging from advances in AI technologies, coupled with the collective experimentation in repurposing these technologies ongoing in the Underspheres and the rapid scaling of their distributed expertise, such ‘critical moments’ are likely to occur, as guardrails (assuming providers decide to set them) are eventually bypassed or broken through. Ferrari et al. (2023) identify the essential components that an effective governance framework must encompass: observability, which enhances understanding of the broader structural dynamics inherent in the AI industry; inspectability, which allows for rigorous examination of all constituents of GenAI systems, ranging from training data to neural architectures; and their modifiability. A framework for reflexive governance serves as the arena where complex risks can be politicised, and where interventions can be deliberated and defined (Ferrari et al., 2023). Such a framework challenges the AI industry’s assertion of ‘epistemic purity’, wherein ‘social problems are cleansed of complexity and uncertainty’ (Hong, 2020: 8).
Conclusion
LOAB now resides in an almost forgotten state of digital folklore. The occasional Reddit thread alludes to her, with users sometimes announcing that they have found her, yet refusing to share the intricate process of negative prompt weights on a still unknown GenAI program. However, the lesson illustrated by Swanson and the production of LOAB remains. The vast unknowns and unpredictability of GenAI technologies, when placed in the hands of creative communities or Underspheres, can lead to misappropriation, which can produce negative results. This article introduced the concept of the Undersphere to provide a conceptual framework for understanding how the misuse and appropriation of such technologies – whether for creative good or harm – can impact the fundamental tenets of liberal democracies by presenting significant risks to individuals’ rights, privacy, livelihoods, and safety. Policymakers, with their sights firmly set on regulatory mechanisms to protect the broader public from the impacts of misuse, produce policies that have yet to consider these unknowns or to base recommendations on the observation of digital histories. The risk-based approach of the EU AI Act, which considers probable uses, may not address the potential threats stemming from GenAI and its (mis)appropriation. It is these unknowns that policymakers should take into account, particularly through the governance of Underspheric communities, whose creativity could serve the greater good or inflict irreversible harm on fragile social and political systems.
Footnotes
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
