Abstract
The recent statement of future (human) extinction by Artificial Intelligence (AI), made by the Center for AI Safety, crystallises techno-capitalists’ lack of care and responsibility for environmental and social harms that digital technologies already produce. Rather than identifying their collective roles and responsibilities in addressing these harms, the Center’s statement highlights future risks, and ties these to global extinction threats. There are, however, alternatives to such an approach, as articulated in research at the intersection of geographies of care and digital geographies, including that which is bringing together more-than-human and more-than-real approaches. While the software and hardware of AI continue to present amorphous challenges to the same techno-capitalists who benefit from them, we could instead prioritise care, repair and responsibility in/of AI to address current problems in this area.
Introduction
In June 2023, the Center for AI Safety communicated a threat of mass extinction from AI and tied this possibility to other global-scale threats. As a text and socio-political strategy, the statement offers insights into the flawed framing of much techno-capitalist digital world-making, and is a continuation of well-worn universalising paths that have been roundly critiqued. It is, however, also a new public rhetorical gesture, with as yet unknown consequences, as it aligns AI with global level threats from an unspecified source. In this instance, the framing involves centring AI as the causal agent and removing responsibility from corporations and governments to manage the effects and impacts of the components that render AI as an identifiable entity, including data, digital infrastructure such as digital devices and ‘the cloud’ (Amoore, 2018), algorithms (Maalsen, 2023), poles, satellites and wires. The implication is that this entity, or assemblage, would lead to extinction.
The wording of their preamble and statement follows: ‘AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.’
The Center communicated this statement with hundreds of endorsements from highly placed employees in techno-capitalist corporations and many academics (mostly from the discipline of computer science). The aim of the statement was, they purported, to elicit action to stop this globally scaled peril and it received widespread mainstream and social media attention and critique. The purpose of this commentary is to examine this statement and to imagine a way that the future of AI could be differently conceptualised and communicated, including in terms of managing its (unintended) risks. Situating this statement within an empirical analysis of the Center is beyond the scope of this commentary but could be a fascinating future research project.
The subject of extinction, or even the species that is at risk of extinction, is not articulated in the statement, and we are left to presume that it is humans that will be exterminated. Such a conclusion seems reasonable given that the day following the statement’s circulation, mainstream news carried stories of how an AI drone ‘killed’ its operator (see, for example, Whelan, 2023) in a United States-based army drill, which was quickly found to have only occurred in a simulation. There is a form of human exceptionalism in the framing of the statement, and the associated distancing from the multiple species and biomes already lost, damaged or threatened with extinction as a result of human activity; the current global environmental crises, produced by colonial and capitalist trajectories (Todd, 2015; Whyte, 2018) are not a part of this AI picture. Furthermore, the narrative of impending doom is all encompassing and disallows recognition of the specific ways that AI lands, similar to how universalist modernist imaginings foreclose recognition of a patchy Anthropocene (Tsing et al., 2019).
Who or what is responsible for caring for humans, non-humans and AI in this context? And what might previous research on geographies of care (and carelessness) illuminate with respect to this latest rhetoric on planetary scale extinction? This commentary outlines a pathway for addressing these questions, drawing on recent care-full geographic and digital geographies scholarship to help illuminate the ways in which responsibility of/in AI is generating a new wave of panic and concern, while simultaneously extending pre-existing damaging practices. Care-full geographic and digital geographies scholarship has not yet been explicitly brought together to speak to discourses and practices of AI expansion; this mapping out of an alternative pathway to unfettered fear generation offers a starting point that may be of interest to scholars of both.
I begin by situating AI within long-standing debates on digital technologies and the Anthropocene and then outline geographies of care literature that is engaging with responsibility and repair as key tropes. I then discuss the emergent geographies of AI literature that offers constructive alternatives to formless, vague future threat-making. Finally, the article offers possibilities for theorising human geographies of AI by considering them as more-than-real, drawing on recent scholarship by critical and generative digital geographers on that concept. In doing so, I centre responsibility, care and repair in relation to AI, and continue geographic work that integrates these considerations. I propose that we need to change the conversation on AI futures: to shift from alerting people to amorphous and unspecified risks, and instead begin considering the more-than-real geographies of AI in the here and now, bringing into focus issues of responsibility, care and repair in and of these digital technologies.
Situating responsibility in AI: Digital tech or tech bros killing us all?
We are all at risk of extinction, but no one is responsible, the statement on the threat of AI effectively claims. Consequently, the existential threat from AI runs in parallel with that generated from the unsustainable practices of the moderns, or capitalists, depending on where you sit with the Anthropocene concept. The end of civilisation, again, as if such endings have not transpired, and the flattening of variable levels of accountability and responsibility, again, as if such oversights are not now commonplace. And without reference to the origins, causes and potential possibilities of what that really means, and whether wrangling, resisting and challenging, again, that which might ‘kill’ is feasible. Furthermore, it is without a possibility of alternative futures, different paths, full of thriving otherwise (Elwood, 2021), digital worlds where responsible AI machines sit alongside other forms of digital technologies, entangled with human and more-than-human worlds in generative rather than inevitably destructive relations. Just as the Anthropocene concept has been critiqued for flattening and limiting opportunities for rupture and remaking (Head, 2014; McLean, 2016), so too does this statement deflect attention from those who benefit most from AI’s creation and expansion, and from the need for them to take responsibility for its impacts.
AI as a global level threat is conjured here as if the massive and undeniably present global environmental threats have not already been felt, however unevenly. Furthermore, it is as if the vast scholarship on the Anthropocene simply does not exist, including that which reconfigures what the proposed epoch means with respect to the digital (McLean, 2020a; Parikka, 2014). Of course, within most disciplines – including the sciences, social sciences, humanities, education, law and the creative arts – the multiple global level environmental challenges that now shape the daily lives of so many have been debated, contested and reframed. The Anthropocene has been pervasive and ubiquitous yet somehow held at bay in this declarative statement. Rather, pandemics and nuclear war are invoked as likely extinction threats for humans.
By overlooking the Anthropocene epoch, deliberately or otherwise, the rationale for putting forward a statement forecasting human destruction without identifying the most immediate and tangible forms of threat to human (and non-human) life has been inadvertently made clear. In simpler terms, this future forecast of human destruction by AI is a red herring as it distracts from the everyday and persistent unsustainability of big tech. Instead of drawing attention to troubling business models and practices that are tied into imperatives for limitless growth that structure our techno-capitalist systems, and that also play a significant role in producing Anthropocene conditions, they proffer the extinction of humans by rampant AI. In some ways, this fear of AI machine-induced destruction is nothing new: Wajcman (2004: 94–95) argued that ‘The machine that transcends its programming and becomes autonomous is a common figure in contemporary science fiction. This recurring story about how we have lost control over, and are even destroyed by, the machine we have created is the stuff of our collective unconscious and our nightmares about the future.’
The statement of extinction via AI walks along that science fiction path but somehow escaped mentioning that the Center’s members are the ‘we’ who ‘have created’ these nightmarish futures.
At the same time as ignoring the Anthropocene, the statement makes rhetorical moves that are on a par with reductionist framings of this epoch. The universalising language of the statement is a similar conceptual move to that which enlivens Anthropocene thinking: the age of ‘man’ as a force of geologic change and agent of environmental harm on many fronts attributes blame across all humans, without recognition of racial discrimination, settler colonial structures and imperial powers (Yusoff, 2018). As Indigenous scholars including Kyle Whyte have argued, the Anthropocene signals a looming threat for all humans without recognising that such challenges have landed through invasion and colonial force and that Indigenous peoples have endured and resisted these processes. Instead of imagining a homogeneous ‘man’ that is both responsible for generating and is then subject to the Anthropocene, Whyte (2017) suggests that Indigenous peoples envisage climate change futures from their perspectives ‘(a) as societies with deep collective histories of having to be well-organized to adapt environmental change and (b) as societies who must reckon with the disruptions of historic and ongoing practices of colonialism, capitalism, and industrialization’ (Whyte, 2017: 154).
These collective histories of organised societies and capacity to reckon with massive disruptive forces are the crux of Indigenous climate change science for Whyte. In contrast, the statement of AI-induced extinction threats perpetuates modernist rhetorical gestures but with a veneer of concern for preventing future harms.
The uneven distribution of care and responsibility captured in the statement maps on to how human-environmental crises are conceptualised. The Anthropocene flattens out responsibility for the production of global environmental crises, and overlooks past and present settler colonial and imperial socio-cultural relations. Similarly, the statement does not identify which individuals or corporations contributed to, and benefit from, the techno-capitalist system that propels AI, and it generalises the existential threat to all. The effect of such obfuscation is a diminishing of nuance, abolishing of responsibility and absolving of care. Other ways of thinking are possible here, as Todd (2015) reminds us with respect to the Anthropocene: ‘not all humans are equally implicated in the forces that created the disasters driving contemporary human-environmental crises, and I argue that not all humans are equally invited into the conceptual spaces where these disasters are theorized or responses to disaster formulated’ (p. 244).
The statement could have led with recognition of the vast already happening environmental and social harms rendered by AI and other digital technologies but instead conjured harm as future oriented and apocalyptic.
Geographies of care
While risk is mentioned several times in the statement and preamble, care is not, and this is a missed opportunity, especially given that significant care will be needed if we do want to avoid such possible extinction. An alternative orientation to the future harms of AI could draw on the extensive scholarly engagement with care, including on the geographies of care, that considers the multiple ways in which care is formed by complex and place-based relationalities (see, for example, Power and Williams, 2020; Power et al., 2022; Raghuram et al., 2009). Practices of care are enabled and constrained by structural realities, including economic resources, and shaped by cultural and social norms, and hence are place-based. Here, I discuss the geographies of care literature in the following three main areas: care as a practice that is situated and contingent; care within digital worlds and stretched across spaces; and the tensions in negotiating care.
Offering a broad and inclusive take on care, Fisher and Tronto’s feminist political definition has been widely taken up across different disciplines and particularly in urban geographic scholarship. Fisher and Tronto (1990) state that care includes ‘everything that we do to maintain, continue, and repair our “world” so that we can live in it as well as possible. That world includes our bodies, our selves, and our environment, all of which we seek to interweave in a complex, life-sustaining web’ (p. 40).
Repair is a key component of this definition – care does not involve a set and forget orientation, or a sense that problematic relationalities can just be deferred or sidelined. Rather, to care is to enable living as well as possible and that may require fixing broken systems or practices. The aspirational component of this definition is important: Fisher and Tronto ask us to think about how ‘we seek to interweave’ these various components and implicitly acknowledge that these are never complete, but always being made.
Care and repair are considered in conversation between Carr (2023), Osborne (2023) and Stein (2023) in a Dialogues of Human Geography contribution on the work of climate crisis, considering social processes of both adaptation and mitigation. Carr (2023) begins her analysis by outlining the Tradies for Fire Affected Communities (TFFAC), a social media enabled group that began in early 2020 during our last bushfire crisis in southeastern Australia. Within 3 days, over 5000 people had joined the Facebook TFFAC group and started the work of reconstruction after the immediate fire threat had abated. Carr makes the compelling argument that the workers who organised and contributed to TFFAC were well equipped, in terms of their skills and capacity, to respond best as ‘civic subjects’, and that this capacity can be read as being in paradoxical tension with their positioning as workers who make livings off the same industrial complex that has contributed so much to our climate crisis (many of them were coal miners). More analysis is needed, Carr (2023) argues, on how people work within industries that are carbon emission intensive, and to better understand how those people have applied these skills when adapting to climate change realities such as the devastating bushfire crisis. Interestingly, the social networking platform that enabled the coming together of so many skilled people, and so quickly, is not framed as a crucial part of this repair and care relationality. For Carr (2023), care is work, and that work is partly contingent on lived experiences rather than digital affordances.
In response, Osborne (2023) turns to critical disability and crip studies to read these relationalities with a view to unpacking imperatives of care that are held in tension. Osborne (2023) invites us to look beyond moments of crisis so that we consider ‘the work of care, community organising, community support, and the practicalities of living and surviving and beyond in a world not meant for you, that is hostile to your existence, is work already underway’. Queer, crip and anti-settler colonial approaches attend to these care work activities and have long considered how surviving power inequities and structural violences occurs. For this Dialogue, the politics of caring are situated: questions of who is enabled to engage in repair work, and how this is patched together despite impediments, are positioned in relation to Anthropocene moments and ruptures. Resonating with Gibson-Graham’s (2011) invitation to think about belonging in the Anthropocene anew, these contributions consider the roles of care and repair as emplaced and contingent.
Careful digital kinship offers another productive avenue for thinking differently about human–digital relations. Building on geographies of care scholarship, Hjorth (2022) develops the notion of careful digital kinship to explain the entanglement of digital, social and cultural worlds in material and immaterial contexts, and to emphasise relationality and continuity rather than disruption. As a concept, careful digital kinship builds on this geographies of care research in a productive way with an emphasis on social media care relations.
Earlier geographies of care research has considered how place and space shape possibilities and experiences of care. For instance, Bartos (2018) takes up Fisher and Tronto’s definition of care and suggests that ‘relations of care are relations of power’ (p. 67). This confronts the often made assumption that care is a private activity, a point also taken up by Power and Williams (2020) as they evaluate whether cities can be caring places. Place is crucial for Raghuram (2016), who argues that different ways of living invite a consciousness of the plurality of care: care is situated and contingent. Furthermore, the contributions of scholars who work to decentre the Global North have emplaced care in meaningful ways. For instance, by positioning conceptualisations of care beyond the Global North, Raghuram (2016) has offered contributions to the geographies of care scholarship that ‘trouble’ how care is understood, stating that it is important to emplace an ethics of care if we are to fully appreciate its variability. Care practices change depending on context (Raghuram et al., 2009), and care is an embodied and ethically situated activity (Popke, 2006).
Care-full geographic approaches involve positioning care relationally by ‘exploring its complex connections to responsibility, ethics and feelings, its political and cultural economies and materialities, and the ways in which it is lived as lacking for many and/or abundant for others’ (McEwan and Goodman, 2010: 109). The uneven geographies of care are important to note as to assume that care is available to all, in the same way, is a careless assumption in and of itself. Human geographic research has clearly offered much in the way of the spatiality and place-based qualities of care and bringing this into conversation with the way responsibility is positioned in AI systems, and in relation to AI processes, may help to continue this work.
The ubiquity of care is due to its ordinariness, in part, but also due to the acute consequences that arise when care is not shared or experienced evenly. The everywhere-ness of care is a concern that Puig de la Bellacasa (2017) discusses as the ‘ambivalent significance’ (p. 2) of care when introducing her analysis of naturecultures and our/their entanglements with technoscience. She elaborates upon three key dimensions of care, engaging with and building upon Fisher and Tronto’s aforementioned definition: labour/work, affect/affections, ethics/politics. Labour/work dimensions of care foreground doing care and the labour involved in giving care; affect/affections relates to the emotional terrain of care and complicating the feminine definitions of who does care work; and the ethics and politics of care are contested and well-studied, explicitly inviting us to think about how to care. While this article poses the provocation of who cares for AI, the ‘how’ of doing such care work is also crucial, which is why Puig de la Bellacasa’s thinking is invaluable here.
Digital worlds and the care/carelessness found therein have recently been considered in reflective analysis of algorithms: Maalsen (2023) considers how ‘algorithms are working for, with and against us’ in a paper on harm, care and situating algorithmic knowledges. Drawing on a feminist approach to care, informed by Tronto’s (1993) and Puig de la Bellacasa’s (2017) theorising, Maalsen (2023) puts forward that the entanglements between humans and non-humans are a strong focus in geographic research and that the ways that care moves between agents can be extended to algorithms. As algorithms are a key part of how AI functions, the complexities that Maalsen evaluates here are informative for any efforts of centring care and repair in this context.
Emphasising the caring ways that humans relate through the digital is emerging as a strong strand in digital geographic research but began before this more recent turn (a turn considered by Ash et al., 2018 and McLean, 2020a). For instance, Longhurst (2013) studied how mothering over long distances is made easier through digital video calls. Care across distances (Raghuram et al., 2009) is enabled rather than impeded by the digital in this case. In compelling and related research on mobile phones as a tool of everyday care, Hall (2022) demonstrates that smartphones support effective parenting practices and reduce the sense of isolation that can accompany the start of parenthood.
Analysis of the sociotechnical systems and the practices that undergird these is found in Massey’s (1998) reading of the binaries that construct work practices, gender relations and care possibilities in the context of ‘high-tech’ in Cambridge. Massey found that those working in this industry problematically hyper-separated their work for high-tech companies from care responsibilities, deepening the common distinction between ‘abstract and completely “mental” labour on the one hand, and the “rest of life” on the other’ (Massey, 1998: 72). The distinction between intellectual/mental work and whole-of-body work rests upon well-worn tracks of modernist scientific masculinity, Massey argues, and is reiterated in the statement on AI-driven extinction. The mental work of calling out future existential threat is performed in the statement but the caring for the ‘rest of life’ sits with all those who hear, and are exhorted to respond to, the Center’s statement. More recent work by Wilmott (2023) also queries the structural and representational issues of computational politics, inviting geographers to ‘draw more deeply from ongoing interventions in Black, queer, and Indigenous perspectives in critical computation beyond geography’ (Wilmott, 2023).
Powerful research by Benjamin (2019) on race relations and digital technologies demands that we think about how to remake digital worlds so that racial bias is not codified. The placeless, faceless nature of AI, both today and in future imagined digital worlds, furthers the injustices relating to colonial and imperial processes that accompany techno-capitalist realities. In a paper on how to weave together more accountable digital geographies, Rivera (2023) foregrounds anti-colonial relationality, where actual reparations and refigurings of material realities, including returning land and centring Indigenous data sovereignty, work to overcome past and present injustices.
The onto-epistemological dilemmas posed by the statement on likely AI-induced extinction are significant and show a limited understanding of possibilities of alternative futures. If we bring an understanding of responsibility and care to reflect on how this existential threat was constructed, and communicated, the ramifications of it are more evident. Puig de la Bellacasa (2012, 2017) examines care as a politics of knowledge for sites where technoscientific-naturecultures meld. In this sense, her writings on care are influential here as they bring to light power relations within entangled more-than-human worlds in compelling ways, adding to Raghuram’s geographic inflection of ‘troubling care’. Taking these modes of working with care into the future-oriented world of AI asks us to move away from alarmist rhetoric to more measured and considered positionalities. Existing geographic research on AI provides constructive groundwork for this alternative pathway.
Emerging geographies of AI
The problematic neocolonial and racial logics of AI have been trenchantly critiqued in recent digital geographic scholarship. For instance, Nost and Colven (2022) analyse how climate change action through AI often involves digital solutionism and greenwashing rather than substantive reductions in carbon emissions. These tendencies track with other digital interventions in environmental dilemmas, where another layer of managerialism and more data-based information is emphasised rather than addressing political and structural problems (McLean, 2020a, 2020b). Emergent geographies of AI call out tokenistic digital solutionism and ask big tech for better justifications when pursuing endless innovation.
In an illuminating chapter on geographic scholarship of AI that offers complementary lines of argument to this article, Birtchnell (2021) describes how AI is moving apace but has yet to demonstrate that it can apply common sense, or ‘sound judgement in practical matters’ (Birtchnell, 2021: 18). Such a lack of common sense has impacts in a range of digital geographic contexts that human geographers are studying, including the consequences of flawed AI in smart cities applications that deepen mobilities inequities (Birtchnell, 2021). Geographic approaches, Birtchnell (2021) argues, will bring analyses that centre specificity and context to bear on our understanding of current and future AI–human relations. Aligned with this, geographies of care and responsibility could situate and ground the fraught debates that are ongoing about these digital systems. While Birtchnell (2021) does not focus on care and responsibility in his chapter, the consequences of a lack of common sense in AI applications are saturated in these themes. For example, in the context of AI-supported gaming like Pokémon Go, Birtchnell (2021) and Woods (2021) articulate how AI places another digital layer on public spaces that facilitates corporate territorialisation, albeit in a playful way. Unchecked expansion of such AI-enabled presences would dramatically transform the lived experience of public places.
The declaration of (possible) extinction at the hands of AI is a continuation of threats including mass unemployment and robot takeover but the reality of the mundane qualities of actually existing AI are somewhat different from these imagined worlds. A recent and detailed examination of what AI looks like today is found in Crawford’s (2021) Atlas of AI, which, despite its title, does not engage extensively with geographic literature. Crawford examines how AI relies upon unsustainable extraction and waste management processes that are distanced from the usage of these tools. Rather than accepting the multiple injustices that are inherent to current AI systems, Crawford (2021) asks whether we should be using AI at all, shifting questions from where we could use such digital technologies, to why we are proposing to use them. The agency of humans in renegotiating AI presences and futures is a strong theme in Crawford’s analysis.
Within geographic research, the contradictory processes of AI are examined in varying contexts, most successfully when emplaced. For example, McDuie-Ra and Gulson (2020) analyse how AI has the potential to both reduce and exacerbate current realities of uneven development. In a case study on AI in India in the context of development interventions, McDuie-Ra and Gulson (2020) propose a distinction between precision AI, operated by the World Bank, and populist AI that is a ‘second-tier’ digital technology assemblage, a form of good enough tech from and for Indian citizens to use. They write that the unevenness of development, industrialisation and agricultural transitions are related to AI’s ‘concentrated geographies around tech hubs’ that will be celebrated as sites of success, while the costs of these transformations ‘in both the primary and ancillary workforce will be along the backroads, far from view’ (McDuie-Ra and Gulson, 2020: 631). Their analysis reflects the shadow places of the Anthropocene (McLean, 2020c; Potter et al., 2022), building on Plumwood’s (2008) theorising of how capitalist processes that deliver benefits to the Global North are associated with out of mind, out of sight costs for those not as privileged.
In a landmark special issue on Geographies of AI, editors Walker and Winders (2021: 164) describe how the collected articles examine the spatiality of AI and the ‘limitations and radical possibilities of AI’. The questions of whether AI fails or works effectively, and what sort of futures are rendered possible while others are abandoned, are all considered in setting up the special issue on geographies of AI but questions of responsibility are not at the core of this issue. As part of that SI, Walker et al. (2021) summarise how AI has not been a strong focus of study within geography but that it has been the subject of scholarship in other disciplines, including computer studies and organisational studies. They point out how ‘Artificial intelligence shapes and drives a number of increasingly popular topics in geography – social media platforms, apps of all sorts, the Internet of Things, banking, and finance’ (Walker et al., 2021: 203). The background nature of AI is worth highlighting in terms of the future existential threat we are thinking about here. If AI is in many socio-technical assemblages that are mundane and part of our daily lives, which one of these will kill us? More pointedly, Walker et al. (2021) show how AI is used to counter surveillance strategies and undermine digital injustices. They mention Comiter’s (2019) research on how the structural flaws in AI algorithms leave them open to attack and that this is a serious cybersecurity issue.
In another contribution to that SI, Lynch (2021) examines Artificial Emotional Intelligence (AEI) and suggests that social robotics, in the form of emotion recognition systems and emotion augmentation, will intensify previous problematic socio-technical assemblages. Specifically, ‘social robotics represent an intensification of existing practices of algorithmic governance and control by potentially linking the extensive systems and infrastructures of digital capitalism with the intimate spatial and affective practices of human-robot interaction’ (Lynch, 2021: 185). While the Center’s statement is not focused on emotional AI, we can certainly see echoes of Lynch’s argument in how the prior discursive tracks of alarmist futurism, such as the trope of robots taking over the world, are elevated to an extreme level in its rhetoric. Lynch suggests that the possibilities of AEI may extend beyond what the designers and engineers intend but that there are likely to be glitches in these systems, drawing on Leszczynski (2020), that mean this is unlikely to be seamless. The ruptures that will limit these overreaches and make them less predictable include ‘the complexity of the real world, the limitations and contradictions of the systems’ internal logics, and the indeterminacy and co-evolution of human-robotic affective capacities’ (Lynch, 2021: 197).
Lynch’s recognition of the complex dynamics of human–robotic relations is well-considered and prescient in the context of AEI. It is noteworthy, however, that the duality between AI in the lab and AI in the ‘real world’ is part of the formulation of how and why it may not play out as expected. The imagined and tested qualities of AI ‘in-the-lab’ are constructed as non-real in this framing, continuing a distinction often drawn between future digital technologies, and sociotechnical systems more generally, and the rest of the world (Massey, 1998).
More-than-real spaces and AI
Recent digital geographic research has offered the more-than-real concept as a way to navigate contradictory and paradoxical spaces that bring humans, technologies and more-than-humans into generative and destructive relationality. As a feminist digital geographic concept, the more-than-real builds on more-than-human scholarship, arguing that the new modes of human–digital relations that are a part of everyday life are not well understood if positioned within binaries (McLean, 2020a; McLean et al., 2016). Rather than thinking of humans as real and the digital as not-real (or virtual, immaterial, intangible), the more-than-real concept suggests that if we position the digital as reworking, remaking and renegotiating some aspects of our spatiality and place-based relations, then a fuller appreciation of the costs and benefits of digital technologies such as AI can emerge. Inspired by Massey’s (2004) critique of ungrounded geographies of responsibility, the more-than-real concept invites us to think of the roles of place and space in digital worlds. Massey (2004: 7) argued: ‘A regular litany of words accompanies the characteristic evocation of place; words such as “real”, “grounded”, “everyday”, “lived”. They are mobilised to generate an atmosphere of earthiness, authenticity, meaning.’
Space, in contrast to place, Massey says, is conceptualised as untethered and free-flowing. By querying this binary, Massey challenges us to think differently about these two key geographic notions. The more-than-real helps bring this thinking into digital geographic scholarship, by saying that digital spaces are also place-based, and no less real than non-digital spaces. If we think of the digital as more-than-real, we can counter the ‘placelessness’ that comes with thinking about, and making statements about, geographies of AI as well.
The more-than-real concept resonates with techno-feminist approaches, including Wajcman’s (2004) substantial contributions to the field. Situating AI involves grappling with the complex worlds that it winds through rather than perpetuating science fiction narratives of likely doom. Drawing on computer language, Wajcman (2004) says that ‘an emancipatory politics of technology requires more than hardware and software; it needs wetware – bodies, fluids, human agency’ (p. 77). As well as wetware, we need to better acknowledge more-than-human agency in all its forms, including groundware, dirtware, metalware, gritware and non-human-critter-agency. Making space for these multiple wares’ agency would enable us to build solidarity with more-than-humans (Gibson-Graham, 2011) and support their lifeworlds rather than continue to exploit and override these. There are tensions between calls for more open data, more information and more digital systems, and the Center’s statement that AI will ultimately produce (human) extinction; there are forms of extension and contraction coming from all sides that do not quite work and are irresolvable. Underlying this tension is an assumption that economic growth can run forever, and that innovation is required both to allow that growth and to somehow manage it. Here, the more-than-real lens clarifies such tensions and suggests ways that current and future human–digital relations might be imagined otherwise.
More-than-human and digital geographic scholarship is cross-fertilising to produce new ways to think across these often separate realms, paving the way for similar thinking with respect to AI. For instance, Searle et al. (2023) analyse how the ‘digital peregrine’ is forged by an intermingling of digital technologies and more-than-humans, including nestcams and predatory birds. A generative form of urban conviviality is produced as a result of these new interrelations, bringing the intimate and wild lives of these raptors into proximity with humans through digital devices. Care is not an explicit concept in Searle et al.’s analysis of the digital peregrine but it is implicitly present in the conviviality generated by the nestcam-bird assemblage. We could read this form of human and more-than-human relationality enabled by digital apparatus as a care infrastructure (Power and Mee, 2020) as it makes visible, and in intimate detail, the daily lives of birds of prey most humans would not otherwise have access to.
Elsewhere, the digitalisation of consumption offers Liu (2023) opportunities to reflect on more-than-real geographies, built around a recognition of how dominant the role of the prosumer has become. The prosumer blends the positions of producer and consumer, with the digital facilitating opportunities to occupy both simultaneously. Liu (2023: 4) gives the example of Alipay, a part of the Alibaba group that includes a ‘game-like platform that rewards its users with “green energy points” each time they reduce their carbon emissions’. With these points, prosumers can plant a tree on an app that is matched with a living tree being planted by a partner non-government organisation.
In more-than-human geographic research, Prebble et al. (2021) examine processes of engaging with and the making of smart urban forests in Australia. Their critical policy review of multiple examples of smart urban forest infrastructure found that sociotechnical assemblages ‘that facilitate smart urban forests tended to reinforce and re-solidify Western values’ (Prebble et al., 2021). At the same time, the agency of plants can be facilitated by more-than-real spaces as the digital can translate and capture more-than-human agency, through sensors (Gabrys, 2022) and communication infrastructure such as emails (Phillips et al., 2023). Gabrys (2022: 15) describes smart urban forests as ‘automated systems mitigating, ventilating, and conditioning the effects of environmental change’, critiquing extractive logics within networked green infrastructure in Mexico, while Phillips et al. (2023) emphasise the generative ways that expressing gratitude is facilitated by people writing emails to trees in Naarm/Melbourne. Both these examples demonstrate the possibilities of more-than-real spaces to affect more-than-human and human relationality: one deleterious, the other more constructive. The more-than-real concept offers an alternative to technological determinism as it suggests that, similarly to more-than-human thinking, there are a range of possibilities that come with sociotechnical assemblages. The (digital) future is not yet written.
Further alternatives to AI doomsaying
In terms of taking responsibility for how digitally mediated worlds emerge, the European Union is continuing its efforts to regulate these spaces. The proposed EU AI Act categorises AI into three levels of risk: unacceptable risk (which it proposes banning), high risk (which it proposes regulating) and not high risk (no regulation; Future of Life Institute (FLI), 2023). FLI, a not-for-profit organisation, shares developments and analysis of the AI Act and, on its front page, asks, ‘Why should you care?’ The proposed categorisations will likely carry their own strengths and weaknesses, but it is evident that there are alternative pathways to engaging with AI beyond what amounts to a version of fear-mongering.
Another constructive pathway is to centre how Indigenous peoples have critically engaged with AI opportunities. The Indigenous Protocol and AI Working Group (IPAIWG) have met to discuss the possibilities and constraints of AI and ask these questions to guide their dialogue: From an Indigenous perspective, what should our relationship with A.I. be? How can Indigenous epistemologies and ontologies contribute to the global conversation regarding society and A.I.? How do we broaden discussions regarding the role of technology in society beyond the largely culturally homogeneous research labs and Silicon Valley startup culture? How do we imagine a future with A.I. that contributes to the flourishing of all humans and non-humans? (IPAIWG, 2019)
The Group offer ‘Abundant Intelligences’ as an alternative to AI as currently conceived, building AI based on Indigenous Knowledges. Indigenous scholars, mostly from Turtle Island and Aotearoa/New Zealand, put forward a way to ethically create AI futures that are responsive to, and supportive of, Indigenous onto-epistemologies.
Related to the Abundant Intelligences approach, we could also foreground Zoe Todd’s approach to material entities in her piece ‘Indigenizing the Anthropocene’. Todd (2015) invites us to think about material-as-bridge, drawing on engagement with Indigenous artists’ work that centres human and non-human relationality and emphasises the agency of all things. The materials that are a part of the artworks that Todd considers are not ‘mere actants’ but ‘enlivened with spirit. With relationship. With sentience, will, and knowing’ (Todd, 2015: 248). Todd believes that by taking this stance, a different understanding of this Anthropocene moment can be established, one that decentres Euro-Western ways of knowing and doing, and looks at the agency of materiality as playing a role in alternative future-making. Bringing Todd’s thinking into conversation with the more-than-real (McLean, 2020a; Searle et al., 2023) also productively continues Kinsley’s (2014) thinking on the materiality of the digital, arguing that the virtual is never immaterial.
Conclusion
Is there any way we could consider this statement as an attempt to exercise care by seeking repair of unruly AI? The Center is asking us, in an escalated way, to consider the damaging potential of the multiple digital components that make up the constellation that is AI, and we could take this as a gesture of seeking for others to care for this issue. However, there is evidence of persistent carelessness at play in this domain as major stakeholders in AI perpetuate techno-capitalist models while simultaneously flagging future risks of this digital technology. For instance, Perrigo (2023) reports that OpenAI, a company that researches and deploys AI and whose CEO was a lead signatory of the Center’s statement, had been pushing for greater awareness of the likely harms of rampant AI while also asking for a ‘watering down’ of recent European Union legislation to regulate AI. The White Paper sent by OpenAI to the EU to make this case included the following remarkable paradox: ‘By itself, GPT-3 is not a high-risk system, but possesses capabilities that can potentially be employed in high-risk use cases’ (OpenAI, 2023).
It is worth thinking here about what Wajcman (2004: 106) states when concluding her book on technofeminism: ‘technological change is a contingent and heterogeneous process in which technology and society are mutually constituted’. If those who wrote the statement had thought about this contingency and heterogeneity, then the proposition of extinction might never have been made, at least not in such an alarmist, globalist and vague way.
The more-than-real spaces that are entangled in human and more-than-human worlds are complex, nuanced, ubiquitous, mundane and surprising. The contradictions of the more-than-real abound. By turning to digital geographic and geographies of care scholarship that is building bridges to navigate complicated sociotechnical systems, we build our understanding of the more-than-real and might think, again, about responsibility for possible future existential threats. So while the software and hardware of AI continue to present challenges to the same techno-capitalists who benefit from them, we can turn our attention to the wetware, groundware, dirtware, metalware, gritware and non-human-critter-agency that are also affected by these systems.
Future work in this area could also build bridges between digital geographic scholarship and critical computational approaches, extending Wilmott’s (2023) clear thinking on why digital practices and outputs tend to lean towards surveillance, oppressive and military-industrial infrastructures rather than engaging with, and enabling, more equitable digital worlds (Benjamin, 2019; Noble, 2018). The Center’s statement fails to offer an alternative path to those already well established by techno-capitalists, but the digital geographic, more-than-human and geographies of care literatures are mapping this out in productive ways. Geographies of care and responsibility can illuminate the spatial, place-based experiences of digital technologies, including AI, while the more-than-real concept helps navigate the contradictory conceptual and policy terrain emerging in relation to these same systems. Returning to Birtchnell’s (2021) critique that AI cannot (yet) demonstrate common sense, we can see that the Center for AI Safety also seems to demonstrate a lack of common sense in this wild statement. Furthermore, we might ask if AI, and its proponents, can demonstrate a capacity to care and take responsibility in a meaningful way, rather than evade doing so.
Acknowledgements
Many thanks to the reviewers and managing editor for their feedback and constructive support of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
