Abstract
The last two years have seen a marked rise of geographical work engaging with generative artificial intelligence (AI) across the discipline. Despite this recent surge of interest, there is a much longer history of geographers contributing to debates on AI, actively asking what automated technologies mean for the discipline and what geography can add to our knowledge of the implications of AI. In this report I provide a brief historical overview of discussions of AI within geography before illustrating how geographers have been thinking through the epistemological, empirical and methodological implications of AI. The report points to future research directions and opportunities that AI presents to geography, alongside the need for geographers to continue to ask critical questions of AI and actively engage with what these questions reveal.
Introduction
There has been a burgeoning of articles about AI and its application in various geographic fields, including but not limited to cities (Cugurullo et al., 2023), labour, surveillance and activism (Walker and Winders, 2024) and geographic techniques such as GeoAI (Janowicz et al., 2020). Engaging in this conversation despite its ubiquity is important because, as Couclelis (1986, 9) observed nearly 40 years ago of AI and geography, “(i)t is not often that geography is touched by a development having the potential to affect substantially all of the practical, technical, methodological, theoretical and philosophical aspects of our work.” My aim in this report, then, is not to reinvent the wheel but to synthesise the work that geographers have been doing across three lenses – the conceptual, the empirical and the methodological – illustrating how geographers are engaging with AI but also asking what opportunities AI opens for geographers.
In what follows, I first introduce how AI is understood within geography, tracing discussions within the discipline from their history to the present. Next, I summarise the debates suggesting that AI is driving an epistemological shift through the way it actively shapes the world and generates new forms of knowing. I then illustrate how geographers have engaged with AI empirically, identifying three (non-exhaustive) themes: AI as a site of extraction; AI as existential threat; and everyday encounters with AI. The report then turns to the methodological implications of AI, addressing both how AI is impacting traditional geographical methods and practice and the methods geographers use to understand AI and its impacts. Finally, I reflect on the opportunities that research with and about AI offers for digital geographies and conclude with some questions to prompt future research directions.
Defining AI and how we got here
In a technical sense, AI represents a form of computational “thinking” that shifts from the rules-based or numeric decision-making logics of computer programming to solving problems using example-based or symbolic reasoning (Campolo and Schwerzmann 2023). Because of this shift in reasoning, it is, in a sense, able to “learn” autonomously (Campolo and Schwerzmann 2023; Smith 1984). While AI has been a subject of research since the 1950s (and arguably has conceptual roots tracing back to Aristotle) (Smith 1984), it has taken advancements in the capacity and power of computing systems and the availability of data, among other things, to arrive at how we think of AI currently. Certainly, in public discourse, the rapid rise of ChatGPT in 2023 brought with it both the hype and the critique which underpin the discussion of AI’s powers of social reconfiguration. This has correlated with a surge of interest in AI as an area of geographical enquiry. A preliminary search of the Web of Science core collection using the search terms SU = (Geography) AND (Artificial intelligence) for the period 1980-2026 inclusive clearly illustrates the sharp spike in publications addressing the topic from 2023 onwards (883 in total); similar spikes appear in the social sciences (59,355) and earth and planetary sciences (14,162) (searched July 2025 – the latter two searched for “artificial intelligence” limited to subject areas).
Despite this recent spike in interest, AI is not a new topic of discussion within geography. For example, a series of provocations and discussions was published in The Professional Geographer in the early-to-mid-1980s and revisited 10 years later. As part of these discussions, Smith made early predictions about AI’s impact on geography in terms of explanatory, engineering and pedagogic functions (1984, 153). AI was seen as having potential for modelling spatial decision-making behaviour (explanatory), developing tools for spatial data analysis and decision-making including image processing (engineering), and in the uptake of computing systems that structured searching processes, knowledge and learning (pedagogical) (Smith 1984, 153-155). By the early 1990s, Cromley (1993) was reflecting on the advancements in GIS enabled by growing computational power and access, moving the field towards the “automated geography” that Dobson (1983) had outlined in the same journal 10 years earlier.
Geographical debates on automated technologies have intensified at times of technological change, including the quantitative revolution of the 1950s and 1960s, the growth of personal computing in the 1980s, GIS in the 1990s and the big data revolution of the 2010s (Attoh et al., 2021, 178). However, while there is a long history leading up to this current moment of AI in geography, “recent advancements in AI do appear to be positioning us at the precipice of social transformation” (Maalsen et al., 2023, 1) and are prompting further critical investigation. For Janowicz et al. (2020), what distinguishes the current success of AI approaches in geography is not just the technical components of software and hardware but “a new culture of data creation and sharing” (2020, 625). This culture is enabled by the growth of big data over the last 15 years but, especially important according to Janowicz et al. (2020), by a new orientation to sharing data, evidenced by an acknowledgement that data is more useful if it is accessible (2020, 626). Situated in the field of GeoAI, Janowicz et al. (2020) argue that this new culture of sharing and accessibility is critical for bringing together different domains focused on understanding and using spatiotemporal information (632). This has implications for geography more broadly, opening up opportunities for making connections across different data sets and to reveal “new insights, question existing theories, or even propose new theories altogether” (Janowicz et al., 2020, 627).
Arguably, AI’s rise as a hotly debated topic is because of its visibility across multiple aspects of life – cities, labour, nature, education, health and home among others – all of which are areas of key geographical interest. This increasingly pervasive characteristic has led Walker and Winders (2024) to approach AI “as a societal transformation that cannot and should not be contained to one field or subdiscipline within geography,” leading them to argue that “this emerging technology must be drawn into conceptual debates within all parts of our scholarly community” (2024, 227). Such a sentiment reflects earlier discussions on digital technologies being empirically, conceptually and methodologically relevant to all geographical disciplines (Ash et al., 2018).
As a way of anchoring geography’s relationship with technologies of automation over the last three decades, Attoh et al. (2021) describe two co-constituted strands of engagement. The first, ‘automated geographies’, refers to the impact of technologies on the practice of geographical research; the second, ‘geographies of automation’, describes work that addresses the way technologies reshape the production of space and social and economic order (Attoh et al., 2021, 167-168). Attoh et al. (2021, 178) predict that AI will continue to prompt new geographical debates and questions, including two key recurrent questions: “How will automation and AI alter the discipline of Geography, and how, in turn, will they create new geographies that require geographic inquiry?” In what follows, I illustrate precisely how geographers have been interrogating these issues.
AI epistemologies
Is AI a continuation of existing technological orientations that inform our conceptual frameworks, or does it represent an epistemological shift? While some fields treat AI as just another technical layer, parts of geography are leaning towards accepting it as an epistemological rupture. This is partly due to the assemblages of hardware, software, data and social attitudes that have coalesced to produce the kinds of actions and technologies that have long been anticipated and which have been central to provocations on the powers of AI.
Louise Amoore’s work provides a theoretical foundation. Amoore has consistently pointed to the changing landscape of the machine learning political order (2022), in work ranging from the algorithmic shaping and policing of borders (2006) to the impacts of AI on the politics of possibility and alternative worlds (Amoore, 2023; Amoore et al., 2024), leading Amoore et al. (2024) to argue that AI represents an epistemological shift. Bringing together new arrangements of computational logics and political architectures, AI is understood as actively determining different ways of knowing and making the world. Amoore and colleagues identify four aspects of AI that endure across both computational and political logics and underpin AI’s epistemological work (Amoore et al., 2024, 8). These are: (1) generativity, which produces a particular knowledge of the world that generates certain courses of action while foreclosing others; (2) latency, which, through the compression of data, surfaces previously unknowable relationships, sites or objects that can be governed; (3) sequences, a political ordering device which, in the context of AI, can deal with non-linear and broken sequences, increasing its capacity to pay attention to multiple spaces of information and generate new political imaginaries (2024, 6); and (4) pretraining and fine-tuning, which move beyond rule-based logics to prompts which elicit desired behaviours (2024, 7). More than just a technical layer, from Amoore’s perspective AI is an epistemic force that acts on the world in political ways, producing particular knowledges at the expense of others.
Distinguishing AI’s authority from previous forms of computational knowledge, Campolo and Schwerzmann (2023) suggest it is AI’s logic of example-based rather than rule-based programming that is responsible for AI’s authority and distinct epistemology. This means that while “programming rules prescribe explicitly in advance, in machine learning’s artificial naturalism, norms emerge recursively through exposure of models to examples at scale” (2023, 5). Importantly, the examples used for machine learning are not equivalent to data but sit somewhere between data and norm, being a “complex assemblage by which data is aggregated, formatted, and related to an objective so that norms can emerge to enable predictive or classificatory activity” (Campolo and Schwerzmann 2023). The political implications of an epistemological artificial naturalism are critical to interrogate, as its norms appear to emerge from reality through “the mediation of a technological assemblage of models and examples” rather than from human-made commands (Campolo and Schwerzmann 2023).
Epistemological shifts such as those detailed by Amoore, and by Campolo and Schwerzmann, are also argued to generate a situated algorithmic knowledge (Maalsen 2023). In suggesting this, Maalsen argues that algorithms can be positioned as collaborators, moving beyond human-centric models that see technologies as either tools or extensions of human agency (Iapaolo and Lynch 2025, 12). This turn to de-centering the human builds upon Hayles’ (2017) work on “unthought”, which pays attention to both human and nonhuman cognisers and the work that they do. Hayles’ work, as Iapaolo and Lynch (2025, 17) argue, offers geographers a productive way to interrogate AI by allowing them to trace “cognitive associations within broader assemblages” and thus to bring a wider range of actors into view, allowing for deeper insights into the production of, negotiation with and resistance to AI epistemologies. Understanding an AI’s knowledge as situated acknowledges that AI brings particular insights, based on its prompts, learning and data access, that can bring to light things we have previously been unable to surface, akin to the latency to which Amoore et al. (2024, 8) refer. By understanding this knowledge generation as situated, Maalsen (2023) argues, we are better able to grasp the partiality and fallibility of an algorithmic viewpoint.
Recent work by Indigenous scholars has also grappled with the epistemological effects of AI in relation to Indigenous worldviews. Pawlick-Potts (2021), for example, brings attention to the relational ethic of North American Indigenous worldviews, which eschews the anthropocentrism of much discussion around technologies and instead situates humans as equal partners with other entities, collectively responsible for pursuing social and environmental sustainability. A history of “practicing reciprocal relationships of mutual respect and aid with animate and inanimate entities”, Pawlick-Potts (2021, n. p.) argues, provides a paradigmatic base from which to build an approach to AI that accounts for its development and participation in society. The importance of relational approaches is also emphasised in a 2025 communique by Aboriginal and Torres Strait Islander researchers, community members, practitioners and allies, who gathered to envision an AI future relevant to their communities. Among other points, they make it clear that the health of Country is critical to their vision of an AI future, including addressing the environmental impact of AI technologies (Barrowcliffe et al., 2025, 4). Country, central to Australian First Nations ontologies, is a “vibrant and sentient understanding of space/place”, inclusive of entities beyond the human, and is an active participant in the world (Bawaka et al., 2014, 270). Here, AI must be developed in a way that prioritises the wellbeing of Country and its role in world making.
The work of Black scholars such as Ruha Benjamin and Safiya Umoja Noble has been critical in bringing attention to the epistemological work of algorithms, particularly in the reproduction of racial discrimination. Benjamin (2019, 5-6) labels this “the New Jim Code”: technologies that reflect and perpetuate existing inequities while being perceived as objective or progressive because of the supposed neutrality of technology. The guise of neutrality depoliticises the very material impacts that algorithms have on the world. In Algorithms of Oppression (2018), Noble brings an intersectional approach to show how search engines continue to promote racist narratives and the material disenfranchisement of Black women (see also Noble, 2016). This work shows how the legacy of colonial classification systems continues to operate through technological coding. In response to the need to produce a technological ecosystem that can generate an alternative worldview, and by extension an alternative way of organising the world, Benjamin puts forward a set of abolitionist tools of resistance. These include, for example, a reimagining of technology via speculative methods that can help to anticipate and intervene in new forms of subjugation (Benjamin 2019, 195).
Empirical entry points to understanding AI
The pervasive nature of digital technologies more broadly and increasingly their enhancement with AI means that there is no shortage of geographical work focused on AI (for example, see Walker and Winders, 2024; Caprotti et al., 2024). It is beyond the scope of this report to address them all and so instead I focus on three areas that highlight current debates and offer instructive empirical entry points. These are: AI as a site of extraction; AI as existential threat; and everyday encounters with AI.
AI as site of extraction
A key site of enquiry is work that unpacks and critiques the extractive nature of AI. AI’s embeddedness in extractive relations highlights the intricate entanglements between environmental and capitalist extraction. This work spans the environmental impact of AI, from critical mineral extraction to the intensive energy needs of data centres (Cellard et al., 2025; Levenda and Mahmoudi 2019), and the implications for capital and labour (Alvarez Leon 2021). For those focusing on the environmental impacts of AI, this work broadly comes under the umbrella of “climate AI” (Nost and Colven 2021).
AI and automated technologies are the “material engines of capitalism and its spatial organization” (Alvarez Leon, 2021, 221). Indeed, the tech sector has a vested interest in producing climate tech, positioning the climate crisis as another site of commodification through which to expand and grow (Nost and Colven 2021, 24; Bakker and Ritts 2018). Climate AI is thus a site of capital and natural resource extraction, underpinned by an uneasy tension in that it “reproduces the very problems that it claims to be solving: those of the climate crisis” (Nost and Colven 2021, 24). AI and data-driven climate technologies are incredibly resource intensive. Further, as Nost and Colven (2021, 24) observe, AI-enabled decision making also unevenly distributes climate funding, knowledge and resources, raising broader social justice questions (see also Machen and Pearce 2025). In questioning these impacts, geographers are continuing a long-held critical engagement with the extractive nature and infrastructure of technology (see Taffel, 2023; Wong, 2022).
The infrastructure that supports AI, such as data centres, has received significant attention for its energy-intensive and real estate resourcing needs (Cellard et al., 2025, 13). For example, Levenda and Mahmoudi (2019) show how natural resources such as water for hydroelectricity and cooling, alongside large amounts of cheap land, have concentrated the location of big tech data centres in the north-west of the United States. They show how digital capital reduces nature to that which it finds useful: “cheap energy, cheap water, cheap land and green imagery” (2019, 2). Geographers continue to interrogate AI’s position at the centre of extractive relationships, highlighting the uneasy tensions between environment, labour and capital and AI’s role in the ongoing exploitation of crisis.
AI as existential threat
Another empirical entry point is work that discusses the existential threat of AI and its imaginaries. An underlying theme of these debates is the ceding of human autonomy and its implications. Much of this work engages with and critiques the existential threat of AI in the sense that discourses of evolutionary competition between humans and machines pose the adoption and impact of AI as inevitable, deflecting attention from alternative pathways and uneven experiences (Alvarez Leon 2021).
In response to the Center for AI Safety’s (2023) statement on mitigating an AI-driven mass extinction, McLean observes that such narratives of fear risk “distraction from the everyday persistent unsustainability of big tech” (2024, 5). Geographers have also identified how the threat of AI is used to deflect critique from systemic causes of disruption to employment markets. For example, focusing on automation and robots as the driving factors of job losses obscures the structural factors that have created the uneven power geometries of the economy, and the contingent and multiple ways these play out across different geographies (Alvarez Leon 2021, 227).
Geographers do not always counter claims of AI’s existential threats, however. Cities have been one site where these debates have played out. In Frankenstein Urbanism, for example, Cugurullo (2021) suggests that AI could so significantly reshape the city, through the replacement of human jobs, activities and swathes of the population, that the city as we know it may cease to exist. Here it is not only jobs that are threatened: AI is also positioned as radically reshaping our very definition of a city. The urban robot is one such agent reconfiguring the city and eliciting responses that reflect the ways AI is perceived as threat or friend. In their research on sense of belonging in public spaces in Finland, Savela et al. (2024, 6) reported that participants felt more anxious and had a lower sense of belonging in urban spaces shared with robots than in spaces shared with people, or even in deserted spaces.
While such narratives of threat are integral to bringing critical attention to the nefarious, dystopic and extractive characteristics of AI, as Leszczynski (2019) has warned in relation to platform urbanism, such a focus risks a reductionist narrative of AI. Feminist approaches to technoscience have been proffered to broaden discussion of the existential possibilities of AI in ways that pay attention to the relational, mutually contingent and messy nature of AI in practice (Jackman, 2024; Lynch, 2024; Prakash, 2024). Doing so brings attention to the unevenness of the experience of AI and, as Prakash (2024, 1281) observes, to the inability of AI, so far, to “do without human labour”.
The potential existential threat posed by AI has offered an entry point for geographers to consider the impacts of AI world making. For some, it means actively bringing attention to the radical reshaping of spatial experiences it might catalyse, while for others it is an opportunity to bring our attention back to structural conditions which often underpin these threats. Regardless, both approaches remind us to pay attention to the relations of power that circulate within and through AI. It is only by calling attention to these that we can respond to the threats posed by AI.
Everyday encounters with AI
Geographers are also paying attention to the “mundane” applications of AI, highlighting the importance of understanding how AI is being deployed across and within the ordinary and familiar. The mundane, as Leszczynski reminds us, is a direct entry point for understanding the implications and consequences of digital technologies as they are experienced by people in the everyday (2020, 1195).
The urban is one site of emergent everyday applications of AI that both hint at its future and reveal the imperfect nature of its rollout. At the level of urban governance, land use planning, regulatory and compliance checks, zoning and permitting are just some of the urban domains into which AI is being integrated (Sanchez et al., 2023, 182). These are important bread-and-butter tasks for urban planners and are key to the functioning of the city. While these applications might not incite the same existential threat outlined above, they still require critical attention. For example, the discretionary power of planners to make decisions with regard to regulations and local context, and for the benefit of the public, is a key part of street-level bureaucratic work (Bullock 2019, 791). Yet research has shown that automation has led some bureaucrats in public organisations to report having less perceived discretion (De Boer and Raaphorst 2021, 53).
AI’s impact in urban governance can also be clouded by urban practitioners’ lack of understanding of AI, which can make it more challenging for “local governments to navigate a complex technological transformation” and limits the “creativity of policymakers in implementing relevant strategies and their ability to overcome challenges” (Yigitcanlar et al., 2021, 2). Ethical challenges such as bias, privacy, equity and transparency have been identified as priorities to address for the responsible integration of AI across urban governance (Sanchez et al., 2024; Yigitcanlar et al., 2021).
Paying attention to everyday encounters with AI problematises the notion of AI operating autonomously. Using people’s encounters with robots as an example, Sumartojo (2023) argues that these experiences are not seamless but depend on humans acting in ways that assist the robot. Reflecting on the frictions that arise from the messiness of everyday life and robots’ limited capacities to deal with such unpredictability, Sumartojo shows how people learn to anticipate and accommodate robots’ limits (2024, 165). This might mean, for example, moving furniture so a robot can navigate more easily (Vincent 2021) or, in the case of delivery robots in Milton Keynes, people adapting to share the footpaths with them (and in some cases pet them) (Valdez et al., 2023, 142).
Encounters with AI are therefore often characterised by mundanity and messiness, but this does not lessen the need for critical attention. The normalisation of these technologies, both in their application to everyday tasks and in our everyday encounters with them, has the effect of making their presence feel inevitable (Sumartojo 2023). But much like McLean’s (2024) warning about fear as a distraction, we must be careful not to become complacent because of this familiarity.
AI and geographical methods
AI is also influencing geographical methods. As Couclelis observed, AI is more than just a “promising young methodology”; it is something which is “making possible some things that were never thought possible before”, forcing us to rethink the ways in which we categorise our world and practice (1986, 9). We can consider this in two forms. First are the affordances of AI and the larger data sets that geographers can now access. Second are the methods used to understand AI, including the increasing uptake of creative methods to gain insights into the effects of AI and the knowledges it produces. I address these two lines of enquiry into AI’s methodological implications for geography in turn.
AI shaping geographical methods
As noted earlier, this current moment is not the first time AI has shaped geographical methods. Janowicz et al. (2020, 625) point us to Openshaw and Openshaw's (1997) work on how AI would change geographical enquiry and to the discussions in the 1980s on the implications of AI for geography (Couclelis 1986; Smith 1984). What is distinctive about this moment, according to Janowicz et al. (2020, 625-626), is not just the creation of larger data sets, exemplified by big data (Kitchin 2014), but, importantly, new cultures around data collection and sharing. This new “data culture” (Janowicz et al., 2020, 626-627) is characterised by enhanced data availability, reuse, synthesis and analysis. Within geography, this culture has been encapsulated by the subdiscipline of geospatial artificial intelligence (GeoAI), which integrates AI into GIScience and is described as having a three-pillar foundation of computing, geospatial big data and AI (Li and Ning 2023).
AI heightens the capabilities of computational cartographic work. GeoAI techniques such as machine learning can perform better at complex cartographic tasks than traditional statistical and computational methods, including, for example, better identification of roads, buildings and other geographic objects on maps (Kang et al., 2024, 600). The use of synthetic data to train AI can also overcome challenges where data is scarce, sensitive, expensive to collect or where privacy is a concern, increasing the shareability of data as a result (Romano, 2025, 3-4). Further, GeoAI can support the creativity of cartographic design work, modelling and transferring aesthetic and style elements into the mapping process (Kang et al., 2024, 600).
At the same time, Kang et al. (2024) point to growing ethical concerns in the use of AI in cartographic work, including commodification, bias, geoprivacy, responsibility and transparency. Indeed, the cartographic objects and knowledge produced at the intersection of AI and GIS should not be accepted uncritically. As work by Zhao et al. (2021) and Lin and Zhao (2025, 506) illustrates, deepfake satellite imagery and intentionally falsified geographical information have serious implications and amplify the long history of maps being manipulated or distorted in pursuit of certain agendas (see Monmonier 1991; Thatcher et al., 2024).
Methods for understanding the work of AI
While the preceding section focused on how AI is enhancing geographical methods, here I pay attention to the long-standing and emergent methods through which geographers are attempting to understand AI and the work that it does. Fortunately, geographers are well equipped to understand the impacts and workings of AI. The practices of making the digital material, visible and felt, as discussed in the first report on digital geographies (Maalsen 2024) and in Leszczynski's (2018; 2019; 2020) three reports on digital methods in this journal, are instructive. I expand the methodological discussion by focusing on the emergence of creative and interdisciplinary methods being applied to understand the work of AI.
Creative and arts-based practices are an increasingly popular way to evoke and understand peoples’ experiences with, and attitudes towards, algorithmic technologies such as AI. ‘Research creation’ – the use of “creative and post qualitative methods” – is an example of this type of approach, and is useful for exploring entanglements of data and digital technologies (Lupton and Watson, 2021, 467). Underpinning the application of creative methods in research is the assumption that “the act of creative making is not simply a medium to facilitate or communicate research findings: it is a research generation practice in itself” (Lupton and Watson, 2021, 467). In trying to understand “the felt presence of algorithms in everyday life”, for example, Lupton and Watson used collaborative zine making as a tactile, hands-on approach to disrupt understandings of algorithmic impact as individualised and mediated by the screen (2021, 470). Zine making and its associated practices of collage, writing and illustrating became a conduit for the researchers to better understand how participants encountered algorithmic impacts and framed personal data issues (Lupton and Watson, 2021, 470).
Others have approached understanding AI through speculative techniques, which are well suited to the emergent nature of AI technologies. In one project on robots, for example, Sumartojo et al. used collaged images of public spaces as “technologies of imagination” for sparking discussion on “how it might feel to be together in public space with robots” (2021, 101, 102). To make the collages, the research team layered images of robots onto familiar representational spaces, orientating them in ways that showed the robots as purposeful. Research participants used these collages to explore how they would feel if they encountered robots in the contexts depicted, including whether their inclusion in certain spaces “felt right” (Sumartojo et al., 2021, 102). Online speculative co-design workshops have also been used to explore participants’ assumptions about robots and opinions on what work they could do (Sumartojo et al., 2024, 7). Such approaches are useful for making sense of autonomous futures because the sense-making is shaped not just by the technologies but by the futures participants want to see. As Sumartojo et al. note, “speculative research mode is not only about materialising or stabilising possible futures, but instead a use of creativity to feel our way forward towards the forms of relation we want for the future” (2024, 15).
Interdisciplinary collaborations in pursuit of a better understanding of AI’s impact on spatial production and experience are also increasingly common. For instance, Lynch et al.’s (2025) exploration of robotic spatiality brought together an interdisciplinary project team from human geography, social robotics, digital art and curatorial practice, along with undergraduate and postgraduate student assistants from diverse disciplines (2025, 1365). Drawing upon multiple methods, including ethnographic research, interviews, the development of a socially aware navigation system for the robot, and reflective discussions, the team analysed their findings across different scales, from small-scale human-robot interaction through to social spaces and larger institutional and regional scales. The value of doing so was to assert the importance of broader approaches to understanding the impacts of “robotics in place and the places of robotics” (Lynch et al., 2025, 1365).
Exciting methodological opportunities exist for geographers asking questions of AI. The turn to creative and interdisciplinary methods offers a way of grappling with the challenges of studying technologies such as AI that are emergent, rapidly changing and sometimes difficult to articulate. Creative and interdisciplinary practices are critical for understanding the changing landscape and meanings that technologies such as AI generate.
Conclusion: We have been here before but we need to look where we are going
Almost 40 years on, Coucelis’s words that “there may be much more to the ‘computational revolution’ in geography than the mere popularization of a powerful and versatile data-handling tool” are still relevant (1986, 9; see also Dobson 1983; Smith 1984). In this report I have shown how geographers’ engagements with AI have been shaping geographical practice, opening new lines of empirical enquiry, and expanding epistemological boundaries. Such shifts have been occurring over a period of decades and dialectically – it is not simply a case of AI shaping geography but rather, as Attoh et al. (2021) observe, a process of mutually constituted geographies of automation and automated geographies.
Moving forward it is imperative that geographers continue to critically engage with the world-shaping implications of AI and associated technologies, as well as reflecting on how it might make possible “some things that were never thought possible before” (Coucelis 1986, 9), and the implications of this for research practice. This includes being open to working not only with other disciplines but with AI technologies themselves as (fallible) collaborators (Maalsen 2023). But it also means that more than ever, we need to hold the application of these technologies to account. By this I mean we need to hold to account our use of AI in the everyday work and research of the discipline, but also in the subjects of our empirical enquiries. The extractive nature of AI highlighted above demands that we not only ask questions of AI and its impacts but that we actually act on the insights our enquiries generate. It is not enough to merely write journal articles about AI and its negative impacts – we must take care as a discipline to translate our research in ways that share our insights and can work towards policy reforms. The work of Indigenous scholars, which decentres the human and highlights a relational view of worldmaking that prioritises social and environmental sustainability, may offer a way forward (Barrowcliffe et al., 2025; Bawaka et al., 2014; Pawlick-Potts 2021).
Considering the epistemological reorientations AI geographies may require, it is an opportune time to be curious about elements of critical post-humanist theories and Indigenous ontologies that decentre the human. These theories and ontologies may offer new viewpoints and allow us to ask a wider range of questions, as argued by Rose (2017) and Iapaolo and Lynch (2025). To conclude, then, I take my own advice and ask: how can digital geographies move forward the debates on and applications of AI in ethical and conceptually groundbreaking ways? What are the radical opportunities that thinking with and about AI offers us? And finally, what alternative pathways might arise – could these include a rejection of AI, and might that be the most ethical way forward?
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
