Abstract
In cities, the application of artificial intelligence (AI) is being directed towards transforming different aspects of urban life. These applications take material form in urban spaces, with autonomous vehicles (AVs) providing a prominent example. AI systems rely on large volumes of data on their surroundings to refine their algorithms and enhance predictive accuracy for operational efficiency and safety. However, such algorithmic learning and execution can present challenges when dealing with the unpredictable, complex, and dynamic aspects of urban spaces. Nature is a paradigmatic example of such unpredictability, because natural phenomena usually defy consistent patterns and precise data-based modeling. This paper introduces the idea of “frictional urbanisms” to examine the tensions between the smooth operational demands of AI and the inherent roughness of urban environments. In Southeast Asia, Singapore stands out as a first mover in smart city innovation. Despite Singapore's reputation as a regional and global leader in digital transformation, the testing of AVs has faced considerable challenges, particularly due to nature-based factors. Drawing on semistructured interviews with diverse stakeholders in the AI and AV testing landscape in Singapore, the paper shows how these challenges manifest in practice and examines their broader implications in and for the field of AI urbanism. Our study reveals that the integration of AI in urban spaces is fundamentally shaped by persistent frictions, which are not exceptional circumstances but constitutive of everyday urban life in autonomizing cities.
Introduction
The implementation of artificial intelligence (AI) in urban environments has started to garner significant scholarly attention. An evolving body of literature engages with urban AI, which encompasses a diverse collection of AIs such as robots, autonomous vehicles (AVs), city management systems, and software agents that acquire, process, and interpret data about the urban environment (see Batty, 2018; Bratton, 2021; Cugurullo, 2020; Cugurullo et al., 2023; Luusua et al., 2023). 1 The incorporation of urban AI into the everyday life, governance, and planning of cities, and how it shapes these aspects, has been termed “AI urbanism” (Cugurullo et al., 2023). The city is an important training ground for AI, as it provides both the large volumes of data such systems require and the material environment in which AI operates (Cugurullo et al., 2024). AVs exemplify the material manifestation of AI in cities, embodying its technological advances and the complexities of deploying it within urban environments. When AI systems confront the dynamic, unpredictable features of urban environments, frictions in integration may occur.
By examining Singapore’s experimentations with AVs, we highlight how these challenges manifest in practice and examine their broader implications for the evolution of AI urbanism. We specifically offer insights into the complexities of integrating AI with the unpredictability of urban nature. While AV trials in Singapore serve as the empirical focus of this paper, they are treated as a case study through which we explore a broader set of tensions that emerge from the deployment of AI in cities. We develop the idea of frictional urbanisms to capture the tensions that arise between the digital and urban realms when integrating AI into the city. Frictional urbanisms provides an analytical lens for understanding the slippages between data-driven AI technologies, which are designed to solve urban problems, and the material city with its infrastructures, political systems, sociocultural practices, and environmental factors. In this paper, we focus on one aspect of such frictional urbanisms: AI–nature frictions. While “nature” is a multifaceted and highly contested term (Castree, 2005), here we refer to nature as consisting of both animate and inanimate elements, including flora, fauna, weather systems, and other environmental processes that are not created by humans. The focus on AI–nature frictions emerged inductively from our empirical research on smart cities, the details of which are provided in the methodology section. By attending to this specific dimension, this paper offers a grounded entry point into the broader analytical framework.
The effective integration of AI into urban environments requires a smooth operational landscape determined by regulatory and governance frameworks, sociocultural practices, the built environment, and nature-based elements. Gaps and variations in these components can lead to frictions in AI integration. This issue becomes pronounced in the context of urban nature and the associated consideration of nonhuman lives, climatic conditions, ecological processes, and so on (Hannam, 2024), where unpredictable characteristics, coupled with other social, political, and technical factors, make accurate predictions challenging. AI or machine learning (ML) algorithms are trained on datasets through which they identify patterns, autonomously generate their own rules, and learn to produce outputs based on differential inputs (Allen, 2020). When faced with the unpredictability and roughness of urban nature that is not adequately represented by historical data, the smooth operational landscape of AI algorithms is disrupted. Balancing the smooth operational demands of AI algorithms with the inherent roughness of nature remains a critical challenge for the future of AI implementation and demands closer examination.
Discourses around AI are often shaped by anthropomorphic assumptions, overlooking more foundational and cross-cultural understandings of AI as a material process that manifests through specific applications, infrastructures, and interactions embedded in the physical world (Bratton, 2021). AI's interactions with urban nature remain an underexplored topic in the field of AI urbanism. While smart cities strive to regulate environments to smooth the operational landscape of AI (Gaio and Cugurullo, 2023; Marres, 2025), urban nature remains less controllable, leading to frictions in AI integration. The AI used in AVs relies on significant amounts of data on the vehicle's surroundings to refine its algorithms and enhance predictive accuracy for operational efficiency and safety. The quality and consistency of data are crucial for algorithmic decision-making (Vovk et al., 2005) if AVs are to outperform conventional human drivers by reducing human error (Lim and Taeihagh, 2019). In cities, special testbeds are allocated to provide controlled environments for AV trials. Nevertheless, when faced with unpredictable and complex scenarios triggered by the assertions of urban nature, AI can struggle to predict outcomes and execute driving tasks effectively. These interactions between environments regulated for smooth AV testing and the roughness of urban nature demand closer examination to understand how AIs interact with, adapt to, and conflict with the unpredictable elements of nature.
Singapore is recognized for experimenting with cutting-edge technologies to address urban issues. AI is no exception in this regard. The government has adopted a multi-faceted and balanced approach to AI implementation and governance that could serve as a common reference point globally (Lim and Chng, 2024). An emerging area of AI experimentation in Singapore is the testing of AVs. Singapore faces significant urban mobility challenges, including land and workforce constraints and an aging population (Diao, 2019; Koh et al., 2015). Due to these challenges, implementing innovative solutions like AVs has been deemed crucial (Land Transport Authority Website, n.d.). In KPMG's AV readiness index for 2020, Singapore was ranked first globally in terms of the presence of policy and regulatory frameworks, advanced technology, and consumer readiness (KPMG International, 2020). Despite this, AV testing and implementation have faced obstacles, especially in the area of optimizing the sensing capability of AVs under complex scenarios (Yong and Conejos, 2023). One significant obstacle has been navigating the natural environment and its unpredictability. Specifically, Singapore's tropical climate presents a range of severe weather conditions, including heavy rain, intense thunderstorms, and strong winds. The city-state also experiences two monsoon seasons a year—the Northeast Monsoon (December to March) and the Southwest Monsoon (June to September)—during which these conditions are heightened, posing significant challenges to AV trials.
The rest of the paper comprises five parts. The following section reviews the relevant literature on the integration of AI into urban environments, with particular emphasis on the concept of frictional urbanisms and the frictions encountered in AV testing. This is followed by a contextual overview of AI and AV implementation in Singapore. Subsequently, the methodology is outlined, followed by an empirical examination of Singapore's AV experimentation. The paper concludes with an overview of findings and reflections on the broader implications of frictional urbanisms.
Urban environments, AI integration, and nature-based constraints
Urban AI and AI urbanism
Cities worldwide have adopted different technology-driven forms of urbanism, including models such as the digital city (Couclelis, 2004; Ishida, 2002), the intelligent city (Komninos, 2011; Mitchell, 2007), and the smart city (Angelidou, 2014; Hollands, 2008). The smart urbanism paradigm has been widely adopted across contexts, significantly influencing urban planning, transforming urban spaces, and shaping everyday life. Nevertheless, the rapid advancement and integration of AI have introduced new possibilities for transforming cities, signaling a shift towards AI-driven urban futures (Caprotti et al., 2024). While AI has been around for several decades, its application in and to cities has only become prominent in recent years, spanning sectors like mobility, energy, and public safety, among others. Urban AI can be understood as “artifacts operating in cities, which are capable of acquiring and making sense of information on the surrounding environment, eventually using the acquired knowledge to act rationally according to pre-defined goals, in complex urban situations when some information might be missing or incomplete” (Cugurullo, 2020: 3).
AIs deployed in cities have been categorized as urban AI because they function in urban environments and rely on resources and infrastructure like big data, the Internet of Things (IoT), electrical grids, and server farms that are concentrated within cities (Cugurullo et al., 2023). AI integration also creates new imaginaries of urban life. An example of this is Yokohama, Japan, which was selected as a “Future City” and offers a perspective on government-initiated experimentation in the realm of AI and robotics to achieve “Society 5.0” (Boenig-Liptsin, 2017). The city has served as a regulatory sandbox for AI in eldercare, with government support facilitating trials like Fujitsu's UBIQUITOUSWARE, a multi-sensor platform for monitoring the elderly, to enable intelligent healthcare and integrate AI-enabled robotic caregivers (Boenig-Liptsin, 2017). AI's integration into cities shapes urban planning, resource allocation, and regulatory frameworks. At present, the issues of AI ethics and governance are being prioritized around the world (Lim and Chng, 2024) to enable the seamless deployment of AI with minimal risks. With increased AI deployment, understanding its impact on community dynamics, privacy, and security also becomes essential.
While cities increasingly depend on AI to address urban problems, AI also relies on the city for learning. Unlike carefully curated datasets modeled on computers, cities provide vast volumes of real-world data for AI training. AI can therefore learn and develop further by analyzing complex urban dynamics (Cugurullo et al., 2024). AI's integration into urban spaces reflects a significant shift from traditional automation to advanced forms of autonomy. Based on inferences drawn from datasets, advanced urban AI can operate autonomously and make decisions without direct human oversight (Cugurullo, 2020; Tiwari, 2025). This transition to autonomous functioning separates AI urbanism from earlier paradigms of technology-driven urban development, such as smart urbanism. While urban AI refers to the application of AI within cities, AI urbanism explores how the implementation of AI technology influences city life, governance, and planning, and attempts to capture the transformations that will take place in future cities as a result of urban AI (Cugurullo et al., 2024).
In positivist smart city discourses, AI is frequently positioned as a revolutionary solution to urban challenges that can seamlessly manage various functions through automation, prediction, and optimization (IBM Newsroom, 2025). However, this narrative overlooks the intricate interplay of smart systems with urban infrastructures, social networks, natural systems, and more. Challenges associated with related technologies, such as the IoT, illustrate these complexities: unreliable communication due to device mobility, sensor failures, large-scale coordination problems, and limited adaptability all hinder the seamless implementation of digital systems in urban spaces (Omrany et al., 2024; Zikria et al., 2021). AI is not synonymous with IoT (though it often depends on IoT-generated data); it involves more complex processes of data analysis, learning, and autonomous action. These functions are further complicated by the dynamic, unpredictable nature of urban spaces. Palmini and Cugurullo (2023) argue that the idea of the city as an autonomously calculable unit is problematic and undermines other forms of intelligence, human and more-than-human, that are present within the city. Narratives of AI's superior analytical capabilities can marginalize alternative, yet vital, forms of knowing and relating to urban environments and their interdependencies. This technologically reductionist view of urban life risks oversimplifying the complexities of city dynamics. Building on this critique, we show how overlooking nature-based elements can hinder AI integration in cities, causing disruptions. Effective deployment requires acknowledging the dynamic interactions and frictions between AI systems and the natural environment.
Understanding AI–nature frictions through frictional urbanisms
In geography, the idea of friction is typically used in relation to mobility, where frictions (or obstructions) can slow down movement but also shape how movements occur (Cresswell, 2014; Marston et al., 2005). Separately, anthropologist Anna Tsing (2005) considers friction as a “grip of encounter” through which universals, or globally circulating ideas like capitalism, can gain real-world traction. While Tsing uses the mobility of universals to critique the idea of a frictionless world, her analysis broadly centers on the complex encounters and tensions between global forces and local specificities. Friction has also been used to explain data friction, which entails the social, technical, and political challenges that occur in data flows between people, systems, or organizations, causing delays, distortions, and losses (Bates, 2018; Edwards et al., 2011). Furthermore, Rose (2016) uses the idea of friction in the context of digitally mediated cultural production and circulation, where friction occurs at the interfaces between social, digital, and technical aspects. While previous works on friction draw on its physical origins and speak to various notions of mobility, here we use friction in a more conceptual sense to understand the tensions or nonalignments that occur between the implementation of AI, which is considered a universalist, revolutionary technology, and dynamic, diverse urban contexts. Our notion of friction therefore resonates most with Tsing's (2005) idea of friction as capturing interactions between the universalist logics of global processes and contextual complexities. These zones of interaction never work smoothly (Galloway, 2012, as cited in Rose, 2016) but encounter various kinds of friction.
Shannon Mattern (2017) writes that the current paradigm of comparing a city to a computer is appealing “because it frames the messiness of urban life as programmable and subject to rational order.” However, complex urban dynamics sharply contrast with such technological metaphors of order and smoothness. These complexities cannot be reduced to calculable units, challenging the deployment of data-driven technologies like AI. This nonalignment can disrupt technology integration, resulting in what Leszczynski (2020) calls “glitches” in the system. Unlike traditional smart technology, AI systems need to make autonomous decisions based on datasets (Cugurullo et al., 2023), with the goal of developing superintelligence that goes beyond human capability. AI's promise lies in its adaptability and learning capacity, traits that also make it particularly vulnerable to irregularities that cannot be easily modeled from training data. These disruptions challenge AI's smooth operational landscapes in high-stakes urban domains such as transportation, public safety, and environmental management.
Cities are inherently complex entities because of overlapping systems and concentrations of stakeholders with varied interests, leading to unpredictable outcomes of various interactions. While technologies such as AI attempt to smoothen the roughness of such interactions, this frequently lies beyond their calculable capacity, leading to frictions in their urban application. Our notion of frictional urbanisms captures how the deployment of AI-driven autonomous systems encounters challenges due to incompatibilities between the smooth operational logic of AI and the complexities of existing political, social, and environmental systems within cities and the interactions therein. As urbanism is the study of what happens inside cities, their form and function (Rogers, 2020), frictional urbanisms reflects on what unfolds within autonomizing cities. The concept encompasses the unforeseen outcomes, errors, disruptions, or failures that occur while integrating AI with the material city and its multiple systems, covering both the technical disruptions and the structural challenges encountered in autonomizing cities. Leszczynski's (2020) conceptualization of the glitch covers “accidental,” “momentary” dysfunctions in platform-urban configurations that are unexpected and lead to both “error and erratum” (Russell, 2012, cited in Leszczynski, 2020). They represent marginal platform experiences that open up opportunities for “mundane tactical manoeuvres” (Leszczynski, 2020: 197). Ideas around “glitch-thinking” (Carraro, 2023) have been extended by scholars to understand how disruptions in platformization can lead to openings for alternative practices and resistances (Carraro, 2023; Leszczynski, 2020; Leszczynski and Elwood, 2022).
Scholars have also discussed breakdowns and disruptions in smart city systems that arise from unstable technologies, fragmented policies, and complex urban realities, producing unexpected and often nonsensical outcomes that challenge dominant narratives (Gabrys, 2016; Liu, 2025). However, the frictions in AI's materialization are not just about user resistance or creative maneuvers. They reveal systemic misalignments between the smooth operational logic of AI and the real-world systems it is meant to operate within. The idea of frictional urbanisms relates to the broader tensions and challenges inherent in integrating the digital and the material in autonomizing cities, which can be social, regulatory, environmental, or technical. While ideas of glitches, disruptions, and breakdowns reveal ongoing instabilities within digital-urban systems, there is a hint of exceptionalism in these concepts. In frictional urbanisms, the focus is on the nature of AI technology itself, which is based on data-driven calculations and predictions. Frictions are not necessarily exceptional circumstances, but are rather constitutive of the everyday dynamics in autonomizing cities, where the urban and digital realms are in a constant state of tension and ongoing adaptation.
Table 1 presents a broad classification of types of frictions that can occur in AI implementation. These categories are not exhaustive, nor are they fixed. Rather, they are overlapping and open-ended, intersecting in complex ways.
Categorizing “frictions” in frictional urbanisms.
Through its various manifestations, frictional urbanisms highlights the situated misalignments that shape interactions between AI and urban spaces. As a conceptual lens, it pushes scholarship on AI urbanism to adopt a more grounded, granular reading of how autonomous systems encounter and navigate real-world complexities. It also brings into focus the centrality of “frictions” in the operation of urban AIs, not only in digitally advanced contexts like Singapore but across diverse cases. We use the plural “urbanisms” to emphasize the multiplicity and diversity of frictions, or combinations of frictions, that can occur in autonomizing cities.
As stated earlier, we focus on one aspect of frictional urbanisms: frictions related to the natural environment, or nature-based frictions. Despite advances in AI research on environmental predictability, further efforts are needed to build more trustworthy AI components that ensure greater accuracy (Albahri et al., 2024). For example, AI-based forecasting of phenomena like natural disasters has limitations because of the “roughness” of nature. Roughness refers to the inherent unpredictability and erratic character of natural phenomena that defy consistent patterns and resist precise modeling. This includes factors like extreme weather conditions, irregular vegetation growth, and the unpredictable behavior of living organisms, which cannot be modeled smoothly. While AI shows promise in weather forecasting, its ability to predict extreme events intensified by climate change is hindered by over-reliance on historical data (Hannam, 2024). As temperatures rise, phenomena like heatwaves, hurricanes, and wildfires become harder to predict (Harvey, 2023). AI faces similar challenges in dynamic, unstructured natural settings like wildlife monitoring and conservation, hindering real-time adaptation and decision-making (Rathi, 2025). The lack of clear patterns in such cases can limit AI's capacity to process variations and respond to environmental changes, revealing the limits of AI models.
These frictions raise critical questions about relying on AI for high-risk decision-making, such as disaster response and transportation. A similar concern exists with AVs, which require a high level of safety and trust in AI to navigate varied, sometimes unknown, situations effectively. Understanding how AVs interact with surroundings involving natural components is crucial, as it highlights issues that can impact performance and safety. It is not only extreme weather conditions that present exceptional challenges to AI implementation; even seemingly mundane, everyday obstacles can significantly disrupt operations. Yet, much of the existing literature on the intersection of AI and nature in urban environments focuses primarily on either the positive or negative implications of AI for the environment. Some of the literature focuses on AI as a potential solution to environmental problems like climate change (Cowls et al., 2023; Leal Filho et al., 2022), while other work examines AI's impact on the environment itself through excessive energy use, waste generation, and heating (see Brevini, 2020; Wang et al., 2024). This emphasis on environmental repercussions can overshadow the equally important need to understand how AI must adapt to the unpredictable elements of urban nature.
Frictions in AV testing in urban settings
AV trials build on the long trajectory of integrating smart technologies into urban environments, a process that has reshaped governance and material space (Shelton et al., 2015; Townsend, 2013). But they also mark a shift, as the AI systems embedded in AVs are not only collecting and communicating data; they are designed to learn from and act independently within urban environments. Driverless cars have transitioned from a futuristic idea to a reality, with trials and pilots underway. The promise of AVs has generated significant buzz for their potential to revolutionize urban transportation by facilitating long-distance and challenging commutes (Cusack, 2021). It is also expected that they will enhance the mobility of the elderly and differently abled, and reduce road accidents and carbon emissions. Despite such optimistic projections, AV trials face many hurdles. AV trials are not just technological experiments: they are also urban experiments, as testing involves interacting with urban infrastructures, socionatures, and regulatory frameworks (Dowling and McGuirk, 2022). Urban spaces are where AI's actions are increasingly materialized (Cugurullo et al., 2023). The testing of AVs provides a way to understand how these actions materialize and highlights tensions. This is crucial for addressing safety concerns and enabling effective navigation in dynamic urban environments.
AVs are fueling ongoing debates about responsibility, liability, and accountability for errors within complex systems of AV testing and implementation that involve interactions between autonomous machines, urban environments, and a wide range of human actors, including users, companies, programmers, regulators, pedestrians, and human drivers (Aoyama and Leon, 2021). Ethical and technical concerns have been raised over algorithmic decision-making for AVs, including the complexity of AI's nonlinear learning from datasets and the risk of inheriting biases from those datasets (Lim and Taeihagh, 2019). These concerns have been heightened by accidents and malfunctions involving AVs. For example, after the recent launch of Robotaxis in Austin, Texas, there were reports of the AVs driving erratically, entering wrong lanes or stopping unpredictably, despite having safety monitors onboard (Quiroz-Gutierrez, 2025). AV testing is often limited to controlled environments like specific roads, precincts, or dedicated testbeds (Dowling and McGuirk, 2022), which can exclude the complex social, political, and environmental interactions that influence AI's operational landscape. AVs encounter various challenges in making decisions within dynamic environments that are filled with uncertainties and unpredictable movements (Pendleton et al., 2017). When it comes to nature-based components in urban environments, such as nonhuman lives, vegetation growth, or precipitation, algorithmic prediction and decision-making in AVs can be challenging. Adverse weather conditions such as hailstorms, heavy fog, and snowstorms can hamper the ability of AV sensors to perceive the surroundings and cause difficulty in executing perception tasks such as object recognition (Zhang et al., 2023). Initial AV testing was conducted only during clear weather conditions due to safety concerns (Sundarajan, 2016). However, as AV technology advances, there is an increasing focus on incorporating these unpredictable environmental factors into testing protocols for more accurate decision-making.
The contrast between the idealized conditions of controlled testing environments and the complex, variable character of nature-based factors in urban environments creates tensions in the AV testing landscape. These tensions reflect misalignments between AI systems and the material and ecological variability of cities; their mundane, everyday character constitutes the basis for frictional urbanisms. In the next sections, we examine the case of AV testing in Singapore, identifying the constraints of the natural environment and the challenges they pose when integrating AVs and AI into urban environments.
Evolution of AI integration and AV testing in Singapore
Singapore aims to establish itself as a global leader in AI research and implementation (National AI Strategy 2.0, 2023). In 2019, the Singapore government launched the first National AI Strategy (NAIS) with the aim of deepening the use of AI for economic transformation. With this, massive investments were made in AI research and development, along with the launch of the world's first Model AI Governance Framework (AI Verify Foundation, 2024). Subsequently, with further developments in the field of AI, the National AI Strategy 2.0 (NAIS 2.0) was released in 2023, aiming for AI excellence in crucial fields such as health and climate change, as well as empowering communities and businesses (National AI Strategy 2.0, 2023).
A key aspect of Singapore's vision for AI deployment is the adoption of AVs, which was publicly introduced in June 2013 with the vision of driverless electric cars replacing private vehicles (Tan and Taeihagh, 2021). AVs are envisioned as a sustainability fix in the city-state owing to their capacity to optimize route selection and reduce carbon emissions. Like other infrastructural and technological domains, the AV development and testing landscape in Singapore is dominated by the public sector, with the Land Transport Authority (LTA) playing a key role. The LTA leads the development of Singapore's AV deployment roadmap and has facilitated on-road AV trials since 2015 to enhance the public transportation system through AI technology. Moreover, the LTA has formed a joint partnership with the Agency for Science, Technology and Research (A*STAR), called the Singapore Autonomous Vehicle Initiative (SAVI), to provide a technical platform for industry partners and other stakeholders to carry out R&D work and test AV technology (CETRAN website, n.d.).
To support AV trials in Singapore, the LTA, Nanyang Technological University (NTU), and Jurong Town Corporation 2 (JTC) launched the Centre of Excellence for Testing and Research of Autonomous Vehicles (CETRAN) in 2017. CETRAN is responsible for developing Self Driving Vehicle (SDV) standards by running AV trials. It manages a 1.8-hectare AV test center, constructed by JTC to facilitate the advancement and implementation of AV technology in the city-state (CETRAN website, n.d.). At CETRAN, NTU is in charge of operating the test circuit and AV prototypes, which navigate various local conditions, including traffic rules, traffic behavior, road design, and Singapore's tropical climate (Smart Nation Website, n.d.). CETRAN's testbed thus attempts to recreate the real-world conditions that AVs will eventually encounter in public spaces, allowing the vehicles to learn and adapt to these conditions in a controlled setting before being deployed for public use.
With the launch of Singapore's Smart Nation Initiative in 2014, smart urban mobility solutions like AVs have been prioritized for enhancing public transportation in Singapore (OpenGov Asia, 2017). Smart urban mobility is one of the national strategic projects of the Smart Nation, under which the government aims to harness AV technology for enhancing convenience and accessibility of public transportation (5 national projects for 1 Smart Nation, 2018). Building on this foundation, with the recently launched Smart Nation 2.0 initiative, the government places even greater emphasis on the deployment of AI-based solutions such as AVs in Singapore.
Over the years, both the number of trial locations and the range of roads available for testing have steadily increased in Singapore (Land Transport Authority Press Release, 2019). Small-scale AV trials have been taking place in areas such as Sentosa Island, the National University of Singapore campus, Jurong Island, and One-North (Land Transport Authority Website, n.d.). AV implementation for enhancing public transportation in Singapore is currently between the testing and pilot phases, with areas like Tengah, Punggol, and Jurong Innovation District identified for the initial deployment of AV buses and shuttles (Land Transport Authority Website, n.d.). Furthermore, CETRAN has approved smaller AVs for deployment on public roads to carry out functions like food delivery and the surveillance of public places (Yong, 2022). Yet, concerns remain about AI's ability to accurately recognize human behavior in complex situations and about the impact of extreme weather conditions, intensifying the focus on safety issues related to AV deployment.
Methodology
This article is based on a larger research project on the politics of smart city knowledge transfer in Southeast Asia (see Das et al., 2024, 2025a, 2025b, 2025c, 2025d; Grimley et al., 2025). The fieldwork for this project was carried out across multiple cities in the region. Singapore served as the foundational case for this research, given its prominent role as a regional leader in technocratic governance and exporter of smart solutions. Within Singapore, we explored the dynamics of adoption of cutting-edge technological solutions and related challenges, including the evolving landscape of AI implementation and AV testing.
A total of 67 semistructured interviews were conducted in Singapore between August 2022 and February 2025. This paper draws on 18 of these interviews, carried out with public-sector representatives, private-sector actors, and researchers. We interviewed government representatives and researchers involved in AV testing from organizations such as JTC, A*STAR, and CETRAN. Additionally, our sample included both public and private stakeholders engaged in AI governance, research, and implementation, such as the Infocomm Media Development Authority (IMDA), Surbana Jurong, and Amazon Web Services. The sampling followed a snowball approach, starting with pre-existing contacts. In the later stage of the project, purposive sampling was adopted to fill gaps in the sample. While identifiable details of the interviewees cannot be shared due to ethical considerations, a general breakdown of the participants is given in Table 2.
Table 2. Breakdown of research participants.
The interviews explored broader themes related to AI research and integration in urban environments, while also delving into specific progress and challenges associated with AV testing in Singapore. An interview guide was prepared and adapted for each interview; it was used flexibly to allow for open-ended responses and the exploration of emergent themes. The questions broadly covered a range of topics, including the interviewees' professional backgrounds, their experience with AI and AV implementation, the status of AV testing and the challenges associated with it, and their perspectives on future developments in the field. The interviews were analyzed using thematic analysis, allowing for the identification of key themes and patterns related to participants' experiences with testing AI solutions and AVs, along with the nature of the challenges involved. The process involved manually coding the transcripts and developing broader themes such as: nature-based constraints, unpredictable elements in urban settings, hypersensitivity of AI, and the challenges of AV testing in tropical environments. The study was approved by the university's Institutional Review Board. Written consent was obtained from every participant using a participant information sheet and consent form. The names of the interviewees have been removed to maintain anonymity.
Urban complexities, nature-based constraints, and AI deployment in Singapore
In this section, we discuss the integration of AI in urban environments, with a focus on the nature-based constraints in Singapore that affect AV testing efforts. While the main case study is about AV trials, the discussion extends to broader dimensions of AI deployment in the city, drawing on diverse stakeholder perspectives to advance the conceptual framework of frictional urbanisms. The section is divided into three parts. First, we explore the frictions that arise when AVs are tested in dynamic urban spaces in Singapore, where unpredictable more-than-human factors disrupt the functioning of autonomous systems. Second, we discuss the issue of AI adaptability in real-world contexts, highlighting the tensions between the smooth operational demands of AI and the complexities of urban life. Finally, we turn to the issue of hypersensitivity in AI and the often mundane nature of tensions encountered in AV testing in Singapore, demonstrating how even seemingly basic problems and incidents reveal deeper structural misalignment inherent in autonomizing cities.
AVs in dynamic natural environments
Geographies of digital technology extend beyond the digital realm and shape material spaces (Sheppard et al., 1999, as cited in Ash et al., 2015). What distinguishes AI from other technologies is its more direct and tangible impact on urban spaces. For example, AI artifacts like autonomous cars and robots need substantial urban space to operate effectively (Cugurullo et al., 2023). This in turn impacts the design and planning of urban spaces to make them more conducive to AI operations (Marres, 2025). AI systems require carefully designed landscapes with minimal frictions in which to learn and adapt. AVs and delivery robots reveal AI's material impacts, as they require urban practitioners to significantly adjust infrastructure, as well as traffic and pedestrian regulations (Koh and Yuen, 2023). As cities try to embed AI technology into the materiality of everyday urban life, a shift is underway towards the creation of more controlled or "manicured" environments. However, most urban conditions remain dynamic, and the likelihood of disruptions to AI systems remains high. We spoke to a roboticist from A*STAR (Agency for Science, Technology and Research) working on service robots in Singapore, who shared his views on the interactions of autonomous robots with dynamic environments in the city. Highlighting the importance of these exceptions, he said: To robots if there is 1% failure you are bound to get that 1% all the time. For instance, you may not recognize A, may not recognize B because the training data did not contain that environmental aspect or there is a variant that it has never seen before. So the norm does not matter, the exception does. (Personal interview, 13 November 2024)
Even small exceptions can cause significant frictions as AI systems operate in real-world environments. The margin for error in autonomous systems is low, leading to frequent frictions in autonomous cities. AI can struggle to recognize new objects or variations absent from its training data, leading to significant challenges. These frictions are more pronounced when dealing with natural elements, including the flora and fauna of the city. AI's interactions with humans have garnered scholarly attention (March, 2021; Mou and Xu, 2017), especially as various AI-enabled virtual assistants, chatbots, and robots interact with us every day. The AI used in AVs requires constant awareness of the behaviors of human drivers and pedestrians to navigate safely. However, the challenges of smoothly integrating AI into urban spaces transcend human interactions. The adaptation of AI needs to take into account the complex ecosystem of cities, which are spaces of cohabitation for more-than-human life (Edwards et al., 2022). Urban flora, such as trees and vegetation, can significantly shape the material environment in which AI operates. These elements can create obstructions since they are not static and are much less predictable than built environments. In this context, the physicist from CETRAN, whom we interviewed, explained the impact of these natural elements on AV testing in Singapore: The moment you have no buildings but a lot of trees, the opposite happens. I can see the tree and I roughly know where I am. If you and I walk through a park, you roughly know where you are, you recognize this plant and the other things. The computer base mapping doesn’t work, because the plant is growing all the time, so if a plant gets 10 cm taller in three weeks, the computer doesn’t recognize it anymore, because it still expects the plant that was mapped. (Personal interview, 26 April 2023)
Technical assumptions of AI's smooth, predictable operation in cities are limiting, and can easily lead to frictions in green urban spaces. Singapore's environmental protection goals resulted in the state-led "garden-city" initiative (Han, 2017), which focused on building parks and planting trees. With the transition to the "City in Nature" approach, Singapore has intensified its efforts to integrate nature into urban spaces by expanding park networks, naturalizing the management of green spaces, and bringing nature closer to the built environment (Singapore Green Plan 2030, n.d.). While much of this involves the creation of manicured green landscapes, not all aspects of nature can be fully controlled. There have been many instances of wildlife causing nuisances or fallen trees blocking roads, creating obstructions in the material environment (Low, 2021). But these nature-based factors can create disruptions in the digital realm as well. The physicist from CETRAN gave the example of Gardens by the Bay, an urban park in Singapore, which operates an autonomous shuttle service for visitors called "Auto Rider." The Auto Rider is able to navigate independently along a pre-determined route. Since Gardens by the Bay has few concrete features and is covered with greenery and plants, the landscape keeps evolving. Therefore, the operators need to measure and remap the entire route area almost every month to ensure the seamless operation of the vehicle (personal interview, 24 April 2023). This reveals a gap in AI's ability to adapt to an evolving natural landscape autonomously: it regularly needs human intervention, through updated mapping, to ensure accurate operation.
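The remapping problem described above can be rendered as a minimal sketch (our own illustrative code, not the Auto Rider's actual mapping pipeline, whose internals are not public): a vehicle checks current observations of landmark heights against a stored map, and once vegetation growth pushes any deviation past a tolerance, the route must be resurveyed.

```python
# Toy illustration of map drift caused by growing vegetation.
# All values and the tolerance are hypothetical; this is not the
# Auto Rider's actual localization or mapping software.

def needs_remapping(mapped_heights, observed_heights, tolerance_cm=5.0):
    """Return True if any landmark deviates from the stored map by
    more than `tolerance_cm`, i.e. the map can no longer be trusted."""
    return any(
        abs(observed - mapped) > tolerance_cm
        for mapped, observed in zip(mapped_heights, observed_heights)
    )

# A route mapped three weeks ago: landmark (plant) heights in cm.
mapped = [120.0, 95.0, 180.0]

# The same plants today: one has grown by 10 cm, as in the interview.
observed = [121.0, 96.0, 190.0]

print(needs_remapping(mapped, mapped))    # freshly mapped route: False
print(needs_remapping(mapped, observed))  # grown vegetation: True
```

The point of the sketch is that the system has no concept of "the plant grew"; any deviation beyond tolerance simply invalidates the map, which is why the route must be remapped by humans almost every month.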
The “transurbanist” view of AI (Cugurullo, 2020; Palmini and Cugurullo, 2023), which recognizes AI's potential to exceed human capabilities, overshadows the fact that there are other critical intelligences (human and more-than-human) that can influence AI and its operations. The interplay between these intelligences can generate frictions, as each operates on different logics and priorities. Urban intelligence can take multiple forms and cover information embedded in the city's built and natural environments (Mattern, 2017). Dismissing the dynamic aspects of the natural environment, a key component of the city's ecology, can cause multiple glitches in autonomous systems, hampering their decision-making abilities. For the safe deployment of AVs for public use, these autonomous systems must effectively perceive and respond to the natural intelligence found within the city.
Adaptability of AI to real-world scenarios
AI's interaction with the urban environment, which begins from the testing phase, is a critical aspect of AI deployment to ensure its adaptability to the material city. AI must not only sense the surroundings but also continuously learn from these interactions and adapt to make informed decisions in real time. We interviewed various researchers working on the implementation of AI technology in Singapore who spoke about the dynamic process of AI adapting to its surroundings. A professor from the National University of Singapore's Mechanical Engineering department, working on AI and robotics, outlined four key elements necessary for effective AI integration: A robot has four important components. Firstly, understand the environment. Number two, plan what to do. Number three, execute the plan. Number four, learn from it so that next time you do these three things, you will do it better in a way, just like how humans develop as we grow, right?… Bringing the vision of robotics out of the factory into everyday life, we need to address these four things. (Personal interview, 8 August 2023)
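The four components outlined by the professor correspond to the classic sense–plan–act–learn cycle in robotics. A minimal schematic sketch of such a loop (a generic illustration under our own assumptions, not any specific system discussed by the interviewees) might look like:

```python
# Generic sense-plan-act-learn loop (schematic illustration only;
# names and logic are hypothetical, not a deployed robot system).

class SimpleAgent:
    def __init__(self):
        self.experience = []  # memory that future cycles can draw on

    def sense(self, environment):
        """1. Understand the environment."""
        return {"obstacle_ahead": environment.get("obstacle_ahead", False)}

    def plan(self, perception):
        """2. Plan what to do."""
        return "stop" if perception["obstacle_ahead"] else "drive"

    def act(self, action):
        """3. Execute the plan."""
        return action

    def learn(self, perception, action):
        """4. Record the outcome so the next cycle can improve."""
        self.experience.append((perception, action))

    def step(self, environment):
        perception = self.sense(environment)
        action = self.plan(perception)
        executed = self.act(action)
        self.learn(perception, executed)
        return executed

agent = SimpleAgent()
print(agent.step({"obstacle_ahead": False}))  # -> drive
print(agent.step({"obstacle_ahead": True}))   # -> stop
print(len(agent.experience))                  # -> 2
```

As the professor notes, moving robotics "out of the factory into everyday life" means each of these four steps must cope with environments far messier than the toy dictionary used here.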
AI's ability to perceive, adapt, and improve through ongoing interactions with the complex material urban environment is vital for the functionality and safety of AI deployment. In the case of Singapore, with its material landscape featuring a high-density urban population, diverse traffic conditions, and unique climatic factors, the adaptability of AI systems becomes even more critical, given the possibility of multiple frictions between the material and digital realms. As such, the integration of AI into Singapore's urban landscape requires not only technological innovation but also careful consideration by AV testing bodies and policymakers of the unique contextual conditions and characteristics of urban infrastructure and spaces. Another interviewee, a physicist who has been working at CETRAN since its inception, stated: For Autonomous Vehicles, we need completely different data. We need to understand how traffic behaves and how vehicles behave because we need to figure out whether our autonomous vehicles go into traffic, will it behave correctly, and we don’t even know what correctly is. Will it be safe? Is it predictable? Is it smooth behaviour? Is it causing problems? (Personal interview, 24 April 2023)
Tech developers aspire to achieve a perfect system, but inherent blind spots remain in fully accounting for the complexities of real-world environments. Considerable uncertainty persists in determining the adaptability of AI to real-world factors such as traffic behavior, as well as in defining what constitutes "smooth" or "safe" behavior for autonomous systems. There have been instances where AI systems did not perform as intended, posing risks to humans. An AI engineer from Huawei Singapore discussed how critical safe sensing is for transport systems: For the Smart Nation, so many software technologies need to be used, some can be very critical, some can be less critical, right. For the critical one, like the transport traffic, if some sensor goes wrong we need to pay a very high cost. (Personal interview, 28 September 2022)
The engineer's concerns regarding the high cost of sensor failure in transport systems underscore the critical need for AI to adapt with precision to real-world urban complexities. Within the framework of frictional urbanisms, this highlights how even small technical lapses can reveal deeper structural frictions in AI deployment. Despite some positive results from small-scale trials in Singapore, the uncertainty surrounding the reliability of AI remains a challenge. The high-density urban landscape of land-scarce Singapore poses further challenges for the AI systems of AVs, especially in terms of navigation and route planning (Yong and Conejos, 2023). For example, Singapore has over 1.4 million parking lots spread across multistory and underground car parks. While AVs are able to navigate street addresses using GPS maps, vertically spread-out car parks present a significant challenge for AV navigation, creating a gap in their ability to autonomously and effectively execute "last mile" operations (Yong and Conejos, 2023). In this case, AI's limitations are revealed when it faces contextual peculiarities or new urban dimensions such as the vertical realm. Due to these operational uncertainties, AV experiments in Singapore remain in the pre-piloting stage.
While state experiments in digital technology aim to transform Singapore's infrastructure and urban spaces into technological test-beds (Das and Kwek, 2024), this ambition is fraught with challenges. As AV testing still largely takes place in controlled environments or even through simulations, the contrasts between these settings and real-world conditions in Singapore have further complicated and delayed the deployment process. One interviewee, a researcher working on AI modeling for AVs using simulated environments, commented: Simulators cannot be easily converted to real-world situations. Even if we have some good algorithms based on simulations, there are still gaps. There may be some areas that are not covered by the simulator. (Personal interview, 24 January 2025)
While simulations offer a degree of control in developing AI algorithms for AVs, aligning with their smooth operational logics, they can fall short in navigating real-world scenarios. AI's adaptability therefore remains a critical challenge in AV testing, particularly in terms of its operation in dynamic environments.
Artificial Intelligence, stupid problems
AI's over-reliance on data-driven decision-making can lead to hypersensitivity to external environmental factors. This can result in the system becoming needlessly cautious, unable to make simple decisions. The physicist from CETRAN shared an incident in which the autonomous shuttle on the NTU campus was unable to navigate its route after a heavy thunderstorm: We have one case, we were driving a bus, in the dry run in the morning, it worked fine. But in the afternoon, suddenly the bus stops in the middle of the route. Why? Turned out, at one time, there was a heavy thunderstorm … the rain comes down, the tree branches get all wet and heavy, and drop by a metre or something. And the sensor in the roof of the bus suddenly (perceives) this tree which was above the bus now is half in front of the bus, and boom, the vehicle stops because there's something in front of me. A normal bus would just hit the tree branch and continue driving. Autonomous bus—hey, there's something in front of me, I need to stop. Those kinds of quite funny things happen. (Personal interview, 26 April 2023)
This incident shows the everyday, mundane nature of frictional urbanisms, which can result even from the movement of natural components such as a tree branch along pre-determined AV routes. It also shows the persistence of unpredictability and friction within controlled test settings, which are designed to minimize the impact of complex environmental, social, and political interactions. While testbeds aim to screen out interactions between sociopolitical and environmental factors, such incidents reveal the limits of "controlled environments."
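The shuttle's behaviour can be caricatured with a deliberately naive emergency-stop rule (an illustrative sketch of hypersensitivity written for this paper, not CETRAN's or the shuttle vendor's actual control logic): the planner halts for anything detected inside a clearance zone, with no way to distinguish a sagging wet branch from a pedestrian.

```python
# Naive obstacle rule illustrating the hypersensitivity described in
# the interview. Thresholds and scenarios are hypothetical; this is
# not the shuttle's real control stack.

def decide(obstacle_detected, distance_m, stop_distance_m=10.0):
    """Stop for *any* detection inside the clearance zone.

    The rule has no notion of what the obstacle is: a wet tree branch
    sagging into the sensor's field of view triggers the same stop as
    a pedestrian would."""
    if obstacle_detected and distance_m < stop_distance_m:
        return "stop"
    return "continue"

print(decide(False, 50.0))  # morning dry run, clear route
print(decide(True, 3.0))    # branch drops after the thunderstorm
```

A human driver resolves the ambiguity instantly (and, as the physicist notes, "would just hit the tree branch and continue driving"); the rule above cannot, which is exactly the kind of mundane friction the concept of frictional urbanisms seeks to capture.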
A key area where glitches in AV systems become visible is the navigation of changing weather conditions. The AI technology used in AVs still lacks the capability to function effectively in extreme weather. Changes in the environment create discrepancies between sensing results and mapping data, affecting AVs' ability to navigate (Zhang et al., 2023). AVs face various challenges when navigating rainy weather: adjusting driving behavior to slippery road conditions, sensing puddles on the road, or even sensing heavy rain through LIDAR sensors (Why weather is a problem for autonomous vehicle safety, 2025). These often result in a mismatch between the vehicle's perception of its environment and actual conditions, hindering accurate and safe decision-making.
Singapore is a tropical city with a hot and humid climate and frequent rainfall. These climatic conditions have particular implications for the deployment of AVs. Given that weather conditions in tropical cities can change throughout the day, from intense sunlight to heavy showers, AVs must be advanced enough to adapt to shifting environmental conditions. Reflecting on this challenge, the physicist from CETRAN remarked, "You still have some stupid problems and there are other stupid problems popping up" (personal interview, 26 April 2023). He mentioned that the computer software expects plants to be rigid like concrete buildings, whereas they are always moving due to rain, wind, and so on. These problems are referred to as "stupid" because of the inherent inability of autonomous systems to detect the most basic issues and environmental changes. With the increasing use of AI, concerns have been raised about "Artificial Stupidity," wherein autonomous systems make flawed decisions or are unable to make decisions, which is considered even more problematic than their potential superintelligence (Falk, 2021). This resonates with Tironi and Valderrama's (2018) idea of "idiotic data," which shows how smartness is co-constituted through an ecology of disruptions and breakdowns in everyday life. Idiocy can take the form of computer programs that "inadvertently restrict when they should allow access or in the form of those who would find themselves unable to traverse the city at any point, in an idiotic deferral to sensorized environments" (Gabrys, 2016: 224). Such "stupidity" of autonomous systems can undermine human decision-making and potentially create risky situations. Frictions arising from gaps between AI's data-driven logic and the messiness of urban environments create a space where such "Artificial Stupidity" can occur.
These technical gaps do not just disrupt the operation of AI systems; they highlight the complexities of integrating AI into material environments. As AI assumes agency in cities and reshapes urban environments (Palmini and Cugurullo, 2023), it also creates the conditions for such failures to occur. The concept of frictional urbanisms helps make sense of these occurrences not as mere errors, but as symptoms of the deeper nonalignments between AI systems and urban environments that characterize AI deployment within cities.
The incident shared by the physicist shows that AI systems can produce perception errors in response to everyday environmental changes, such as the movement of tree branches during or after rain, which human drivers can usually navigate with ease. AI has been considered path-breaking in solving many problems due to its impressive computational abilities, making decisions that humans sometimes cannot arrive at. Yet AI can fail to detect fundamental problems in the nature-based aspects of cities. Weather conditions have their own patterns and rhythms that shift throughout the day. As a result, AVs can struggle to navigate even familiar routes, as demonstrated by the example of NTU's AV shuttle trial. Even in Singapore, which has developed advanced infrastructure for AV testing and model regulatory frameworks, AV deployment still faces challenges stemming from the risk of technological errors. According to a professor from the National University of Singapore who works on autonomous technology research, while Singapore is an AV-ready country, it is still confronted with the technical issues of AI. He said, "So, the challenge is mainly the technology, right? We still have to solve some technical problems such as improving its intelligence" (personal interview, 8 August 2023).
These technical glitches in AI caused by natural environmental changes become more prominent in Singapore's tropical conditions, where strong winds or heavy rain can add to the frictions of embedding AI into the city. However, these challenges can also present opportunities for advancement in the field of urban AI. The scientist from A*STAR working on service robots stressed that these constraints on AI's sensing of weather conditions in Singapore can indeed be useful: Solving the perception issue in the case of heavy rains, will also provide us the first mover advantage. Especially because of climate change there are major changes in weather patterns everywhere. So solving these problems will become advantages for us. (Personal interview, 13 November 2024)
As the impacts of climate change are felt globally, similar challenges related to autonomous technology are likely to emerge in other cities. Therefore, beyond establishing AV-ready infrastructure and regulations, the government needs to focus on solving fundamental, "stupid" technical issues to strengthen its position in the domain of AI integration. To achieve this, it is important to acknowledge the points of friction in integrating the digital and material infrastructures of autonomous systems in the city. This includes developing a nuanced understanding of the nature-based constraints and other forms of urban intelligence operating in the city that AI needs to perceive and learn to navigate.
Conclusions
Examining the case of AV testing in Singapore, this paper has discussed the frictions encountered in AI deployment in cities. It shows how the smooth operational demands of AI, which reduce the city to algorithmic predictions, encounter obstacles arising from the complex, unpredictable nature of real-world urban dynamics. The integration of autonomous technologies into urban environments is thus not straightforward; it requires careful navigation and balancing between AI's data-driven logic and the contextual complexities of urban spaces. To understand these dynamics, we developed the idea of frictional urbanisms, which examines the tensions or frictions between the digital and material realms of the autonomous city. These frictions are not just irregularities and accidental glitches that open up alternate possibilities in platformization (Leszczynski, 2020); they are constitutive of everyday urban life in autonomizing cities. The idea of frictional urbanisms draws attention to the nature of AI integration and highlights deeper systemic nonalignments, such as those between AI and natural systems, AI and regulatory frameworks, or AI and social practices. Frictional urbanisms is therefore not just a critique of AI technologies, but a framework for understanding how the digital and urban dimensions co-evolve through disruptions and continuous adaptation. This adaptive process is not incidental but fundamental to the way AI is tested, resisted, and reshaped within urban contexts, requiring policymakers, tech experts, and planners to design for flexibility and contextual complexity rather than the seamless rollout of AI solutions.
We have focused on one dimension of frictional urbanisms: AI–nature frictions. While Singapore is generally considered "AV-ready," its trials have faced challenges due to natural elements. This also reveals a contradiction between Singapore's "City in Nature" vision and its aspiration to be a leader in AI. Both agendas reflect efforts to exert control: over the environment through manicured landscapes, and over mobility through autonomous systems. Yet both encounter persistent disruptions that resist complete rationalization and control. Beyond the city-state's tropical weather, with its heavy thunderstorms and strong winds, shifts in nature-based components, such as the height of vegetation or the movement of tree branches in rain, which may seem mundane to humans, become obstacles to AI's hypersensitive navigation. Singapore's case underscores the importance of considering not just technological advancements in AI but also their alignment with the environmental and material conditions that can significantly affect the deployment of these technologies.
While AI–nature frictions represent one dimension of frictional urbanisms, there are other areas of friction in autonomizing cities. These include, but are not limited to, the nonalignment of regulatory frameworks, governance structures, traditional infrastructures, and sociocultural factors. These dimensions also demand critical attention as cities move towards autonomous futures. Frictional urbanisms challenge the technological assumption that everything can be reduced to algorithmic calculations and predictions (Mattern, 2017; Palmini and Cugurullo, 2023), an assumption that can lead to stupid problems and failures in AI integration. As the empirical analysis shows, understanding frictional urbanisms can also provide valuable insights and opportunities for improving AI deployment in cities. By acknowledging and understanding these frictions, more nuanced and context-specific frameworks can be developed for the safe and effective use of AI. Policymakers and practitioners should move beyond purely technical measures and incorporate environmental, infrastructural, and social factors into AI regulation and planning. This means developing adaptive testing standards, inter-agency governance mechanisms, and resilient design protocols that account for everyday, mundane frictions in urban settings.
Footnotes
Ethical considerations and informed consent statements
This study was approved by the Institutional Review Board of Singapore Management University with approval number IRB-22-111-A060(822). Informed consent was obtained in writing from all participants involved in the study. In cases where written consent could not be obtained, verbal consent was taken instead.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Ministry of Education, Singapore, under Grant MOE T2EP40221-0007.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability statement
The data that support the findings of this study are not publicly available due to ethical restrictions to protect the privacy of the research participants.
