Abstract
Artificial intelligence (AI) has captured the interest of academia and a range of industries. Much of this appeal is driven by the obsession with military and political supremacy, and a desire to control people and their movements. This article looks at the expansion and impact of AI systems deployed along physical borders on the mobility of illegalised non-citizens and border security in the Global North. Using the case study of Silicon Valley’s Anduril, the article focuses on the US–Mexico border and assesses AI’s role in hindering illegalised mobility and reconfiguring border control. The paper makes two key contributions. First, it argues that AI advances must become a focus of border criminologies and be examined within the milieu from which the technology emerged. Second, it shows that virtual walls are commercial, political and anti-humanitarian: opaque, resembling alchemy, flawed, yet with profound consequences. The article adds to the debate on whether AI experiments should be permitted in border control.
Introduction
There seems to be a growing conviction in public discourse that technological advances could ‘completely modify the future course of the humanity’ (Ghimire, 2018: 6). As the Fourth Industrial Revolution unfolds, computers and devices based on automation and artificial intelligence (AI) have captured the interest of academia 1 and a range of industries. Indeed, the technology seems to ignite both utopian and dystopian visions of the future, from advances in medicine and tackling climate change, to inequality-promoting algorithms or even ‘killing machines’ that could threaten the entire human race.
Whether simple or complex in design (from viewing recommendations to social learning robots that use natural language and facial expressions), we use smart devices every day. Examples of narrow or weak artificial intelligence such as Amazon’s Alexa are so pervasive and convenient that, even when we are irritated by technology, we continue to rely on it, captured by these devices’ ease, accuracy and speed. Technology’s capacity to advance from inputs/learning sets, its own experience and the environment, as well as its ability to process, analyse and act on vast amounts of data, seems unprecedented. But is it? How much of it is hype, powered by profit-hungry venture capitalists (Horgan, 2020; Katz, 2020; Larson, 2021)? Recently, the director of the US Customs and Border Protection (CBP) Innovation Team suggested that ‘there’s [no] organization on the planet that doesn’t want to do something more efficiently using AI’ (Ghaffary, 2020). While such assessments are exaggerations, commentators seem to agree that life in the mid-21st century will fit the formula ‘take X and add AI’. Yet much of this contemporary interest, as Paul Virilio (cited in Wall and Monahan, 2011) conveys, is driven by an obsession with military and political supremacy, and a growing desire to control people and their movements. Should we, then, continue to develop and deploy AI in all areas of human engagement and governance?
In times of accelerated global flows of people, only temporarily hampered by the COVID-19 pandemic, the promise that decision making by semi- or fully autonomous algorithms could expedite authorised and prevent unauthorised mobility remains unabated (Beduschi, 2020). Building virtual instead of physical walls is gaining bipartisan support across the Global North as a cheaper and more humane way to deal with the illegal migration ‘problem’ (Ghaffary, 2020; see also Vukov and Sheller, 2013). The emphasis is on the promise of algorithms in preventing migrants’ arrivals and reinforcing non-entrée policies at border crossings, as well as along land and sea borders. As they approach the border, people become targets of a range of systems that perform surveillance and reconnaissance, assess risk, and make a preliminary decision on their lawfulness. However, the assurance of hi-tech borders that ‘enable the fantasy of total security’ (Bourne et al., 2015: 313; or a security continuum – Vaughan-Williams, 2010) comes with many caveats: the overpromise of technology, discrimination, privacy violations, black-boxing (non-transparency and incoherence), the privatisation of border control and security, and ultimately the many harms caused to people with no legal means to cross borders.
Method
In this article, I explore the expansion and possible impact of AI systems and devices deployed along physical borders (which I also call border lines) on the mobility of illegalised non-citizens 2 and border security in the Global North. While the externalisation of border control and the expansion of places of bordering beyond the geographical lines of demarcation have been the focus of academic debate in the last couple of decades, 3 I bring the attention back to borders as ‘continuous line[s] demarcating the territory and sovereign authority of the state, enclosing its domain’ (Walters, 2006: 193). Using a single-case study of one of the youngest yet leading AI companies in border policing 4 – Silicon Valley-based Anduril – I zoom in on the US–Mexico border and analyse AI systems’ role in countering illegalised mobility and reconfiguring border control. I selected the US–Mexico border because it is one of the biggest testing grounds for the application of autonomous algorithmic systems in countering the unauthorised movement of people, with the private sector at the helm. Given the scarcity of empirical research and contributions on AI enterprises in border security, the paper draws on publicly available news articles, institutional and government reports and policy documents on Anduril from the company’s foundation in 2017 to the time of writing in February 2022, as well as material provided by the company and its co-founder Palmer Luckey on social media. 5 While artificial intelligence powers a range of frontier technologies, in this article I focus on code, the Internet of things (IoT) and unmanned aerial vehicles (UAVs; drones), given their current and predicted application in border control.
I make two key theoretical contributions to border criminologies. First, I suggest that AI advances must become a focus of border studies, given the technology’s likely harmful impact on managing illegalised migration and despite the overpromise of current innovations in this space. Scholarly analysis ought to include the milieu from which the technology emerged: a new order of young, white, conservative, utopian, and often geeky Silicon Valley men, tycoons with commercial ambitions and strong ideology. While the privatisation of social control and criminal justice is certainly not new, and while the technology these men offer is not (yet) radically different from previous attempts in border control, the prominence of ideologically clad AI start-ups, whose main motivation is not only profit, is novel. Also new are the process of knowledge production (Martins and Jumbert, 2020) and the proffered decisive solution for illegalised mobility: AI-powered, smart borders.
Traditionally, Silicon Valley avoided partnerships with the defence and border security sector. 6 Contrary to their predecessors, the new leaders in the field do not shy away from such partnerships, nor do they feign partisan neutrality. Quite the opposite: they seek such alliances and are open about their political affiliations. 7 These new business leaders are also clear about the mission and purpose of the technology they develop and deploy at the border: the political and military supremacy of the ‘forces of good’ in the new technological world. They secure billion-dollar contracts regardless of who is in office, by approaching the border as an engineering problem that can be solved (Weston Phippen, 2021), but with an ambiguous definition of what constitutes ‘success’ in border control (for more discussion on the failure of border technology, see Lisle, 2018). The article offers a critique of this unfolding AI insurrection and tech-solutionism in mobility management in the United States and beyond.
The second key point I make in this paper is that the process of assembling virtual walls is not just commercial and political, but anti-humanitarian. It is also opaque, just like alchemy. Alchemy in this article is understood as a practice of transmuting base metals into gold. Similar to the promise of medieval alchemists, contemporary entrepreneurs claim that, by using AI to process big data at the border, they can achieve total visibility and increased automation in border control, reaching the ideal of smart and secure borders (Heyman, 2008) at low cost. The final product – an AI-generated ‘full picture’ of the border – is sold to us as gold, transmuted from the base metals of border control (border paraphernalia, surveillance devices, border guards). Yet, as I demonstrate below, we have no idea what is actually delivered, or what lies under the bonnet. What we do know, however, is the likely impact of such developments on people on the move and, ultimately, on Western democracies. As such, we ought to ask important ethical questions, such as whether AI experiments should be permitted in border control at all. Critical advances in facial recognition technology and biometrics deployed at border crossings are intentionally omitted from this article, as they are beyond the scope of this inquiry and require a more comprehensive focus than I can offer here. I begin with a brief overview of the significance of AI in the 21st century.
The AI society
Artificial intelligence refers to many things. Experts advise that, when you hear someone talking about AI, you should always ask what exactly they mean by it (Broad, 2018). Computer scientist and mathematician John McCarthy coined the term artificial intelligence in 1956 as ‘the science and engineering of making intelligent machines’ (cited in Goodman, 2016: 469). Physicist Max Tegmark (2017) described AI as non-biological intelligence, while Russell and Norvig (2016) describe it as the process of planning and building intelligent agents that receive percepts from the environment and, by taking actions, change that environment. Yet some recent attempts reveal just how hard it is to define AI. A European Union White Paper, for example, suggests that AI ‘is a collection of technologies that combine data, algorithms and computing power’ (European Commission, 2020: 2). To avoid confusion, I define existing (weak) AI as a computer system that receives data from the environment and, with a degree of autonomy, acts to achieve complex goals that would otherwise require human intelligence to be completed successfully. Artificial intelligence, thus, emulates a specific human ability, skill or sense (such as calculation, or optical or audio recognition) and uses heuristic models of learning to solve the problem with accuracy that is good enough for our current use (Osiński, 2020). In AI, we tell the machine what to do, but not how to do it – or at least not every step of the way. Nevertheless, AI always has a ‘human in the loop’ (Ugwudike, 2021).
Big data enables a mode of artificial intelligence known as machine learning, in which algorithms detect and learn from patterns in a large quantity of data and act automatically and autonomously to find the best solution for the given problem. Given that the data created by us and about us doubles every year, and bearing in mind that AI funding doubles every 2 years (O’Neill, 2020), the machine learning revolution might be just around the corner. Although we are yet (if ever) to see the birth of artificial general intelligence (AGI, or strong AI) that understands, learns and performs any intelligent task a human being can, with no human input, smart devices powered by weak AI are everywhere. Virtual assistants, facial recognition software, and smart home devices are just some examples of our overwhelming AI reality. They identify objects and faces, learn from experience and data/input, use natural language, and make decisions with a varying degree of human interference.
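To make the distinction concrete, the following is a minimal, illustrative sketch (in Python, with invented toy data; it does not depict any system discussed in this article) of how a machine learning classifier induces a decision rule from labelled examples instead of being handed the rule explicitly:

```python
# Illustrative only: a nearest-centroid classifier 'learns' a decision rule
# from labelled 2-D examples. All data are invented toy values.

def train(examples):
    """Compute one centroid (mean point) per label from labelled data."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def classify(centroids, point):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    px, py = point
    return min(centroids,
               key=lambda label: (centroids[label][0] - px) ** 2
                               + (centroids[label][1] - py) ** 2)

# The 'learning set': the rule is never written down; it is induced from data.
training_data = [((1.0, 1.2), "animal"), ((0.8, 1.0), "animal"),
                 ((4.0, 3.9), "human"), ((4.2, 4.1), "human")]
centroids = train(training_data)
print(classify(centroids, (3.8, 4.0)))  # -> "human"
```

The division of labour is the point of the sketch: the programmer specifies the goal and supplies examples, while the decision rule itself is induced from the data – we tell the machine what to do, not how to do it.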
As with other technological advancements, whether a specific technology will be beneficial or detrimental to humankind is difficult to answer until we see it in action (Matthewman, 2011). Artificial intelligence is no exception, although there are warning signs about its current and future use (O’Neill, 2016). The push for a ‘good AI society’ or beneficial AI has been heavily promoted in government, business and policy circles (see Cath et al., 2018). However, contemporary AI is much like a black box, or even alchemy (see Katz, 2020; O’Neill, 2016): a mysterious practice of purifying, transforming and perfecting matter. Just as alchemists in the Middle Ages strived to transmute lead into noble metals, AI’s ‘scientific’ processes and decisions are predicted to solve many of our problems. As with alchemy, AI mechanics are obscure and impenetrable to the outsider’s gaze and understanding (Broad, 2018; O’Neill, 2016), and the outcomes are not quite what was initially promised. While the underperformance of technology has been flagged in previous instalments of border security (Heyman, 2008), the quest for omnipotent yet explainable AI continues. Yet the more intelligent AI becomes, the more opaque its design turns out to be (Larson, 2021). Critically, as ‘we make [AI systems] more and more complex, we will be less and less able to understand how they arrived at a decision and we’ll be trusting those decisions’ (Kevin Kelly, cited in Danaylov, 2016: Section Kevin Kelly, Subsection The Dangers of AI). Experts and industry leaders do little to dispel this techno-fog: only about 15% of AI studies share their code with experts and the wider community (Horgan, 2020).
Another concern in AI development is the limitation of learning sets and coders. 8 The foundation of every AI system is the data sets from which code learns and develops. If this learning set is skewed or limited (intentionally or unintentionally), or if the people developing the technology embed bias into it (again, deliberately or not), the learning process will likely be skewed (a toy illustration follows this paragraph). Although we do not know what learning and testing sets developers use, we are told to trust the outcome. Finally, the development of AI has been happening in a regulatory void, in which technology, it is argued, serves to advance neo-colonial and capitalist projects by furthering consumerism, sanctioning dispossession and land accumulation, and fostering mass incarceration and surveillance (Katz, 2020; see also Joh, 2017; Lemberg-Pedersen, 2018; O’Neill, 2016). The risk that the private sector will fill this regulatory, legal, strategic and research void, push AI development towards its vision of the ‘good AI society’, and take over the sovereign decisions of the border is of utmost importance for border studies (Amoore, 2013; Cath et al., 2018). It is also one of the key concerns I raise vis-à-vis border infrastructures of the future.
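A compact, hypothetical continuation of the earlier sketch illustrates the point about limited learning sets (again, all values are invented, and the train() and classify() functions are those defined above):

```python
# Hypothetical continuation of the earlier sketch. Every "human" example
# comes from one narrow region of the feature space, so a human appearing
# outside that region is misclassified - the error stems from the limited
# learning set, not from any bug in the code.
limited_data = [((4.0, 4.0), "human"), ((4.1, 3.9), "human"),
                ((1.0, 1.0), "animal"), ((0.9, 1.1), "animal"),
                ((2.0, 2.0), "animal")]
centroids = train(limited_data)
print(classify(centroids, (2.4, 2.5)))  # a human outside the sampled region -> "animal"
```

No inspection of the code alone would reveal this failure; it becomes visible only if one also knows what the learning set did and did not contain – precisely the information developers rarely disclose.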
From virtual reality to virtual walls: Technology and illegalised mobility in border control
The contemporary state’s preoccupation is to know your identity and movements. A range of techno-social networks established to regulate mobility have been deployed at border lines and beyond to ascertain people’s identity and assist in assessing the lawfulness of mobility projects (see Adey, 2012; Broeders and Hampshire, 2013; Csernatoni, 2018; Jumbert, 2016; Milivojevic, 2019). Sophisticated, increasingly autonomous and profitable devices seek to render people visible and legible, ascertain risk and regulate the flow of citizens, non-citizens, goods and trade (Broeders and Hampshire, 2013; Ceyhan, 2008; Cote-Boucher, 2008).
Given the importance of borders in contemporary political and social contexts, it is hardly surprising that finding a technological solution for the illegalised mobility ‘problem’ is a holy grail of contemporary politics. Countries of origin and transit are the principal sites for the detection, surveillance and classification of border crossers (Singler, 2021). When such efforts fail, the focus returns to physical borders equipped with scanners, security cameras, sensors and guards who monitor the border and respond to human and non-human threats.
The United States has been at the forefront of this process for over two decades. Surveillance from the sky emerged as a preferred method for strengthening risky borders in the 1990s (for a comprehensive historical overview, see Andreas, 2003; Kanstroom, 2022; Nieto-Gomez, 2014). The first drones patrolled the 2000-mile-long US–Mexico border early in that decade (Jumbert, 2016; Koslowski and Schulzke, 2018). After 9/11, US legislators and some nongovernmental organisations called for an expansion of technology in border control (Koslowski and Schulzke, 2018; Wall and Monahan, 2011).
In the 2000s, the management of the US–Mexico border saw the quest for a ‘system of systems’ – a range of integrated devices such as radars, cameras, mobile robots and sensing towers (Ceyhan, 2008; Vukov and Sheller, 2013). The first operational system of systems – the Secure Border Initiative Network (SBInet) – was ‘one of the most ambitious deployments of surveillance technology ever tried in borderlands’ (Nieto-Gomez, 2014: 199), costing the American taxpayer US$3.6 billion in 3 years (2006–2009). It comprised integrated fixed towers with sensors and communication equipment, command centres, aerial surveillance, border patrols and other paraphernalia deployed to thwart the movement of illegalised non-citizens. Yet the system was riddled with false alarms, ever-rising costs and technological malfunctions. In 2011, the project was shut down as a management failure (Nieto-Gomez, 2014). Nevertheless, the development of virtual walls continued, under the pretence of combating crime and saving lives.
Today, the ever-growing border security assemblage includes AI-powered technologies such as digital command platforms, the IoT and mobile robots. They enable a new ‘operative vision’ (Dijstelbloem et al., 2017) at the border that is strategic, schematic and systematic, but go a step further by producing a ‘composite image’ (Amoore, 2013) of the border, with pre-approved subjects and transgressors clearly outlined on the digital map. The promise of AI in border control is one of total visibility and increasing autonomy: AI pledges to distinguish humans from animals and objects, prompting humans to observe and act when needed. In the future, algorithms may act autonomously to remove the threat, with potentially disastrous consequences for people on the move. We can expect a lot more alchemy in the future; transparency, fairness and humanitarianism – although nominally upheld – are not, and are unlikely to be, the backbone of virtual walls. These new, supposedly seamless borders have been fashioned by small technology start-ups from Silicon Valley, led by pioneers of virtual reality.
‘The flame of the West’: Total visibility, automation and privatisation of border infrastructure
Tech-business expansions into areas of social control and corrections have been dogged by controversy. Academics and social commentators have questioned whether such a role, traditionally reserved for state agencies, should be privatised at all, given the morality of profiting from punishment, the quality of services provided in private prisons and the link between privatisation and growing imprisonment rates (for an overview of key issues and recent debates, see, for example, Bean, 2020). The rise of private entities in border control has also been scrutinised in academia (see Andersson, 2016). Early developments saw big businesses such as Boeing, Lockheed Martin and Israel’s Elbit Systems win multimillion-dollar ‘borderscaping contracts’ (Lemberg-Pedersen, 2013, 2018) to create SBInet at the US–Mexico border (Brustein, 2020; Chambers et al., 2019; Ghaffary, 2020; Vukov and Sheller, 2013).
Recently, there has been a notable change in the market, as small defence start-ups have thrown their hats into the ring. Led by young, white men of conservative affiliations, these new powerbrokers claim to be cheaper, more efficient and, given AI applications, less reliant on human participation (border guards). Their workforce is also different: there are no unhappy workers, as at Amazon and Google, who rebel against partnerships that enable ‘immoral’ government policies (Ghaffary, 2020). 9 Below, I investigate the meteoric rise of one such start-up, and a new leader in the field of border control – Anduril.
Anduril 10 is a defence technology company founded in 2017 by Palmer Luckey, an American entrepreneur, virtual reality pioneer and enthusiastic Donald Trump supporter (Harley, 2020). His new venture, dubbed ‘tech’s most controversial start up’ (Brustein, 2020), is backed by Peter Thiel, 11 a billionaire tech investor, former Trump advisor and co-founder of Palantir Technologies, infamous for its controversial big data analytics software Palantir Gotham, used for predictive policing. Anduril develops and builds hardware and software for military and border security applications. Its tools for detection and ‘target classification’ include autonomous surveillance drones, battering-ram UAVs, sensor surveillance towers, VR headsets and an IoT network. These AI-powered structures are set to transmute data into gold via a system of systems called Lattice (see Photo 1).
As suggested on the company’s website, 12 Lattice, as a ‘seamless networked system’, can help ‘understand the world’ (detect, classify and track vehicles, persons, drones or other threats) and enable ‘fast response and save lives’ by identifying risks in real time. Lattice gives us ‘a single autonomous picture, alerting [border] agents in real-time and providing intelligence for rapid and accurate response – whether administrative, medical or tactical’ (Anduril, 2021: my emphasis). The company’s virtual border wall, thus, has three key levels (a schematic sketch of this pipeline follows the list):
Continuous surveillance from sentry towers or drones.
When targets (people, vehicles, objects) are detected, AI systems autonomously process data and identify whether a target is a threat (human).
When humans are identified, border guards receive a push message on Lattice (on a mobile phone or a computer), with a rendered image and its location.
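Anduril does not disclose Lattice’s internals, so what follows is a minimal, hypothetical sketch of the three-level pipeline described above. Every class, function, threshold and field name is invented for illustration and should not be read as the company’s actual design.

```python
# Hypothetical sketch of the three-level 'virtual wall' pipeline described
# above. No Anduril code or API is public; all names and values are invented.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # classifier output, e.g. "human", "animal", "vehicle"
    confidence: float  # model confidence in [0, 1]
    location: tuple    # (latitude, longitude)

def classify(sensor_frame) -> list[Detection]:
    """Level 2: an ML model would turn raw tower/drone data into detections.
    Stubbed here with a fixed result for illustration."""
    return [Detection(kind="human", confidence=0.91, location=(31.33, -110.94))]

def push_alert(detection: Detection) -> None:
    """Level 3: notify a border agent's device with the target's location
    (a production system would also attach a rendered image)."""
    print(f"ALERT: {detection.kind} at {detection.location} "
          f"(confidence {detection.confidence:.0%})")

def surveillance_loop(frames, threshold: float = 0.8) -> None:
    """Level 1: continuous surveillance. Only high-confidence 'human'
    detections reach an agent; everything else is discarded as noise."""
    for frame in frames:
        for detection in classify(frame):
            if detection.kind == "human" and detection.confidence >= threshold:
                push_alert(detection)

surveillance_loop(frames=[object()])  # one dummy sensor frame
```

Even in this toy form, the key design choice is visible: everything below the confidence threshold is silently discarded as ‘noise’ – precisely the filtering, or ingestion, this article interrogates below.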
Transmuting base metals (the vast amount of unfiltered data, the ‘noise’ at the border) into gold (an accurate and instant image of the border, available when needed) is critical, as ‘every reasonable person (including every politician I have ever spoken with on both sides of aisle) can agree that we need to know what happens on our borders so we can stop the bad guys’ (Palmer Luckey’s Twitter feed, 4 July 2020).
Unlike previous systems, AI-based interventions slowly push humans out of border surveillance. This was Anduril’s pitch: providing border security at low cost. Technology observes 24/7. It needs no holidays or bathroom breaks. It is cheaper than a human workforce and could potentially reduce the size of one of the biggest law enforcement agencies in the country (Heyman, 2008; Weston Phippen, 2021). Anduril’s towers are solar-powered and mobile. They do not melt in the harsh desert sun (Photo 1). And, critically, they yield few false positives, as the AI systems learn from data sets. During a trial, Anduril systems ‘helped customs agents catch 55 unauthorised border crossers, a notable figure for a system still in development’ (Levy, 2018). Three years after it was founded, in July 2020, the company’s worth was estimated at US$1.9 billion, with over 30 contracts in the military and border security sector in the United States and the United Kingdom (Brustein, 2020).

Photo 1. Sentry tower (top) and Lattice.
Anduril’s (2021) Lattice became, as one CBP agent suggested, ‘the eyes in the back of my head’. The elimination of the drug trade and human trafficking is, as Luckey suggested on Twitter, at the heart of Anduril’s efforts (Palmer Luckey’s Twitter feed, 4 July 2020). Yet the technology’s key performance indicators are unknown. Moreover, as one reporter noted after witnessing the new AI borders:

[i]t struck me . . . that, aside from the drug smugglers they helped intercept on the border, I had not heard the founders mention the people who might get caught in their omniscient zone. What is the right way to treat those individuals? What of the children and parents who are now being torn apart while crossing? . . . [I]t is increasingly the case that the people who build new technologies trigger political consequences. . . . It now seems obvious that tech was never going to make us better human beings; we are still our flawed selves. Instead, those same technologies that once seemed full of promise are finding their way into all-too-human clashes – led by a company named after an avenging sword. (Levy, 2018: my emphasis)
Muted in the debate are the harms experienced by people caught up in this and similar political and commercial games, as well as the harnessing of technology for personal and partisan gain. Stifled, too, is the vision of future society that AI alchemists propagate under the banner of progress.
Virtual wall’s black-boxed harm: Theorising AI in border security
Artificial intelligence systems’ effect on people on the move, on the way we conceptualise borders, and on our societies is certain, yet unknown. In this section, I first flag two areas of potential modification. What separates AI from other technological advancements in border control (such as CCTV, ground-level radars or thermal camera imaging) is the promise of total visibility at the border and increased automation. Such a promise, relational to the circumstances in which it arises and the situations it addresses (O’Grady, 2021), is increasingly moulded by ideology. The cloak of humanitarianism, 13 as seen in the case study of Anduril, is the lingo companies invoke when needed. AI borders, conversely, are deeply non-humanitarian and unlawful, as they prevent illegalised non-citizens from reaching safe countries and expose them to danger and, in some cases, death at the border.
Humanitarian technology that could prevent if not eradicate crime and suffering inflicted by traffickers and transnational criminal networks validates the development of security technologies in border policing (Aas and Gundhus, 2014; Franko, 2020; Jumbert, 2016; Lemberg-Pedersen, 2018; Milivojevic, 2019; Molnar, 2020; Sandvik and Lohne, 2014). AI start-ups nominally uphold this narrative. Granted, drones can and do save lives at the border and beyond (Koslowski and Schulzke, 2018). However, the concern identified by scholars and social commentators (Csernatoni, 2018; Jumbert, 2016) is that the humanitarian narrative here, as with other border security technologies, obscures the main objective of hi-tech advances: the social sorting and control of non-citizens. The supposed search and rescue element of drone surveillance, or Anduril’s promise to stop traffickers and smugglers, fails to disguise the reality of profit- and ideology-driven techno-alchemy.
Experiments in border control are performed by private companies, thus limiting nation states’ responsibility for border deaths or additional collateral damage along border lines (for other contexts, see Lemberg-Pedersen, 2018). Businesses are becoming integral players in developing and upholding the migration control agenda (Molnar, 2020), ‘experts’ with very little to no independent oversight. New IT players breed non-transparency. Workers’ opposition and public scrutiny are absent. Like the alchemists of medieval times, they speak an ‘expert language’, so that the uninitiated cannot understand the ‘powerful knowledge’ they possess. Their technology is largely a business secret, known to a limited few. At Anduril (2021), people who create and modify technology are sought based on their passion for ‘our mission’: ‘to transform United States and allied military capabilities with advanced technology’. The code and systems they design define risk as anything and anyone jeopardising the integrity of the US border, and its military and border security might. In doing so, these new power brokers redefine security at the border and, ultimately, the border itself.
Borders have long been a site of visibility (Adey, 2012). The essence of contemporary border interventions is to see and verify people and their movements via technology. The key question, as Dijstelbloem et al. (2017) suggest, is what kind of seeing is in place in border control, and what the consequences are for border crossers. Artificial intelligence systems might appear to have a superior, uninterrupted, 24/7 vision of the border. But what they see does not translate into an understanding of data and contexts (for examples, see Larson, 2021; Osiński, 2020). In border control, such understanding relates to the bigger picture and the impact of technology on ‘targets’, the border itself and our societies.
Artificial intelligence alters borders according to its own logic. The assessment of risk at the border, while still in the hands of humans, is increasingly based on data captured, processed and analysed by a range of non-human technological advancements. This leads us to a likely shift within AI-based border security initiatives – yet another revival of the panopticon as a political tool for mobility governance. While I acknowledge invitations to demolish or reinterpret the panopticon, 14 I would like to revisit it here.
AI-based border security technologies, just like Bentham’s prison design (although not so much Foucault’s famous interpretation of it), focus on the architectural design (not the panoptic subject) of the border. The process of seeing and observing is certain and ongoing (not simply likely or probable). The scaled-down eye of God, as ‘an ever present technologically-augmented gaze’ (Wilson, 2018: 59), monitors, simplifies, analyses and streamlines the real-time border. The digital eyes of the surveillant assemblage (Haggerty and Ericson, 2000) scan the border, communicate, analyse what they ‘see’ and make automated decisions along the way (whether the object is human or not). They render digitised human bodies into what Haggerty and Ericson (2000) call a data double – an entity entirely composed of information, streamed into Lattice as part of the ‘total picture’ of the border and scrutinised for an intervention. The transmutation of data into ‘knowledge’ and a ‘solution’ for the border security ‘problem’ appears complete.
The above process of ingestion – bringing something or someone to attention from the vast amount of data, as opposed to data collection, which brings everything to attention – is a critical new moment in border management. Amoore and Piotukh (2015) warned about this phenomenon almost a decade ago. In border control, this means that performative surveillance is accomplished by a single guard in the watch tower, based on the ingestion of data done by AI. Guards do not have to monitor the border 24/7, as in the original design of the panopticon. They are prompted to watch when the process of knowledge production is complete and a risk has been identified as such. There will be, the technology promises, no false positives or omissions. Long gone will be the days when many humans – officers and their supervisors – were needed to determine which event at the border is relevant, and which is not. Just as in Bentham’s vision of a prison, a single guard will be sufficient to observe and preserve the integrity of borders, seen and streamlined by AI. The technology will, we are told, provide an unobstructed view for decision making and, in the near future, a decision itself.
What triggers automated interventions? Is every human the same? The one in the uniform and the one with a bottle of water in their hand? Is the colour of their skin relevant at all? How will AI systems of the future identify risk? We do not know. Critically, ‘[n]ot fully understanding how something works has proven to be a fundamental, enduring factor in instances of technological failure’ (Broad, 2018: Section 5: Alchemy). I would like to suggest that this fusion of ignorance, absence of public and expert oversight, and alchemy, as observed in the case study of Anduril, is a recipe for disaster. Yet, apparently, one glance at a glossy system of systems such as Lattice, whether on a smartphone or a computer, provides a snapshot of threats and devices at the ‘near real-time border’ (Wilson, 2018). We ought to trust the system – the very system whose workings we are not quite sure of. And, as Amoore suggests, when the system makes a mistake, that mistake is turned into data that will help rewrite the code (Johnson et al., 2011).
Just as Anduril drones have the capability to attack and take another drone out of the sky, AI systems of the future are likely to autonomously intercept and push back people they judge to be unauthorised. In fact, another AI start-up backed by Peter Thiel, Brinc, has long advertised drones that interrogate and taser migrants at the US–Mexico border (Biddle, 2021). These processes have political and ideological objectives. Palmer Luckey identified some of them in a recent interview: the preservation of Western values and the supremacy of American (and US allies’) military and border security might (Stebbings, 2020; see also Harley, 2020). Just as its name suggests, Anduril’s aspiration is to uphold the Flame of the West. The border lines are its testing grounds, with other applications to follow.
The visual spectacle of AI borders, with their incipient landscapes of sharp, tall surveillance towers and menacing drones, implies violence. The virtual wall’s intimidating and stark façade against the blue desert sky along the US–Mexico border is the essence of the new border infrastructure. The very image of such a mighty opponent might be enough, the designers of virtual walls hope, to deter future border transgressions (for deterrence in earlier iterations of border control, see Heyman, 2008). The novel system of systems sends a message to unwanted Others: do not even try to cross the border. In doing so, technology changes the very essence of what the border is, not just the border landscape.
Yet people will keep pushing. The displacement of cross-border movement that leads to border deaths on land and at sea is well documented in border criminology. 15 Virtual borders are no exception. Anduril executives acknowledge that the traffic of people across borders moves ‘to the east and west of the systems’ (Weston Phippen, 2021). AI borders are ‘a 10-million scarecrow’ (Weston Phippen, 2021) likely to produce a ‘funnel effect’ and more border deaths (Chambers et al., 2019). Illegalised crossers and their allies may besiege the towers and beat the system through creativity (such as ‘blinding’ the sentry towers with lasers) and other acts of resistance (de Leon, 2015; Nieto-Gomez, 2014; Vukov and Sheller, 2013; Weston Phippen, 2021). Ultimately, the effectiveness or ‘success’ of AI borders remains unknown, even to start-up executives, who are quick to suggest that such questions should be put to border agencies, not technology providers (Weston Phippen, 2021). Whether the outputs match the promise remains unanswered for the public and government funders.
What is known, however, is that AI borders dehumanise men, women and children on the move and violate their rights, freedom and privacy. They frame illegalised non-citizens as mathematical problems that must be dealt with using smart technology. By creating mechanical distance (Grossman, cited in Sandvik and Lohne, 2014) between non-humans (code, devices) and humans (guards, the demos of the Global North), people are reduced to targets, numbers, collateral damage, transferees, lawbreakers or illegals that need to be detected, restrained and removed. AI systems become political swords for thwarting unwanted mobility. The human cost, so clearly visible in the context of military AI, is obscured in border control, hidden behind the smokescreen of search and rescue, the fight against drug lords, smugglers and human traffickers, and the preservation of the nation.
Conclusion
Artificial intelligence’s promise to revolutionise humankind is commonly compared to electricity, as the impact of AI-enabled systems could unleash modernising forces. Yet, as Broad (2018) reminds us, electricity also gave us the electric chair and electroconvulsive therapy. I am not suggesting here that the development of algorithmic governance as currently predicted is certain, or always bad. 16 AI could help us become better humans in a better world. But its usefulness might not be the same in every aspect of our lives.
Given the technology’s likely impact on fundamental human rights such as the right to liberty and security, and freedom from torture and inhuman or degrading treatment, we need clear regulatory and legal guidance towards the ‘good AI society’. We ought to steer AI towards the public good, and flag if not prevent any violation of human dignity that requires care and respect for people, individuals or groups, regardless of who they are (Cath et al., 2018). We also need audits (Ugwudike, 2021) and multidisciplinary research ventures that will uncover the black box, compare the outputs with the promise of AI, and reduce alchemy. As this article demonstrates, our ability to study technological innovations in border control is currently constrained, as the methods we use only scratch the surface. A clear limitation of this article is its reliance on company-released material and publications. The need for more robust, empirical research is evident. Experts such as computer scientists, roboticists, data science and intelligent systems engineers, machine learning experts, computer vision and image processing specialists, intelligence reasoning designers and others must be included in future research on the matter, and new methodologies ought to be devised to remove the techno-fog in border control. One possible direction we could pursue is to devise community-led, interdisciplinary methodologies by utilising innovative research collaboration spaces such as Bristol Digital Futures Institute’s Reality Emulator and Neutral Lab. 17
Unfortunately, as my analysis suggests, none of these strategies is yet in place in border control. This raises the question: do we, academics and public commentators, argue for a cease-and-desist policy in this area, or do we proactively engage in the development of technology to mitigate if not eliminate its negative consequences? While in many other sites of engagement I tend to advocate the latter approach, border control might be a whole different beast.
AI promises new ways of seeing and controlling the border. The role of technology and AI in border control remains unchallenged, if not desired, and the harm they create largely uncharted (Molnar, 2020). As Pamela Ugwudike (2021) suggests, even when harmful consequences arise, AI systems, considered neutral and objective, are not to blame. This narrative should be resisted. I go a step further by suggesting that automation and algorithmic governance in border control are ideological instruments, developed and deployed to serve specific interests. Promoted and refuelled by ‘the imperial and capitalist projects of its patrons’ (Katz, 2020: 4), AI is currently used in border control to identify and apprehend the Other in places where physical walls are not present. Expert-lobbyists from Silicon Valley are both knowledge and resolution producers. Private companies offering ‘security solutions’ are no longer neutral capitalists; they are ideology-driven warriors, backed by equally radical venture capitalists. Anduril’s Palmer Luckey is simply the first among an army of followers who advertise the idea of seamless AI borders while enforcing the doctrine of exclusion, profit and Western military/border security supremacy. Artificial intelligence systems, thus, benefit the few – corporate bosses and the exclusionary, technology-powerful states of the Global North – while significantly disadvantaging or harming the many, in particular illegalised non-citizens and asylum seekers. There is good reason for pessimism, especially if we aim to stop the development of virtual walls: there is simply too much money to be made, as the case study of Anduril demonstrates.
Seamless AI borders of the future, built by Anduril and other defence start-ups, are akin to alchemy. Although the devices (for now) largely fail to transmute base metals into gold and protect the demos, the outcomes are nevertheless grim. As scientist Ali Rahimi (cited in Broad, 2018: Section 5: Alchemy) pointed out, alchemy is fine if you are building a photo-sharing service. It is not fine if you are devising systems that govern border control or punishment. Between October 2000 and September 2014, more than 2700 bodies were recovered in the Sonoran Desert alone (de Leon, 2015). How many more will die trying, while circumventing Anduril’s virtual wall?
AI systems in border management are opaque and dangerous. Computers, like humans, make mistakes (see Joh, 2017; Katz, 2020). Even their ‘right’ decisions can be harmful. At first, the technology’s humanitarian purpose might hide this: via technology, we could rescue some people in distress and stop a handful of traffickers and smugglers. Ultimately, however, AI borders represent an extension of the imperial ambitions of the West, the desire to preserve our (Western) way of life, and the right to exclude. The slippery slope is obvious. Just as healing was the initial justification for alchemy, and as with other innovations in the border security assemblage, virtual walls validated by a supposed humanitarian narrative or the need to protect the demos are brutal.
We need more normative evaluation and less alchemy in border control. Normalising the black-boxing, obscurity and non-transparency that go along with the privatisation and automation of border control, and ignoring the growing influence of the politics and ideology that underpin its expansion, is set to have devastating consequences. How long do we wait before the harm at the border extends beyond diverting traffic towards more inhospitable terrains and border deaths? And how long before the target is not just the Other at the border?
Technology is likely to advance ‘futuristic and high-tech security fantasies’ (Adey, 2012: 193) of pervasive, seamless borders that segregate wanted from unwanted mobility. While I limited my analysis here to the US–Mexico border, the developments in Europe, Canada and Australia are also concerning (Aas and Gundhus, 2014; Broeders and Hampshire, 2013; Cote-Boucher, 2008; Csernatoni, 2018; Franko, 2020; Jumbert, 2016; Lemberg-Pedersen, 2013; Martins and Jumbert, 2020; Molnar, 2020). We are beholding ‘a dark step into morally dangerous territory’, and ‘it is only a matter of time before a drone will be able to take action to stop people’ (Noel Sharkey, emeritus professor of robotics and artificial intelligence at Sheffield University, cited in Campbell, 2019).
At the helm of these interventions are venture capitalists and businessmen of a particular political affiliation and with little to no accountability for the consequences of their inventions. As Louise Amoore (2013: 17) noted almost a decade ago, in what is seen as a constant state of emergency, a call to arms has been sent to a range of private actors who now sell the same dream to the state ‘in their promise to make manageable the risky mobile bodies of the global economy’. Privatisation also means that we often do not know how the technology operates or who owns the data (Milivojevic, 2021; Molnar, 2020; Ugwudike, 2021). While businesses have little incentive to address black-boxing or collateral damage of AI borders, they do have a moral (if not yet legal) imperative to act (Beduschi, 2020; Broad, 2018).
Finally, it is not only our attitude towards technology that begs critique. It is our insolence towards the most vulnerable and precarious among us, who are used as guinea pigs and against whom the effectiveness of these and similar technological advancements is assessed. Deconstructing the narrative of migration as a problem and undoing non-entrée policies are critical starting points in the debate, as is debunking the supposed humanitarian nature of virtual walls. We must also be clear about who benefits from (or is harmed by) the technology: the demos, the majority, or a privileged few. Only then can we look at the promise of technology, including AI, in delivering the healthier, fairer and more just society we strive for.
Acknowledgements
I thank the reviewers for their thoughtful feedback and efforts. This article was much improved thanks to their guidance and suggestions. I greatly appreciate their time and expertise. I thank Professor Mary Bosworth and Samuel Singler from Oxford University for their comments and ideas in the early stages of the development of this article.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
