Abstract
This article critically interrogates how borders are produced by scientists, engineers and security experts in advance of the deployment of technical devices they develop. We trace how sovereign decisions are enacted as assemblages in the antecedent register of device development through the everyday decisions of scientists and engineers in the laboratory, the security experts they engage, and the material components of the device itself. Drawing on in-depth interviews, observations, and ethnographic research of the EU-funded Handhold project, we explore how assumptions about the way security technologies will and should perform at the border shape the development of a portable, integrated device to detect chemical, biological, radiological, nuclear and explosives (CBRNE) threats at borders. In disaggregating the moments of sovereign decision-making across multiple sites and times, we question the supposed linearity of how science comes out of and feeds back into the world of border security. An interrogation of competing assumptions and understandings of security threats and needs, of competing logics of innovation and pragmatism, of the demands of differentiated temporalities in detection and identification, and of the presumed capacities, behaviours, and needs of phantasmic competitors and end-users reveals a complex, circulating and co-constitutive process of device development that laboratizes the border itself.
Introduction
The Handhold project began in 2011 in response to a European Union (EU) FP7 funding call that sought the development of a portable and integrated CBRNE (Chemical, Biological, Radiological, Nuclear and Explosives) detection device that would complement, if not replace, the use of sniffer dogs at EU borders. Intended ‘for deployment by European customs agencies, border guards, first responders, police, civil security or others operating in potentially hostile environments’ (Handhold, 2012), the objective was to develop a prototype device – a portable box about the size of a car battery – that could operate with greater efficiency at a lower cost than dogs, and in environments or areas where dogs are impractical. Handhold brings together scientists, engineers and ‘end-users’ from nine institutions in five EU member states who, in their everyday laboratory practices, enrol their software and sensors, their hardware platforms, and one another into a coherent enactment of a future border. In doing so, they and the device they develop blur the lines between the supposedly separate spheres of science (expertise and experimentation in the laboratory) and security (sovereign decision-making at the border).
During 2013 and 2014 we observed the scientists and engineers of the Handhold project. We saw scientists and engineers perform many of the discretionary decisions that ultimately constitute the sovereign decision to permit or exclude at the border. Prior to the use of the Handhold device by border guards to examine flows of packages, containers, and people, we discovered that the capacities to detect, identify, control, and manage – the capacities which enable the enactment and practice of sovereign decisions of inclusion/exclusion – are constituted in the daily practices of scientists and engineers in their laboratories and on their computers. As we witnessed the development, negotiation, and adaptation of the Handhold device we also witnessed the composition of European borders.
Building upon critical scholars who illustrate the radical redistribution of the border and the manifestation of sovereign decisions in unexpected places (e.g. on boats, in public spaces, in drones), this article shows how sovereign decisions are enacted – indeed, are lively and numerous – in the scientific and technological imaginaries that develop border devices prior to their deployment (Balibar, 2004; Parker and Vaughan-Williams, 2012; Rumford, 2012; Salter, 2012). In contributing to a research agenda that understands border security as the variegated everyday ‘practices of the plurality of power-brokers involved in the securing of borders’ (Côte-Boucher et al., 2014: 195; Jasanoff, 2006), we show how scientists and engineers are significant power-brokers in this process, and how device development is one of the most important of the ‘plural forces of authorization’ that enable sovereign decisions at the border (Amoore, 2013: 14–15; Connolly, 2005). As Côte-Boucher et al. explain:
Research into concrete, everyday practices allows for a reconceptualization of security as a set of mediated processes situated at the junction between, on the one hand, the actions and worldviews of diverse border security actors and, on the other, security discourses, strategies, policies and technologies. (2014: 199)
By engaging in theoretically informed ethnographic research of the development of one security device (Handhold), we redirect critical insights from scholars who are politicizing the materiality of security technology at the border site to the antecedent register – to the laboratories that construct such devices (Aas, 2005; Adey, 2004; Amoore, 2013; Amoore et al., 2008; Muller, 2010; Squire, 2014; Vaughan-Williams, 2009).
Roughly speaking, the plan is designed to deliver a prototype … I pretty much saw something close to a prototype this summer.
The blue boxes?
Yes … the blue boxes. (Handhold, 2014a)
This exchange with a Handhold engineer encapsulates the puzzle at the heart of this article: What is contained in these blue boxes that are meant to increase the security of sovereign borders? Who are they for? What, exactly, do they do? In this article we question the supposed linearity of the enrolment of technological devices into security practice. Rather than seeing security devices as being first a product of science and then a tool of security in which ‘science’ decisions in the laboratory are supplanted by political ‘sovereign’ decisions at the border, our analysis of Handhold reveals that anticipations and assumptions about bordering practice simultaneously constitute the border and its moments of sovereign decision, and the device development process itself, in tangible and productive ways.
In the looping back of assumptions about how devices will perform ‘in the field’ into processes of technology development, we find that the sovereign decisions of bordering are made not only by border guards (using devices) at the moment of border crossing, but also that they are disaggregated across time and space such that the multiple decisions made by scientists and engineers in their laboratories are also (and necessarily) constitutive of the ‘sovereign decision’ of the border. The linear sequence is thus disrupted as the scientific process is revealed to be multi-dimensional and co-constitutive with the practices of security.
Therefore, in conjunction with asking what Handhold’s ‘blue boxes’ do when implemented at the border, we must also analyse the complex story of how that blue box came to be, revealing decisive moments within the development process. How the devices were constructed, what forces shaped their composition, and which assumptions shaped how scientific and technical decisions were made become significant questions.
Analysing security through device development reveals the complex and contingent spatiotemporal emergence of political agency: in this case the political act of producing sovereign borders. We conceive the developing Handhold device as both an assemblage in its own right and as a possible future component of wider assemblages and practices of bordering. This article explores how bordering action emerges in the mediations of scientists, end-users, materials, international standards and policies, laboratory practices, immaterial imaginations, and phantasmic figures (terrorists, smugglers, border guards) as they circulate and combine with wider forces of political economy (from government funding to imperatives for fast and accurate bordering decisions). In the provisional stabilization of these forces and their co-functioning, these diverse entities create a device that enacts security in the laboratory while also reconfiguring the physical border site through practices of laboratization (i.e. the deployment of laboratories in the world).
Science and security: Sovereignty, assemblage, translation, and laboratization
They’re up to every dodge and fiddle, and remember, the criminals and the terrorists have some very clever chemists, strategists, and technologists who they pay a lot of money or bribe or exert influence – [slaps hand] that sort of influence – to get things their way. (Handhold, 2013a)
This framing of enemies as clever, cunning and ruthless is central to the security apparatus currently dominating the contemporary world order. Indeed, critical scholars in IR have done excellent work demonstrating how the prior framing of enemy Others – as threatening, cunning, and merciless – is central to the ways in which sovereign borders are currently understood and performed (Amoore, 2006; Heath-Kelly, 2013; Squire, 2011; Vaughan-Williams, 2009). The supposed mobility and flexible geography of criminals and terrorists underlies the securitization of mobility per se and the related modalities of bordering responses (Bourne, 2011; Elden, 2009; Packer, 2006; Salter, 2013). As a European Commission consultation paper on detection technologies in law enforcement, customs, and security argued, in 2006:
The nature of the threat from terrorism and crime is … becoming increasingly mobile. Hence, security authorities require portable solutions. Such solutions can improve cost-effectiveness and be readily transferred from one location to another where they are most needed, as it is simply not feasible to cover every entry point or point of concern with the same level of security. (2006: 10)
In 2011, the Handhold consortium was formed to answer an EU call that responds to this demand for portable solutions. Within Handhold, prevailing assumptions about threats, dangers, and enemies are always present, but are mostly unsaid – an ambivalence that powerfully shapes the emergence of sovereign decisions in dedicated laboratories and scientific practices located far away from the frontlines of geopolitics. Because this device will be deployed – at least initially – at sovereign borders, it is constantly positioned against human and non-human threats (e.g. the ‘criminals and terrorists’ who make CBRNE materials mobile). While engineers like the one quoted above occasionally refer explicitly to these threats, the meta-context of the device as a technology enabling the performance of sovereignty is largely assumed. But sovereign power is not silent in the project – far from it. Instead, sovereignty manifests itself in complex and heterogeneous ways throughout the device’s development, most obviously in the way that assumptions and anticipations about how border technologies will operate at the border govern the multiple trajectories, imaginaries, and possibilities that emerge in the laboratory.
Critical Border Studies shows that sovereign decisions of bordering practices (the decision to permit or exclude) are not exceptional acts of an ‘unencumbered sovereign’ (Côte-Boucher et al., 2014: 199) or the force of a singular security rationality, but are constituted by discretionary judgements performed by border guards (Salter, 2008). While we acknowledge Butler’s (2006: 65) claim that these ‘petty sovereigns’ deploy power in order to ‘reanimate’ sovereignty, we also know that bordering practices exceed anthropocentric framings because they also enrol non-humans, devices, practices, assumptions, and imaginations (Smith, 2009). Thus, discretionary sovereign decisions are assemblages that emerge from the interactions of border guards, procedures, detection devices, sniffer dogs, environmental conditions, screens and readings, suspicions, mobilities, imaginations, and experiences (Connolly, 2010: 179; Deleuze and Parnet, 2007). Moreover, these decisions occur continually and simultaneously at multiple sites and scales. Drawing upon recent conversations between security studies, STS, Actor-Network Theory (ANT), and Post-Humanism, we do not seek to simply ‘apply’ STS approaches to issues of border security studies; STS scholars continually caution against ‘application’ (Acuto and Curtis, 2013; Adey and Anderson, 2012; Aradau, 2010; Bellanova and González Fuster, 2013; Salter, 2015a, 2015b; Walters, 2014). Rather, we engage the strange yet mundane topologies of the intersections between science and security that unsettle our familiar assumptions about both fields. We do not seek to merely identify fruitful correspondences or controversies between these two fields, but to engage the space that emerges by working between them.
Laboratories are assemblages of people, expertise, instruments, materials, routines, etc. They are constructed sites in which the work of controlling variables and reducing ‘noise’ is done in order to detect and identify entities’ actions and to act or enable others to act upon them. They are never merely the secluded spaces of the production of scientific knowledge, but rather are both the sites and the sets of productive forces that make the construction of reality possible (Latour and Woolgar, 1979). Our research shows that the distinction between device/artefact and laboratory (site and forces) is a blurred one, better grasped as the productive stabilization of different sites and times, and different actants and forces, into a particular co-functioning. For Callon et al. ‘the laboratory is a machine for producing inscriptions, for making possible their discussion, interpretation, and mobilization’ (2001: 51). Laboratories transform ‘teeming, dispersed crowds into … traces that can be taken in at a glance’ (2001: 51). Here, laboratory actions closely resemble bordering actions to be performed by the Handhold device: transforming chaotic crowds into rapidly actionable traces and indicators that enable and shape future actions. Our argument is that the politics of bordering are distributed and constituted in the earlier transformations of ‘crowds’ into traces, traces into inscriptions, and inscriptions into decisions. The development of a security device, then, is political, and contains multiple political actions. Rather than separate worlds of laboratory sites and border sites we find connections and flows, we find political forces active in the laboratory, bordering decisions enacted in the laboratory, and anticipations of laboratory actions at the border.
To draw out our claim that discretionary sovereign decisions include the antecedent register of laboratories of device development, we use Michel Callon’s (1986) concept of ‘translation’ that articulates how multiple things (goals, materials, devices, etc.) are brought into relation with one another, and how they are modified through this process (see also Freeman, 2009). One of the most useful schematics of translation is set out by Callon, Pierre Lascoumes and Yannick Barthe (2001) to analyse the political implications of an increasingly specialised and socio-technical world where the controversies of political life both entangle and redefine the boundary of the social and the technical. For them, the world does not divide neatly into separate technical and political issues; rather ‘[t]o declare an issue is technical is effectively to remove it from the influence of public debate; on the other hand, to recognize its social dimension restores its chance of being discussed in political arenas’ (Callon et al., 2001: 25).
Foregrounding this social dimension means understanding that laboratory work does not exist in isolation, but rather develops through ‘ceaseless movement … permanent exchanges between specialists and the world that surrounds them … the laboratory is only one element in a larger set-up, one stage in a long succession of comings and goings’ (Callon et al., 2001: 47–48). In theorizing these ‘comings and goings’ of science, Callon et al. (2001) present translation as an interconnected process with three distinct stages. These are understood as Translation 1, which represents the reduction of the macrocosm to the microcosm or, more simply put, reduces the real world to the lab, ‘translating’ reality into simplified forms so that it can then be studied and manipulated. This manipulation is Translation 2, as a group composed of human actors (scientists, technologists) and, crucially, also their instruments and devices, explores these simplified objects, testing them in ‘laboratory conditions’ to have real, if contained, effects. Translation 3 is the return of these objects to the ‘real world’ – but also the reconfiguration of the real world to reflect laboratory conditions and so to produce analogous effects to those experienced in the lab. Callon et al. (2001: 65) elaborate on this through their concept of ‘laboratization’, which suggests that what is achieved in one enacted space (Translation 2) must be retained in its movement to another (Translation 3). For Callon et al. this is a rather unidirectional process of transformation: ‘For the world to behave as in the research laboratory… we simply have to transform the world so that at every strategic point a “replica” of the laboratory, the site where we can control the phenomena studied, is placed’ (2001: 65). Not only does this ensure a degree of stability of the assemblage as it moves/translates, it also ensures that the world ‘behave[s] as in the research laboratory’.
The concept of translation is useful in our analysis of Handhold in two ways: first, it effectively dissolves prevailing distinctions between the social and the technical; and second, it clearly reveals the terrain of the political (Callon, 1986; Callon et al., 2001; Knorr-Cetina, 1999; Latour, 1987). Here, we echo Callon et al.’s (2001: 4) injunction: ‘Politics is the art of dealing with disagreements, conflicts, and oppositions; why not bring them out, encourage them, and multiply them, for that is how unforeseen paths are opened up and possibilities increased.’ While these claims are seductive, our analysis of Handhold shows this model of ‘secluded research’ to be too linear, partly because it assumes that Translation 3 (from the laboratory back out into the ‘real’ world of political controversies) is the most politically important stage in the process. This privileging of the final stage of translation emerges, in part, from the security clearances and highly expert knowledge required in the laboratory space of Translation 2, which render it seemingly closed to public engagement. This depoliticizing lack of transparency orients public debate and scrutiny towards Translation 3, when the technology is already at work ‘in the field.’ The result of this closure is a limited and incomplete understanding of the politics of security technology. While we acknowledge that the material border is the most obvious site at which the assemblage of sovereign decision making (especially the border guards’ discretion) is clarified, reduced, and embedded within technological devices, our research showed similar processes occurring much earlier in Handhold’s development. Not only do the first and second translations determine the conditions of possibility for Translation 3, but future projections of what will happen at Translation 3 are constantly (re)circulating and making demands during Translations 1 and 2, disrupting the linearity of the process. 
In the laboratization of the Handhold project, which ensures that the world of border guards, migrants, and moving things is modified and improved, the political world of the border must be anticipated and pre-emptively designed into the device. Here, the distinctiveness of Translations 1, 2, and 3 becomes blurred and the worlds of science and security emerge as co-constitutive and irreversibly entangled.
The concept of translation has been acknowledged by critical border studies scholars who argue, for example, that border guards ‘translate’ rather than simply ‘apply’ laws and policies; that is, they make discretionary decisions – they interpret, alter, and renegotiate given security parameters (Côte-Boucher et al., 2014). We develop this further, enrolling the antecedent register of device development itself: the lively and multiple negotiations at the site of the border are also shaped by earlier translations in the laboratory that transform threats, goals, and geopolitical forces into specific engineering tasks. Target substances are turned into numbers and graphs, and suspicious packages become digitized alerts on a screen, forms that are ‘translated’ at the border, certainly, but which are first shaped by specific, material, and translated decisions in the laboratory that set the conditions of this border translation. By unpacking the heterogeneous operations of a technology as it is being developed rather than retrospectively as it is being used, we seek to redeploy the politicization of borders (e.g. the public debates on privacy and surveillance; the everyday practices of border guards; the unjust renditions of the security apparatus) into the antecedent register of design, experimentation, and development. In short, we move backwards from the border into the laboratory, revealing multiple origin points for the political contestations that arise from the use of already-complete border security technologies.
Negotiating threats: Interpretation, pragmatism, and innovation
There are already radio and nuclear detectors out there that are very good. Handhold is not going to better that technology, so why bother? … There is no advantage to having one piece of kit to detect multiple things. We like dedicated equipment that will detect one thing. (Handhold, 2013b)
Critical scholars in IR have done excellent work showing how dominant approaches in the discipline have always been underscored by the fantasy of total security in which sovereign borders can be finally and totally protected against designated threats (Dillon, 1996; Huysmans, 2006; Walker, 1993). Such a fantasy can only be sustained by continual performances of sovereignty in sites such as borders where accurate decisions about who/what is included or excluded must be made over and over again. While each of these sovereign decisions is supposed to be a clear demonstration of robust security, the mundane enactment of such decisions – especially when delegated to devices – always contains misfires, surprises, failures, and silences. As Amoore (2013: 107) explains, ‘[t]he surface effect of locating technologically a mobile person or object contains a multitude of conversations, mistakes, assumptions, and whims on the part of other subjects and objects’. Like many scholars in Critical Border Studies, we are interested in the role technology has in enabling the fantasy of total security to flourish: how it furnishes those ideational dreams with certainty, accuracy, reliability, efficiency, and impartiality. At the heart of the bordering process, then, is an important structuring tension between the belief that technology can provide total security on the one hand, and the everyday practices of human/technological assemblages that are riven with competing agendas, unexpected failures, tangents, and miscommunications, on the other.
This tension manifests itself in the development of Handhold in those moments where the purpose of shoring up sovereign decisions with scientific certainty is belied by the mundane, everyday practices of scientists, researchers, and engineers in the laboratory. Our observations are that the development of Handhold is not a straightforward or even rational mobilization of techno-science in the service of sovereign power. Rather, what we have discovered is a series of decisions, directions, and collaborations that owe more to pragmatism, efficiency, ambition, and haphazard consensus-building than they do to an overarching goal of protecting borders from palpable threats. Our intention here is to prise open the articulation of sovereign decisions within the laboratory to show how the fantasy of total security is unworked even before its devices are deployed.
The aporia at the heart of this fantasy has its origins in Translation 1, when the political controversy of borders – specifically the ability of European borders to halt the circulation of threatening substances – gathered enough momentum, resources, and publicity to demand the shift from the ‘real world’ into the laboratory. With respect to Handhold, this was initiated by a large EU FP7 grant which pre-determined the most dangerous substances (namely, CBRNE) and mobilized expertise within these parameters to produce a portable device to be used at European borders (Handhold, 2011; Handhold, 2013j). 1 The EU’s clarity about the named threat of CBRNE fulfils the fantasy structure because it suggests a simple solution to the endemic problem of insecurity: if scientists and engineers simply develop the best means to detect these substances, and future funding concentrates on response and containment, then our borders will be safe.
What interests us in the circulations between Translations 1 and 2 are the negotiations around the bid call itself, which was described by one end-user as a ‘dog’s dinner’ (Handhold, 2013b). Amid the competing forces that constitute the specific laboratory actions of Handhold, discretionary judgements made by scientists and engineers reshaped the parameters of the bid call throughout the early stages of the project. For example, the central aims of the project were understood differently by project partners and end-users, and then were ignored, reimagined, transposed, and improvised during routine scientific experiments in the laboratory such that the final composition of the device is not precisely as described either by the call, or by the expectations of the Handhold scientists or end-users. It is instead emergent, a hybrid, translated device that is shaped through circulating relations, assumptions, and contestations throughout the development process.
Like all contemporary funded research, the Handhold project involves end-users at every stage. In this case, security agents, border guards, and customs agents work in collaboration with scientists and engineers (both public and private) to bring this integrated device to its prototype stage, and to imagine a future for it in the commercial world. Collectively, Handhold participants navigated a fundamental tension between the way that scientists and engineers interpreted the EU’s privileging of CBRNE threats, and the end-users’ very different conception of threat based more on routine criminality than catastrophic danger (i.e. the less dangerous substances that end-users actually see coming across their borders). For example, one end-user was more interested in having Handhold detect diesel, tobacco, and cash, whereas another was much more interested in using it to detect a particularly popular narcotic (Handhold, 2013b; Handhold, 2014b). Because none of these substances pose a direct and immediate threat to any nation in the way that uranium might, they were not privileged in the bid call, nor were they prioritized by the scientists and engineers building the sensors for the Handhold device. However, in negotiating these competing understandings of threat in the early part of project design, one engineer wondered whether Handhold could potentially become a more versatile device in which users could insert ‘cards’ with inbuilt sensors to detect anything they wanted (e.g. narcotics one day, chemical weapons the next) (Handhold, 2013f, 2014a). Moreover, such was the intensity of one end-user’s desire for a sensor to detect a specific narcotic substance that they repeatedly brought it up at consortium meetings until halfway through the project, even when it was clear that all the partners had reached a consensus over which particular materials the device would detect (Handhold, 2014d).
While end-users tried to insert the daily practices and professional priorities of border guards into the laboratory space of Handhold, these did not amount to deterministic sovereign demands upon the scientists and engineers. Rather, these claims were translated and renegotiated in technical meetings and collaborative spaces in which other forces (e.g. routine engineering and scientific practices, institutional pressures, anticipated markets, particular expertise) also modified the process of technology development. Nevertheless, while they were not actively constructing the device itself, the end-users were ‘built-in’ to its development: their routines and capacities were directly inscribed and prescribed in the emerging device (Latour, 2008: 160). While they acted at a distance by bringing insights from the border itself (Translation 3) into laboratory space (Translation 2), their views were absolutely central to the imagination, construction, and adaptation of the material device itself. Collaborations between scientists, engineers, and end-users meant that the role of ‘petty sovereign’ enacting discretionary decisions at the border was mobilized in the laboratory as the device was being developed.
Decisions about what was actually to be detected by Handhold emerged in the routine daily practices and decisions of the scientists and engineers involved. They were seldom governed by the lofty goal of protecting the sovereign borders of Europe, but rather by a widespread and well-understood discourse of pragmatism. All participants were well versed in the huge gaps between what the bid call required (i.e. effective, efficient, and portable detection of CBRNE substances) and what was scientifically possible. They inhabited that misalignment quite comfortably as they pursued their daily research, which was revealed as surprisingly random and idiosyncratic in nature. All Handhold partners worked professionally towards the stated aims of the project, but the route to this goal was shaped and transformed by the manner in which scientists and engineers trafficked between their multiple scientific projects (while also mobilizing expertise and learning across projects), negotiated their career ambitions, and developed unexpected intellectual interests. Those working in publicly funded universities negotiated their commitment to Handhold with other research that was considered more pressing, more prestigious, and more intellectually challenging. Often, they spoke about Handhold as slightly tangential to their central focus; that is, they were perfectly capable of achieving the necessary science, but it was not enabling them to pursue their ‘real’ research interests, which were represented as more innovative (Handhold, 2013c). In some cases, this positioning of Handhold as rather ‘mundane science’ may account for the large number of Doctoral and Post-Doctoral researchers enrolled onto the project to manage its day-to-day development, while the more senior figures are freed up to work on more innovative aspects of the device (or indeed, on other prestigious projects). 
Innovations in technique, method, and direction continue to be the primary driver for academic publishing plans. Conversely, partners working in Small and Medium Enterprises (SMEs) framed their participation in Handhold primarily through future commercialization plans. For these scientists and engineers, the project’s purpose of protecting Europe’s borders from CBRNE is always handcuffed to the pressures and forces of the market for security technology, such as the overall cost, the size and workability of the device, competing products, and the potential profit margins. 2
During the initial phase of development in which trial, error, experimentation, and testing were the norm, scientists and engineers were not overly constrained by their collective decision to focus on CBRNE materials but were driven instead by a very simple desire to produce something – a sensor, a programme, a prototype – that worked. With respect to developing sensors, this often involved ‘practising’ (testing, adapting, re-directing) with substances that behaved like CBRNE for the purposes of detection, but were not ‘the real thing.’ This practice is not surprising in itself; given the heavy restrictions placed on CBRNE substances, scientists and engineers routinely work on more available substances, such as ‘synthetic’ versions of banned narcotics, during research. Importantly, this trafficking between substances is not governed by the consensus that CBRNE are the most dangerous things crossing borders, but rather by much more pragmatic concerns about existing expertise, state regulations, health and safety procedures, ethical parameters, and cost. For example, one of the project partners has a wealth of expertise about the detection of various drugs and toxins; the team has a fully equipped laboratory that meets all health and safety regulations, as well as resident expertise and infrastructure to negotiate the regulatory context of acquiring illegal and dangerous substances for the purposes of research. While Handhold is certainly an opportunity to build on existing expertise, the scientists in this laboratory were more explicit about using the project as an opportunity to develop additional research skills in methods of detection, such as surface chemistry. In this sense they were less concerned about what their sensors detected, and more concerned with developing new methods of detection that would, in the future, expand their institutional research horizon (Handhold, 2013d).
Throughout all stages of Handhold’s development, we witnessed complex negotiations over how to interpret the bid itself and how to transpose some kind of interpretive consensus into workable science. These negotiations did not cease once funding was acquired, end-users were enrolled, and research had begun – quite the opposite. Rather, the same tensions kept emerging, for example between end-user desires and scientific preferences, which troubled the consensus reached between all project partners. What this suggests is that the ‘multitude of conversations, mistakes, assumptions, and whims’ that shape bordering practices are also operating in the antecedent register in ways that un-work the certainty of both scientific claims and sovereign decision-making (Amoore, 2013: 107). It is neither the fantasy of total security nor the imagination of future deployment that centrally drives these conversations. Rather, it is a complex multi-directional contestation that enrols several subject positions, material substances, skill-sets, institutional pressures, governing forces, and career trajectories into micro-political decision-making about the device itself. These decisions are not contained within the ‘secluded research’ of Translation 2, but are instead infected by lingering dissatisfactions over the bid call itself, and are saturated with the multiple future imaginings of end-use, and end-users.
Assembling time, detection, and decision
The only way to kind of be sure was essentially to get on and to start trying it … doing experiments on the original device … ‘this is how long it’s going to take’… ‘that’s too long’
So, it is the time it takes to process that is the key?
Exactly … one of the first things is to draw up a list of requirements, and that involves a lot of contact with the end-users. And in there are things like how long you are prepared to wait to get a reading. It’s not exact but it gave us an idea that it needs to be sub-minute for a sensor reading. I mean if bags are going past, for the sake of argument, and it tells you half an hour ago there was a bag went past that had something in it, that’s no use. (Handhold, 2013e)
Despite acknowledging the impossibility of total security and the surprises, detours, mistakes, and misfires that underscore contemporary bordering practices, technology remains privileged in proposed solutions for a central reason: it is faster, more reliable, and more accurate than human practice. However, as Amoore (2013) argues, this reliance on technology for border security introduces an important structuring tension between (a) the complex temporality of flows (of people, things, and ideas) across borders, and (b) the time needed for the technology to produce data and decisions in the name of security or revenue. Bordering action, then, is both a temporally emergent agency (Pickering, 2005) and a matter of making these numerous distinct temporalities compatible. This desire to ‘make security quick’ involves what some Critical Border Studies scholars have described as practices which ‘concertina time’ (Parker and Vaughan-Williams, 2012: 730). We see this most clearly in the way that the instantaneous alerting produced by a portable detection technology (i.e. the milliseconds it takes to produce a particular sensor reading) is made to fit into a wider pre-emptive temporality of forewarning geared towards potentiality and probability.
Certainly border security demands instantaneous information from the technologies it enrols, and customs officials rely on technological devices to provide a rapidity that far exceeds human capacity. For example, US Customs and Border Protection (CBP) understand certain security technologies as ‘force multipliers’ enabling them to ‘work smarter and faster in detecting contraband while expediting legitimate trade and travel’ (US CBP, 2013: 1). What our research of Handhold demonstrates is that the temporality of technologically enhanced detection – the precise moment that enables the sovereign decision of what is permitted to enter and what is not – is a heterogeneous and multi-layered event in which an ‘instant’ has multiple durations. This is not a single moment of decision enacted by humans working in concert with increasingly autonomous security technologies; rather, this is a prolonged and textured experience that is constituted by multiple, competing, and juxtaposed temporalities. In short, the apparently instantaneous moment of detection is actually an assemblage of multiple distributed moments of decision.
Two temporal goals arise in the call to which Handhold responds: fast (i.e. instantaneous) and parallel (i.e. simultaneous) detection and identification. It states that ‘the “mechanized dog” should be able to detect in parallel a variety of possible illicit elements, with reliability, high speed of detection and identification, allowing fast threat assessment’ (European Commission, 2010: 24–25). While parallel or simultaneous detection is produced by the integration of multiple sensor components in the blue box, it is the goal of instantaneity that reveals most clearly the multiplicity of actions and decisions involved in producing fast detection. How fast is possible? How fast is fast enough? The answers to these questions are negotiated and renegotiated throughout the technological development process in similar ways to the selection of substances and methods. The funding call merely demanded ‘high speed’ and ‘fast’ action, and various consultations with end-users reinforced the need for speed. Handhold scientists and engineers developed more specific understandings of both time and speed, which were formed and negotiated through the assembled relations with their equipment, substances, and each other.
In building a short and coherent time for detection, scientists and engineers enrol practices from multiple scales and sites. For example, a wide range of international standards for Radiation and Nuclear detection exist, including some specifically for handheld devices. On the Handhold project, these standards informed but did not impose a target on researchers. Rather, scientists and engineers made a specific choice as to which of the numerous standards to adopt as a target, a choice driven by the dynamics of possibility, innovation, and curiosity that we saw in the selection of substances. In this case, they selected the global IEC (International Electrotechnical Commission) standards recently adopted by the EU rather than, for instance, US police standards. In other areas (e.g. Chemical, Biological and Explosives), target detection times did not draw on specific international standards, but were negotiated between the Handhold partners themselves. In these cases, ‘fast’ is a matter decided by, and negotiated between, scientists, engineers, and end-users with only implicit acknowledgement of international standards (Handhold, 2013g, 2014d).
The way in which Handhold engineers construct the temporality of detection and bordering decisions is also an emergent effect negotiated by humans in congregation with materials. Each sensor technology brings its own temporality of detection to the device, and these multiple temporalities arise from the interaction of specific target substances (e.g. biomatter) with aligned detection devices (e.g. surface chemistry). The sensors in Handhold do not directly detect the target substance; rather, they detect and identify that substance on the basis of its action – the difference it makes to something else. In other words, the sensors measure and compare reactions to substances such as light or other chemicals (Handhold, 2013j, 2013k, 2014c, 2014d). These reactions have different temporalities. The reactions detected by radiation, nuclear, chemical, and explosive sensors, indicated by infrared and light technologies, produce detectable changes in microseconds (Handhold, 2013j). Biomatter, on the other hand, is stubbornly slow as it interacts with ‘biorecognition elements’ (i.e. antibodies), which may take many minutes to produce a reaction (Handhold, 2013j). Given these multiple reaction times, the Handhold team had to actively construct a ‘Total Detection Time’ out of the numerous temporalities generated (Handhold, 2013m, 2013d, 2014d, 2013f, 2013l, 2014b, 2014c, 2014d). These differences in reaction time, in turn, created new challenges for those working on how to display the results. For example, the team had to enrol and test recently developed processor chips to make sure they could manage data emerging at such different speeds (Handhold, 2013e, 2013h, 2013i, 2013f, 2013l, 2014d).
What we discovered in our observations of Handhold were complex and non-linear temporalities that were stitched together through negotiation across the entire collective as partners tried to balance the competing forces of international standards, common scientific practices, and the multiple substances, reactions, and calculations constituting the device.
This active construction of temporality engages what Pickering (1995: 20) refers to as the ‘tuning’ practices of technology that always enrol human and material agencies into assemblages that modify even the most instrumental programmes (Latour, 1994, 1999). As Pickering (1995) suggests, the slow, circuitous and heterogeneous development of instantaneous detection is itself a ‘dance of agency’ in which the nature of decision is not that of sovereign/human intentionality nor non-human/technological determinism. Rather, there is a ‘dialectic of resistance and accommodation’ (1995: 22) in which the intentional actions of scientists and the agency of materials modulate and modify each other. It is precisely by opening up the moment of detection within the development of Handhold that we are able to see more clearly the complex and emergent choreography between human actors and the material technologies they work with:
As active, intentional beings, scientists tentatively construct some new machine. They then adopt a passive role, monitoring the performance of the machine to see whatever capture of material agency it might effect. Symmetrically, this period of human passivity is the period in which material agency actively manifests itself. Does the machine perform as intended? Has an intended capture of agency been effected? Typically the answer is no, in which case the response is another reversal of roles: human agency is once more active in a revision of modelling vectors, followed by another bout of human passivity and material performance and so on. (Pickering, 1995: 21–22)
For us, this ‘dance of agency’ is not simply the usual daily practice of laboratory science and technology development because the future border emerges within the laboratory and presents itself as another dance partner in the desire to compress detection time.
All of the tuning of detection time was governed in advance by the idea that border guards and customs agents require quick answers to their two primary questions – Is something present (detection)? If so, what is it (identification)? – in order to move more quickly into practices of dispersal, containment, and protection. However, the target time varied according to the specific bordering action required: practices of detection and identification themselves have different temporalities (detection in seconds, identification in minutes), and because each substance produces different durations of detection and identification, there are always multiple answers to the imagined questions of the border guard. For example, if neutron radiation is detected, the relative lack of ambiguity and imminent danger will yield prompt action, whereas the detection of an unspecific ‘chemical’ or ‘biological’ substance is not itself a cause for alarm but instead requires a further step of identification to determine whether it is potentially harmful (e.g. ricin) or just illicit (e.g. cocaine) – a process that is seen as more complicated for biological and chemical substances than for radiation (Weerth, 2009). These different processing times must also be attuned to the specific flows being acted upon: for example, cargo containers spend long periods of time behind secure borders before they are cleared for movement, which provides more opportunities for access, testing, and analysis than the speedier trajectories of passenger baggage or postal deliveries.
In all three translations the notion of speed remained intact, but a consensus on speed (i.e. how fast is fast enough) was always emerging and changing as engineers, end-users, instruments, substances, atmospheres, and imaginations combined their initially incompatible temporalities. In revealing the multiple and complex temporalities at work in the elongated moment(s) of detection, decisions no longer appear as contained moments of human intentional action at the border. Our observations of Handhold provide greater texture and granularity to the exact moment of the sovereign decision to permit or exclude, and as such foreground how that decision is constituted by multiple temporalities and enacted by an assemblage of human actors and non-human materials (e.g. technologies; substances; data) at multiple sites. While opening up that moment of decision enables us to trace the force of sovereign power back into laboratory spaces, what needs further explanation is how that moment is also powerfully constituted by future-oriented fantasies, imaginations, and assumptions.
Imagining the border: Phantoms, assemblages, and decisions
Engineer [describing an early observation of border practices at a European port]: At six in the morning on a cold, spring, dark morning watching the dawn come up you realize just, you know, how this device has got to work. Because they had beautiful big dogs … just placidly sitting there wagging their tails … they [customs officials] made the people walk through into the reception area of the port. The dog just goes forward, just ever so gently on, on the leg – just brushes up against him, and you realize that you’ve got to design a box of electronics that not only is as good as the dog but interacts with people in a way, because no box of electronics is going to have that kind of general interaction. (Handhold, 2014a)
We have revealed thus far that the scientific practices of bordering within the laboratory are framed by competing interpretations of the purpose of the project (e.g. between end-users and scientists), and by the competing temporalities of the instruments and the substances under scrutiny. A final dimension that disrupts the secluded nature of Translation 2 is how particular actors are re-enrolled into the assemblage; more specifically, how the practices of actually-present human actors within the laboratory are governed by the capacities, needs, and desires of particular phantom agents: the sniffer dog, and the border guard. This suggests that the assemblages constituting Handhold are not simply composed of human and non-human/technical nodes, but also that the actors are simultaneously actually present and immanent in the form of phantoms. Moreover, the enrolment of phantasmic actors into the assemblage further relocates the sovereign decisions of risk assessment and management away from the border and border guards, and into the laboratory itself.
The founding phantoms of Handhold are the sniffer dogs that the device is designed to replace, or at least to complement, as they form the baseline against which all functionality is to be compared. This canine spectre haunted Handhold’s multiple and sometimes competing interpretations of the EU’s desires for CBRNE detection. When asked about the purpose of Handhold in general, almost all participants were clear that whatever device was developed, it had to do more than a sniffer dog could do, and do it better and more efficiently. The project partners were clear in their preference for techno-science over canine detection: dogs get bored after 20 minutes of detection work, whereas security devices can go for as long as their batteries let them; the labour-intensive and costly training period for a single sniffer dog is replaced by a one-off training session for large numbers of security agents; and while a dog can be trained to detect one or two substances, a portable device can detect multiple substances at once. The sniffer dog came to operate as both an aspiration and a regulatory ideal as scientists and engineers continually gauged their progress through constant comparison between the Handhold device and the dog. For example, one of the Handhold partners discovered that dogs can detect substances at one part per billion rather than many parts per million, and this quickly became the agreed target (Handhold, 2013i). Moreover, during the prototype testing, as the scientists and engineers asked end-users and EU representatives whether they wanted more or less sensitivity with regard to a particular sensor, the criterion for judgement was whether this was close to what a dog could detect (Handhold, 2014d). At many points throughout Handhold’s development, the device’s ability to detect more substances and last for longer than the sniffer dog energized scientists and engineers, and gave the project momentum (Handhold, 2013f, 2013i, 2013j, 2013l, 2014d, 2014e).
It became clear that Handhold partners did not develop a unified sense of their progress with reference to the EU’s remit of CBRNE, but rather charted their success with reference to a widely shared imagination of the capabilities and behaviours of the sniffer dog.
While the omnipresent sniffer dog stands as a key comparator for the device, the most palpable phantom in the laboratory with regard to its form and composition is the border guard, or primary ‘user’. The imagined needs, desires, and habits of these individuals are constantly invoked in the laboratory in ways that have concrete impacts on the design of the device itself. As a partner in the consortium, Handhold’s end-user group is used nominally to ensure that the work of the project is ‘grounded in and therefore valuable to the practical day-to-day realities of detection’ (Handhold, 2013j). While sometimes physically present as an end-user representative, the phantom border guard emerges most frequently within the imaginations of scientists and engineers as their assumptions about the ‘day-to-day realities of detection’ shape the material composition of the device. These imaginations range in complexity, but are often expressed with reference to the mundane materiality of the device itself. Throughout our research we encountered highly sophisticated and intense conversations about, for example, the ease and frequency of battery changes, the size and weight of the actual device, and the physical placement of screens, read-outs, and power sources (Handhold, 2013f, 2013l). We saw very clearly how the ‘real world’ of an imagined border populated by multiple border guards (Translation 3) loops back through Translation 2 and is inscribed directly into the device in a way that ‘scripts’ future bordering practices (Akrich, 1992).
The ‘end-user’ continually disrupts any unidirectional understanding of translation by introducing questions about the actual physical use of the device on a daily basis. While these questions involve future-oriented and speculative scenarios, they are no less real in terms of their impact on the design and development of the device. Scientists and engineers were constantly trying to ensure that the device was robust enough to avoid catastrophic breakage when, for example, it was ‘tossed onto a desk’ by an imagined border guard, or tilted the wrong way in an effort to reach difficult or enclosed spaces (this latter speculation produced a demand for tilt alarms to avoid spillage within the device’s liquid interfaces) (Handhold, 2013f, 2013l). While the device was adapted to accommodate the ‘rough’ and ‘careless’ behaviour of imagined border guards, its user interface was also adapted in response to the environmental conditions in which bordering takes place. Because the device is to be used outside in inclement weather, including rain or cold, it was observed that guards would wish to keep their gloves on (Handhold, 2013f, 2013l). The original plan for a touch-screen device was therefore discarded, as it cannot be operated with gloved fingers. Moreover, the need for a wireless connection to a smart phone application, also part of the initial plan, was revisited when its actual utility for guards was questioned. In these ways, phantom end-users were powerful in shaping the actual form, design, and components of the device itself. To be sure, each of these scenarios of future use by border guards was an exercise in imagination that required mental constructions of a ‘typical’ user and a ‘typical’ scenario, but the effects of these constructions were both real and determinative of decision-making.
The complexity of that force emerged in the efforts to make Handhold a truly international bordering device that can operate in multi-jurisdictional and multi-state scenarios. There are different detection means and procedures across Europe, and the challenge for security actors is to accommodate different practices, standards, and equipment (NATO, 2007). Within Handhold, these accommodations were revealed in partner efforts to create a common language to understand the results produced by the device in multiple countries, borders, and professions. Once again, imagined border guards loomed large, especially their capacity to accomplish technical tasks: Should results be presented in pictorial, graph or number form? Should alarms be accompanied by sound or flashing lights, depending on the severity and immediacy of presumed danger (Handhold, 2013f, 2013l, 2014d, 2014e)? These anticipations about the skill-sets and capacities of border guards range from questions about their basic ability to use such a device through to more specialized trouble-shooting and problem solving (Handhold, 2014d, 2014e). For example, if the system needs to be flushed and cleaned, how can this be as simple and non-technical as possible, and also portable so it can occur regularly outside of the sterile and equipped environment of the laboratory, in order that border guards themselves are able to accomplish such mundane tasks (Handhold, 2013f, 2014d, 2014e)? Here, too, the environmental and atmospheric context of use is crucial: influences such as background radiation have to be measured and accommodated in order to set the parameters for false positives and accurate readings (Handhold, 2014d), and accounting for such interference should also be as non-technical and straightforward as possible.
Enrolling the phantom border guard in the development of Handhold is a process of laboratizing the border – of placing a replica of the laboratory at the border itself. This introduces difficult questions about the autonomy of the device itself in relation to who makes the decision (i.e. the increasingly autonomous device or the fallible and phantasmic border guard) and where the decision is taken (i.e. at the border when the device is in use, or in the laboratory when scientists and engineers are adapting the device). When we are attentive to the excesses of Translation 2 – how the laboratory is constituted by imagined actors and future scenarios – we see that solutions become most apparent when decisions are removed from the human border guard and located in the increased autonomy of the device. Central, here, is the adaptability of the device – its ability to accommodate the needs and desires of the guards using it regardless of where they are located, or how much specific technical knowledge they may have. Increasing the autonomy of the device means reducing, as far as possible, the role of the border guard by automating and miniaturizing the laboratories of Translation 2 and extending them to the border in a way that pre-emptively captures – and laboratizes – the space of Translation 3. This capture does not determine usage or appropriation of the product (as discussed in the introduction to this special issue); it does not remove the discretion of border guards who may ignore a reading, act in ways other than what the reading might suggest, damage the device, or reject the product altogether. Rather, it reveals how Translation 2 extends itself into the future by anticipating and adapting to usage and appropriation via the quite unscientific means of bringing it into the laboratory through imagined phantoms and scenarios.
A great deal of bordering and laboratizing is (at the time of writing) still ongoing within the dispersed Handhold collective. The assemblage that comprises the scientist-device-guard spans geographical space, but nevertheless exists in a tight network that is mediated by the device itself. Our point is not that there will be no decisions left to be made at the border, but rather that those decisions are configured (informed, sped up, and constrained or enabled) as the conditions of the laboratory are built into the device in advance of its deployment at the border. This suggests that disaggregated sovereign decisions are enacted much further back in the process of research and development than initially understood. With respect to Handhold, our point is that every decision on the way to producing a coherent little blue box that informs and inscribes bordering actions has within it aspects of sovereign power.
Conclusions
Our analysis of Handhold seeks to contribute to ongoing debates about the complex relations between scientific research and political life in the area of security. Further, we want to intervene in critical border studies research with the claim that we cannot fully understand – and therefore question or resist – the operation of security technologies at the border without understanding how such devices were funded, designed, crafted, adapted, and tested before being deployed. Security is not simply governed at the moment of border crossing, at the site of the border itself, or in the moment of encounter between security agents and travellers; rather, sovereign decisions about who/what can enter and who/what cannot enter are constituted long before security devices are purchased by governments and used by customs agents and border guards.
Callon et al. write that the difference between the before and after of ‘science’ is the sudden proliferation of laboratories along with the techniques and entities they mobilize and ‘the interests and projects they authorize’ (2001: 66). As we have shown, Handhold illustrates how these interests and projects are authorized in the laboratory during the work of ‘making silent entities speak’ (Callon et al., 2001: 54). We have seen how the silent entities of CBRNE substances are made to speak first in the Handhold laboratories so that those laboratories can be rendered stable, portable, accurate, robust, and fast, and transported to future borders where they can act as one. Handhold is about making CBRNE substances speak to border guards and enrolling funders, scientists, and engineers into the project of calibrating what these substances say and how they say it. In this way, the voices of CBRNE substances become an active part of the rituals of bordering – as lively, present, and animated as the human agents who cross and patrol them (Salter, 2012).
It is within Handhold’s seemingly secluded assembling of imaginations, materials, and phantoms that a future border security is materialized. To adopt Pickering’s metaphor of choreography, many human and non-human, material and immaterial ‘plural forces of authorization’ (Amoore, 2013: 14–15) dance with each other as Handhold brings science and security together. Developing the Handhold device does not present a linear process of Callon et al.’s three translations, but instead a sense of emergence, non-linear resonance, endless circulation, and unexpected redirection. It shows an ‘uncertain world’ (Callon et al., 2001) in which scientists and engineers produce an equivalence between the tangible worlds of explosives, narcotics, and radiation, and the intangible worlds of phantoms, imaginations, and future scenarios. The decision of the border – the sovereign capacity to identify, control, and manage – takes place here prior to any encounter between the device/dog/border guard and the flow of packages, containers, and people. In this way, sovereign bordering takes place in the laboratory, before the laboratory takes its place at the sovereign border.
Acknowledgements
We wish to thank the scientists, engineers, and end-users of the Handhold project for their assistance and support in the conduct of our research. We would also like to thank the editors of this special issue, Julien Jeandesboz, Anthony Amicelle, and Claudia Aradau, as well as the three anonymous reviewers for their constructive and insightful comments on the earlier draft of this article.
Funding
This work was conducted as part of the TRUST: Tracing Risk and Uncertainty in Security Technology project supported by the UK Economic and Social Research Council and the UK Defence Science and Technology Laboratory (grant number ES/K011332/1). Supporting data are openly available at . The views expressed are those of the authors.
