Abstract
Digital identity systems are usually viewed as datafiers, converting the identities of human beings into machine-readable data. In this commentary I argue that such a view needs to be integrated with a platform perspective, centred on the core-complements architecture of digital identity systems, which is essential to understand their surveillance outcomes. Drawing on examples from biometric food distribution and asylum seeker databases, I suggest that surveillance is not a "dark side" of digital identity, but the dark matter of its very architecture.
Keywords
Digital identity, datafication, platforms, surveillance, development
Introduction
The term digital identity denotes the conversion of individual identities into machine-readable data, a process widely portrayed in policymaking as an enabler of development.
Countering this narrative, studies across disciplines have seen digital identity associated with erroneous exclusions of genuinely entitled users (Drèze et al., 2017; Muralidharan et al., 2020), as well as undue redirections of economic development policy (Masiero and Arvidsson, 2021). The coexistence of data-based social assistance with techniques of policing and profiling, resulting in what Iazzolino (2021) refers to as “infrastructures of compassionate repression”, has added to the same critique. The result is that a digital-identity-for-development (D4D) orthodoxy, while diffused in policymaking, is systematically questioned on empirical grounds (Beduschi, 2019; Weitzberg et al., 2021).
In portraying the digital identity orthodoxy and its problematisations, a view of digital identity systems as datafiers, converting the identities of human beings into machine-readable data, has become prevalent.
While instrumental in understanding data-induced injustice, a datafier perspective still leaves a gap: alone, it does not fully illustrate the core process through which digital identity operates. Illuminating that process is a view of digital identity schemes as platforms, built on a core identity database upon which third parties construct complements.
In this commentary I suggest that a platform view is essential to understand the workings of digital identity and, crucially, its surveillance outcomes. In surveillance studies literature, data-based profiling is teleologically linked to tracing and, in turn, to different modes of repression of the surveilled (Akbari and Gabdulhakov, 2019; Murakami Wood and Monahan, 2019). I argue that a platform perspective is key to understanding the roots of such outcomes, which stem from the core-complements architecture of digital identity systems. If this is so, the popular idea of a “dark side” of digital identity loses its meaning: rather than a “side”, it is the very architecture of digital identity platforms that enables their surveillance outcomes.
Three views of digital identity
Masiero and Shakthi (2020) put forward a taxonomy of views of digital identity. In such a taxonomy, a datafier view is juxtaposed to alternative visions centred on platforms and routes to surveillance.
Datafier view
A long-established view sees digital identity in its role as a datafier: a means to convert the identities of human beings into machine-readable data.
Platform view
Integrating the datafier view, Masiero and Arvidsson (2021) illuminate a vision centred on the platform architecture of digital identity systems. In such an architecture, a core identity database enables third parties, such as service providers and state agencies, to build complements upon it.
Surveillance view
Beyond datafier and platform perspectives, a surveillance view is centred on the profiling and policing outcomes of digital identity. Instances centre on the linking of access to digital identity data with police violence, capture and deportation (Akbari and Gabdulhakov, 2019; Murakami Wood and Monahan, 2019). A surveillance view confronts the D4D orthodoxy with the harm caused by profiling: Newell et al.'s study of the US-Mexico border (2016: 178) notes, for example, how research suggests “a causal link between the U.S. government's border control policies and rapidly increasing numbers of migrant deaths.” Unlike a platform view, whose focus is on platform architecture, a surveillance view centres on the surveillant outcomes that digital identity produces.
All three perspectives are important in conceptualising digital identity, with the platform view holding more explicative power than is commonly recognised. Below I show, through the notion of platform-mediated surveillance, how a platform view is instrumental in studying the outcomes mapped by the surveillance literature.
Platform-mediated surveillance
Strikingly, it is mostly works embracing the D4D philosophy that take a platform view of digital identity, highlighting the positive implications of platform properties for development outcomes. For example, Mukhopadhyay et al. (2019) study India's Aadhaar through the platform perspective, identifying its core-complements architecture and relating it to the platform's openness and scalability. Aadhaar's openness, it is argued, is functional in matching identities with entitlements: in turn, its scalability is essential for enabling Aadhaar-based distribution of benefits across the whole country. Based on a platform perspective, inscribed in an Information Systems (IS) research tradition, this work however omits the point of view of recipients, portraying Aadhaar-like systems as fundamental for nations “struggling to provide basic services to poor” (Mukhopadhyay et al., 2019: 437).
Problematising this view, the idea of platform-mediated surveillance relates the architecture of digital identity platforms to the surveillant outcomes they enable. A core-complements architecture is effectively capable of matching individuals not just with their entitlements, but with their records in state and international databases, whose presence deters vulnerable groups from enrolling in the schemes through which core services are accessed. Portrayed by Mukhopadhyay et al. (2019) as a way to improve the distribution of benefits through scaling, the architecture of digital identity platforms effectively enables interoperability among systems: Masiero and Arvidsson (2021) note that such an architecture produces unjust exclusions, making access to services conditional on biometric user authentication. Platforms are, in Winner's (1980) terms, artefacts that carry politics, crystallising into technology the choices of designers along with any biases implicit in them.
Research on biometrically-enabled food aid to endangered refugees offers a powerful illustration of the problem. Iazzolino's (2021) study of a Biometric Identity Management System (BIMS) for the distribution of food aid in Kakuma refugee camp, Kenya, details how refugee household heads were urged to register their fingerprints to receive food rations at the distribution points in the camp. In this case a platform architecture, centred on the biometric database as its core, made the receipt of food rations conditional on fingerprint authentication.
Integrating a substantial body of literature on technology in humanitarian action (cf. Cheesman, 2022; Martin and Taylor, 2021; Weitzberg et al., 2021), Iazzolino's work traces a clear link between the platform properties of digital identity and its surveillance implications. It is, in fact, the profiling of refugees through BIMS registration that spurs the perceived danger of policing by authorities, inducing refugees to refrain from registering or to require “compensation” for registration in the form of larger food rations (Iazzolino, 2021: 111–112). Built as a means to subordinate food access to authentication, the BIMS platform simultaneously exposes refugees to unwanted and dangerous forms of profiling, rooted in a colonial history that deeply characterises the national context (Weitzberg, 2017). Indivisible from its profiling affordances, the BIMS architecture leads refugees into a binary choice: registering, and running the risk of policing, or forgoing the essential benefits of food ration provision.
Studies of asylum seeker databases further reinforce this argument. In developing the notion of “processing alterity”, Pelizza (2020) illustrates how a 2015 shift in the Eurodac system, which uniquely identifies asylum seekers in European countries through their fingerprints, made the Eurodac database interoperable with national police authority databases across Europe. Operating on a core-complements architecture, with a core constituted by the asylum seeker database and complements enacted by third parties such as police authorities, the Eurodac system is a further exemplification of the issues that can emerge from platform-based profiling. It is, once again, the platform's architecture that enables surveillant outcomes, making it possible for national police authorities to access asylum seeker records and make decisions based on them.
Kenya's BIMS, the Eurodac system, and, more broadly, systems enabling the construction of complements on a digital identity core illustrate the essence of platform-mediated surveillance. Contesting the D4D view, such systems show how enabling third-party access to a central database entails much more than the “openness and scaling” envisaged by Mukhopadhyay et al. (2019): by enabling such access, these architectures constitute a core affordance for surveillant action. Far from being an incidental consequence of digital identification, these architectures are at the heart of surveillant action, and their study is inseparable from any recognition of the “benefits” of digital ID registration.
It should be noted that, while offering the conceptual tool to link platform architecture to its outcomes, the platform view's focus on architecture is also its main limitation. Rooted in the IS research tradition, this view's technical focus needs to be combined with attention to the effects of digital identity platforms, effects that the platform literature has only recently started exploring (Bonina et al., 2021). A view thoroughly centred on platforms, focusing just on the effectiveness of core-complements architectures, may erase the voices of beneficiaries, running the risk, as Breckenridge (2019: 6) puts it, of “ventriloquis[ing] for the poor” who are digitally identified. It is hence important to study platform properties in constant relation to their outcomes, rather than as standalone features of supposedly value-free technological artefacts.
The dark matter of digital identity
In the study of digital identity, a datafier view is essential to grasp the conversion of human beings into machine-readable data. Such a view needs, however, to be integrated with a perspective centred on platform features, where digital identity systems are viewed in their nature as platforms whose core-complements architecture generates surveillance outcomes. As illustrated in the examples of biometric food distribution and the profiling of asylum seekers, surveillance outcomes are integral to the substance of digital identity, as they are produced by the architecture of digital identity platforms. The point holds even for digital identity systems that present as decentralised: as shown in Cheesman (2022), such systems consist of technologies that, beyond the promotion of “self-sovereign identity”, effectively crystallise existing logics of control over beneficiary populations.
Crucially, this point puts into question the narrative, diffused especially in the Information Systems (IS) literature, that speaks of undue surveillance as a “dark side” of digital identity platforms. The deep entrenchment of surveillance in digital ID platforms' architecture leads one to argue that a dark “side” is a partial, and misleading, representation of the phenomenon: what is dark here, where “dark” means openly detrimental to users of digital identification, is the inner matter of the platforms themselves, whose architecture is inseparable from the surveillance outcomes produced. Rather than with a dark side, by its nature occasional and incidental, we are confronted with the dark matter of digital identity platforms.
This point intersects closely with risk-based approaches adopted in the human rights discourse (cf. Taylor et al., 2009). In a recent report, Cioffi et al. (2022) take a global perspective on digital identity systems, making the point that such systems result in large-scale human rights violations. Digital ID advocates, Cioffi et al. (2022) note, state that user satisfaction with digital identity systems is high: this leads, by implication, to the argument that opportunities outpace risks. But the same report shows the flaws of such an argument, noting that, while ample evidence exists of harm caused by digital identity systems, benefits are “ill-defined and poorly documented” (Cioffi et al., 2022: 8).
This vision needs to be seen in context, for example that of studies arguing that digitally-enabled food security results in greater offtake of foodgrains (Muralidharan et al., 2020). Without questioning these results, here I make the point that looking at digital identity as a mere datafier may lead research to miss a substantial component of what digital identity effectively does. The action of digital identity schemes is predicated on platform features that need to be taken into consideration, to understand the actual reach of the interoperability-induced harm to which digital identification can lead. All in all, unpacking the platform architecture of digital ID systems illuminates problems that a datafier view alone does not contemplate, and whose consequences are powerfully articulated in the surveillance studies literature.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
