Abstract
Technological interventions in aid are both complex and deeply ambiguous. Nonetheless, many contemporary controversies surrounding humanitarian data reflect underlying tensions that stem from competing claims over sovereignty. That is, where disputes arise in humanitarian contexts following unauthorized access to data by a third party, the unconsented sharing of humanitarian data, or the imposition of interoperability requirements on the technical systems of humanitarian agencies, these disputes regularly exhibit deeper concerns about power and authority that go beyond traditional privacy or data protection claims. This article explains the interpretive value of such a sovereignty lens on humanitarian data. To do so, it first provides an overview of how humanitarian data is shared by different actors involved in aid. Then it unpacks the meanings of sovereignty in the humanitarian domain while highlighting the emergence of “pseudo-sovereigns,” that is, actors who assert sovereignty over data in ways that challenge established norms and practices. The analysis reinterprets recent controversies surrounding the collection and sharing of biometrics, namely cases concerning the Rohingya in Bangladesh, the Houthis in Yemen, “double registered” people in Kenya, and the humanitarian response in Ukraine, through a sovereignty lens to demonstrate the utility of this perspective on humanitarian data. To better account for the complexities of power, I encourage scholars to center sovereignty considerations in their analyses of surveillance and privacy in humanitarian innovation.
This article is part of the special theme on the Technopolitics of Interoperability. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/techno-politicsofinteroperability
Introduction
Close observers of tech issues in the humanitarian sector will be familiar with a steady chorus of criticism leveled at aid organizations and their data practices. Detractors lament perceived misuses and abuses of digital technology in aid interventions (Mohammed, 2023), irresponsible innovation by humanitarian organizations and their corporate partners (Cheesman, 2022a; Smith and Raymond, 2024), and security breaches involving the sensitive data of people affected by humanitarian emergencies and conflicts (Elliott, 2022). Civil society actors and other pressure groups draw attention to the privacy risks and rights violations resulting from lax data protection by humanitarian actors, the potential harms of which are amplified by the sector's rapid “digital transformation” (Coppi, 2024).
It is important to acknowledge both the complexity and ambiguity of these technological interventions in aid (Weitzberg et al., 2021)—technology can be both empowering and repressive depending on how it channels power locally. This observation leads us to recognize that many of the controversies surrounding humanitarian data can, in fact, be understood as reflections of underlying tensions stemming from competing claims over sovereignty. By sovereignty, this article refers to the exclusive authority to exercise power, which for our purposes also pertains to power over data. As discussed below, it is traditionally understood that the state is the “sole repository” of sovereign authority (Ayoob, 2002: 81), permitting it to exercise power over its territory, polity, or data, and including the power to exclude others from doing so. The recognition of these powers by other sovereign states is a cornerstone of the international legal order and is reflected in both theory and practice. However, as this article will explore, certain actors are making claims to data sovereignty in humanitarian space that challenge the established order and force us to reconsider what these shifting contestations of power and authority mean for both our theories and practices. As Sean Martin McDonald observes, “the practicalities of how states conceive of and invoke digital sovereignty, when tested against urgent needs of a refugee crisis or the competing politics of a conflict zone, create real problems – not just for humanitarians.”
Where disputes arise in humanitarian contexts following unauthorized access to data by a third party, the unconsented sharing of humanitarian data, or the imposition of interoperability requirements on the technical systems of humanitarian agencies—all increasingly common occurrences—these disputes regularly exhibit deeper concerns about power and authority that go beyond traditional privacy or data protection logics, which to date have dominated risk analyses and critical discourses among institutional actors and other stakeholders in the sector. This article explains the interpretive value of such a sovereignty lens on humanitarian data. By this I mean the analytical utility a sovereignty framing provides in explaining whether and why data moves in humanitarian contexts (and to what end), whose interests are perceived to matter when disputes over data flows arise, and the extent to which these disputes can be resolved. This optic helps us better understand why privacy or data protection claims involving humanitarian data often go unfulfilled or are ultimately frustrated.
Background
Humanitarian data
“Humanitarian data” can be understood to include data about the people impacted by humanitarian crises, that is, personal data, but not exclusively so. Personal data that are commonly collected and processed by humanitarian organizations include crisis-affected people's identity information (name, date of birth) and more sensitive data such as their biometrics. Nonpersonal data relevant to humanitarian intervention includes, for example, operational or logistics data that does not relate to a data subject, or data about people that has been sufficiently de-identified such that normal data protection rules no longer apply. While humanitarian organizations widely recognize the sensitivity of personal data and are developing compliance programs to ensure beneficiaries’ privacy, nonpersonal data is generally viewed by humanitarian actors as safe to share so long as it is not deemed sensitive (UN OCHA, 2022).
Humanitarian data is extensive and complex. Beyond the personal/nonpersonal distinction, humanitarian data can also be quantitative or qualitative, sensitive or not based on different factors, related to groups and/or individuals, and shareable in aggregated or disaggregated forms (Fast, 2023: 127). It covers a wide range of types and modes of collection. The UN Office for the Coordination of Humanitarian Affairs (UN OCHA) Centre for Humanitarian Data defines humanitarian data broadly as data about the context in which a crisis is occurring, about the people affected by the crisis and their needs, and about the response by organizations and people seeking to help those who need assistance.
This paper's concept of humanitarian data builds on the June 2022 resolution on “safeguarding humanitarian data” by the Council of Delegates of the International Red Cross and Red Crescent Movement, published in the aftermath of the large-scale hack of the Restoring Family Links database that was disclosed in January of the same year (ICRC, 2022a). The Restoring Family Links program aims to prevent separation and disappearance, looks for missing persons, restores and maintains contact between family members, and clarifies the fate of persons reported missing. The hacking incident involved the breach of more than 515,000 people's personal data and was a critical moment for the humanitarian sector, which has struggled to meet data security challenges. As Nathaniel Raymond has sharply observed, “What we see over and over again is that humanitarians are being expected to hold some of the most sensitive data in the world of the most vulnerable people in the world and have the resources of mall cops to protect against the cyber hacking equivalent of Delta Force” (as quoted in Elliott, 2022).
Following the Restoring Family Links hack, the 2022 Council of Delegates resolution reaffirmed the importance of safeguarding “the ability of impartial humanitarian organizations to process data, including personal data, for humanitarian purposes” (ICRC, 2022a).
An expansive conceptualization of humanitarian data inclusive of both personal and nonpersonal data as well as these other complexities helps to orient our analysis of sovereignty concerns. To illustrate the utility of a sovereignty focus, it is worth recalling the details of the data partnership launched in 2019 between the world's largest humanitarian agency, the UN World Food Programme (WFP), and Palantir (the controversial American data analytics provider). This partnership is aimed at addressing logistics challenges and supply chain inefficiencies within WFP's global operations, which involve delivering food, cash, and commodity vouchers to an estimated 152 million people across 120 countries and territories. 1
According to WFP representatives, its work with Palantir does not raise privacy concerns because the project does not involve the personal data of its beneficiaries: “We can say definitively that Palantir has agreed to our rules of engagement: No access to WFP data that provides beneficiary information” (Porcari, 2019). The agency and its corporate partner have thus managed to deflect and suppress much of civil society's initial criticism of the relationship (cf. Easterday, 2019) by emphasizing the safeguards in its privacy-informed approach (Martin, 2023). However, discomfort with the broader implications of the relationship continues to linger. As we have noted elsewhere: “On a structural level, the partnership also exposes tensions at the intersection of privately provisioned humanitarian technology and state sovereignty. WFP's operational data provides Palantir with deep insights regarding global food in/security. Their access to this information has heightened concerns among host states about the involvement of a private technology firm with strong ties to the US security establishment in sensitive humanitarian work” (Martin et al., 2023: 1363).
Technopolitics of humanitarian interoperability
These concerns about control over—and governed access to—data will only become more pressing as the humanitarian sector continues to experiment with and implement new models and technologies for data governance such as data trusts, data collaboratives, data commons, and data cooperatives (see Taylor et al., 2025 for a full overview). Some of these are based on principles of data portability and interoperability, including concepts of self-sovereign identity. Others are inspired by growing interest in so-called digital public infrastructure and its implications for the humanitarian sector (Schoemaker, 2024). New governance approaches might result in greater data sharing by different aid actors, including humanitarian agencies, host governments, and donors (Currion, 2022; Worthington and Duechting, 2023), including data exchange across borders, and potentially nonhumanitarian actors as well. The Humanitarian Data Exchange (HDX), an initiative of UN OCHA's Centre for Humanitarian Data, illustrates the potential for data sharing within and beyond the sector. Launched in 2014, HDX is an open platform for sharing data across crises and organizations. Its goal is to make humanitarian data accessible for analysis, particularly to the public. 2
Building on initiatives like HDX, efforts to promote greater data sharing, portability, and interoperability could potentially disrupt existing power configurations between people, aid agencies, and states. These initiatives are said to provide beneficiaries with more autonomy by giving them better control over their data, for example by making it easier to transfer data from an aid agency to the person or to another agency upon request. It might also save people “time and effort through reducing assessment fatigue and/or re-registration”—a common complaint among aid recipients (Donor Cash Forum, 2022), though these operational efficiency gains are likely to be most appreciated by aid providers rather than aid recipients. Regardless, as Madon and Schoemaker argue, “enhanced interoperability may not always be of value to refugees [and other humanitarian subjects]; indeed it may also increase risk” (2021: 946). Concerning these risks, Jacobsen and Fast observe a related phenomenon whereby humanitarian agencies’ collection of personal and sensitive data means that humanitarian protection “is no longer only about ensuring access [to affected populations] but increasingly also about denying access” to the sensitive data that agencies hold.
More recently, Hancock et al. draw attention to the “ironies” that follow from mandated data sharing in the context of an adjacent field, human rights initiatives, that is, how “legal uncertainties, nontransparent sharing procedures, and limited accountability regarding downstream uses of data may undermine efforts to tackle modern slavery and place victims of abuses at risk of further harms” (2024: 974).
In spite of these risks, it is possible that other actors might also stand to benefit from innovative forms of humanitarian data management. Donors could use interoperability initiatives to require more comprehensive reporting from the organizations they support, or to force data sharing between aid agencies that compete for their funding. Host states, too, might expect that the data platforms of humanitarian agencies that are deployed in a country's jurisdiction, including registration, identification and payment platforms, interoperate with government systems for the purposes of providing greater visibility on the populations and operations active in their territories. But with what consequences for sovereignty?
In the rest of this article, I will answer this question. To do so, the article first provides an overview of how humanitarian data is shared by different actors involved in aid, focusing on the tensions that emerge when humanitarian data moves to and from humanitarian organizations, government agencies (including host governments and donor states), and private sector firms increasingly involved in aid delivery. Then it unpacks what is meant by sovereignty in the humanitarian context before arguing in support of a sovereignty perspective on humanitarian data. The analysis reinterprets recent controversies surrounding the collection and sharing of a sensitive type of data, namely biometric data, through this sovereignty lens. To better account for the complexities of power and to demonstrate the interpretive value of a sovereignty frame, I will explain why these considerations ought to be central to our analyses of humanitarian data.
Humanitarian data sharing: Actors, modes, and rationales
Humanitarian data is shared widely among different actors involved in aid, including across both institutional and jurisdictional boundaries. These dynamics have important consequences for notions of sovereignty. Here, I summarize four of these data sharing relationships according to the types of actor involved before turning to consider the key questions pertinent to sovereignty claims regarding humanitarian data.
Humanitarian data sharing is a complex and poorly understood issue, which often leads to misunderstandings when incidents occur. Data sharing is the subject of ongoing discussions within the sector, particularly as digital technologies and standards facilitate greater sharing and reports emerge of possible malpractice and potential harms. Key actors such as UN OCHA are developing policy frameworks to govern operational practices and mitigate risk (UN OCHA, 2022).
For the purposes of our analysis, we can distinguish between four of the most common data sharing relationships. However, it is worth noting in passing that other configurations are possible. For example, the aforementioned HDX (see the subsection on the technopolitics of humanitarian interoperability) primarily aims to make nonpersonal, aggregated data available to the public for analysis. Its guidelines on and oversight over the sharing of sensitive data on the platform make it less relevant to the purposes of this paper, so it will not feature below. Also excluded below is aid-relevant data that is collected by vulnerable populations themselves, such as data on household food in/security or settlement living conditions collected by refugees. Some have advocated for more data collection to be conducted “with” and not only “about” these populations (cf. Aljadeeah, 2022), as this could help increase local data sovereignty, but as yet these practices are not commonplace.
Data sharing between humanitarian organizations
Humanitarian organizations regularly share different kinds of data among themselves for the purposes of providing aid. For example, within the UN system, the UN Refugee Agency (UNHCR), WFP, and the United Nations Children's Fund (UNICEF) share data (both personal and nonpersonal) for humanitarian cash programming purposes (UNHCR, WFP and UNICEF, 2020). Arrangements such as these are generally governed by data sharing agreements, though their details are not well publicized.
Outside the UN system, the Collaborative Cash Delivery (CCD) Network—a network of 14 of the largest international nongovernmental organizations (NGOs) that operate in every global humanitarian crisis—has established a working group to facilitate data sharing among its members. The CCD has published a generic data sharing agreement for its members to adapt for specific interventions, with a view to being more transparent about data sharing practices. 3
Data sharing between humanitarian organizations and government authorities
Humanitarian organizations are often legally required to share certain kinds of data with government authorities in the countries they operate in. For international organizations (e.g., UN humanitarian agencies as well as the International Committee of the Red Cross), the rules governing these data exchanges form part of the host country agreements that enable agencies to operate in a specific jurisdiction (McDonald, 2019a). The lack of transparency regarding the terms and conditions of these agreements has led to occasional data sharing controversies, some of which I discuss below. Among the host country agreements that have been made publicly available, data sharing provisions—especially with security services—are a common feature (McDonald, 2019a).
For humanitarian agencies that do not operate with legal privileges and immunities, which includes most organizations outside the UN system, domestic laws will specify any mandates or requirements to share data with local government authorities. These laws may not sufficiently protect the data and rights of beneficiaries, especially as some legal regimes do not include protections for nonnationals (which would include refugees and asylum seekers), and thus may present additional challenges for humanitarian organizations that are legally compelled to share data with state authorities. As recognized in a 2015 resolution by the International Conference of Data Protection and Privacy Commissioners (now known as the Global Privacy Assembly): “Humanitarian organizations not benefiting from privileges and immunities may come under pressure to provide data collected for humanitarian purposes to authorities wishing to use such data for other purposes. The risk of misuse of data may have a serious impact on data protection rights of displaced persons and can be a detriment to their safety, as well as to humanitarian action more generally” (ICDPPC, 2015: 1).
It is also worth noting that there are often different pressures on international NGOs and national ones to share data with authorities—the latter may have less capacity for data management, and less capacity to modulate their relationship with the government (Marelli, 2024a: 54).
Data sharing between humanitarian organizations and private sector actors
Humanitarian organizations may also share data with commercial partners. According to sectoral norms, data should only be shared with the private sector for the exclusive purposes of delivering humanitarian assistance. Examples of such sharing activities include data exchange with financial service providers for the provision of cash aid (Marelli, 2024b) or with mobile service providers (Martin and Warnes, 2024) or satellite companies (Martin and Tsui, 2025) for the delivery of connectivity services to affected populations. In these situations, contracts will specify the terms of data collection and sharing between humanitarian and commercial actors, and for which purposes (i.e., purpose specification, a fundamental requirement of data protection compliance). These contractual requirements may be shaped by broader legal and regulatory mandates, for example anti-money laundering/combating the financing of terrorism (AML/CFT) rules that specify due diligence requirements for banking customers, or SIM registration regulations that require identity data about mobile users to be recorded and in some cases shared with regulators and other authorities (Donovan and Martin, 2014; Martin and Taylor, 2020; UNHCR, 2020).
Data sharing between humanitarian organizations and donor agencies
Data is regularly shared between humanitarian organizations and their donors for different reasons including reporting and auditing. While much of this data is aggregated and/or pseudonymized, and thus supposedly “de-risked,” in some cases, personal data are shared with donors. This data sharing may be inadvertent, for example, where a spreadsheet including the names of aid recipients is unintentionally shared with a donor before personal data is scrubbed. But it may also be intentional: in certain situations government donors are not the only source of funding for international NGOs doing aid work in a locale. Large aid agencies, like those from the UN system, may perform a dual role: both as an operational partner for NGOs and as a donor. A 2021 investigation reveals the tensions that this raises: “In the former role, NGOs occasionally give these UN agencies access to personal data of individual beneficiaries… This is the type of data generally not shared with government donors. NGO observations regarding the role of UN agencies as donors to an extent mirror those related to government donors.”
Finally, donors also play an important role in requiring humanitarian agencies to share data with third parties for nonhumanitarian purposes, for example, to screen people being considered for assistance for possible links with terrorism (Westphal and Meier, 2021: 9). These practices are controversial and may raise sensitivities with local authorities, as we will see below. They also reveal how sovereign power shapes and extends data sharing in the humanitarian sector, often in ways that may not be apparent.
On sovereignty
Sovereignty is a multifaceted and contested concept, as is its relationship to the digital sphere (Pohle and Thiel, 2020) and digital data (Hummel et al., 2021), with a burgeoning literature now emerging on the topic of digital sovereignty. The meaning of sovereignty in humanitarian intervention is moreover the subject of considerable debate, contestation, and scholarship (Kahn and Cunningham, 2013). For the purposes of this article, it is useful to review how common understandings of sovereignty shape data practices in humanitarian intervention, and highlight the emergence of “pseudo-sovereigns” whose claims to authority attempt to challenge established norms and practices.
It is widely accepted that the state is the “sole repository” of sovereign authority (Ayoob, 2002: 81), which permits it to exercise power over its territory, polity, or data, and includes the power to exclude others from doing so. The recognition of these powers by other sovereign states is a cornerstone of the international legal order and is reflected in both theory and practice.
These ideas are also reflected in the operational policies of international agencies. For example, USAID's official guidance on data responsibility includes a tool for deconflicting claims to data sovereignty in which only states can be claimants, framing data sovereignty in exclusively state-centric terms.
Others, however, raise the possibility that nonstate actors involved in humanitarian conflicts might exercise a kind of pseudo-sovereignty over data.
Another form of pseudo-sovereignty in humanitarian space emerges in claims by providers and operators of self-sovereign identities, which purport to give individuals complete ownership and control of their digital data without relying on a central authority such as the state or, in other cases, a humanitarian agency. Often based on technologies like blockchain, these self-sovereign identities have been provided to undocumented, under-documented or stateless people, sometimes in conflict settings, to help them access humanitarian aid and other services, with mixed results. It remains to be seen whether state authorities will ever legally recognize these identities no matter their technological sophistication (Cheesman, 2022b). As we will see below, the evidence to date is underwhelming.
Note on methods
This research is the product of a multi-year ethnography of data policies and practices in the humanitarian sector and builds on previously published research (Martin et al., 2023; Martin and Taylor, 2020). Since 2018, the author has closely studied how humanitarian organizations are developing and implementing frameworks to govern their use of data, while tracking data-related controversies as they emerge. In some cases, this has involved working closely with organizations to help them cultivate their approaches to data governance. In other cases, it has involved conducting elite interviews with representatives of aid agencies, including innovation experts, registration officers, data protection and cybersecurity specialists, protection officers, lawyers, and others regarding their organizations’ use of data and technology. The research also involves regularly attending relevant industry events (e.g., Identity Week, ID4Africa, Biometrics Institute Congress, etc.), where digital identity vendors market their products to government and aid actors, as well as joining webinars organized by humanitarian agencies and concerned civil society groups, where the challenges associated with the responsible use of data and digital technologies are discussed and debated. This ethnographic approach helps sensitize the author to the current research problem and informs the identification and selection of the “critical incidents” (Miles and Huberman, 1994) or “moments of interest” (Hosein, 2022) for biometric data sharing in the humanitarian sector analyzed in the next section.
The analysis is primarily based on a document analysis, which is informed by insights gleaned from expert interviews. The document analysis draws on publicly available materials including academic research, grey literature such as audit reports, and other public reporting on the incidents. These were collected and analyzed as part of the multi-year ethnography on humanitarian data governance, including since 2023 a subproject on digital identity innovation and migration governance supported by the Robert Bosch Foundation and involving collaborative fieldwork in Kenya and Germany. The analysis is informed by interviews with humanitarian practitioners and other experts.
It is acknowledged that there are limitations to this methodological approach, but I argue that it is appropriate to inform the perspectives that follow in which controversies regarding biometric data sharing in humanitarian contexts are reinterpreted through a sovereignty framework.
Humanitarian biometrics: A sovereignty reading
A full review of humanitarian biometrics is outside the scope of this article, 4 but it is noteworthy that the use of biometrics in managing access to humanitarian aid is increasingly being normalized in spite of regular controversies. Where resistance or disputes emerge in response to these data practices, they shed light on fractures in governance systems and compel us to examine the underlying dynamics of sovereign power that shape the possibilities for access, sharing, and surveillance. This section examines four cases involving biometrics that underscore the analytical utility of a sovereignty perspective on humanitarian data.
Stark realities
In 2021, the pressure group Human Rights Watch accused the UN Refugee Agency (UNHCR) of impropriety in its collection and sharing of Rohingya refugees’ personal data, including biometrics (Human Rights Watch, 2021). As part of a joint registration exercise with the Bangladeshi government, UNHCR was legally required to share this data with local authorities in Bangladesh, where Rohingya refugees are contentiously hosted. These authorities subsequently shared the Rohingya's data with counterparts in Myanmar—from where the refugees had originally fled to escape crimes against humanity and acts of genocide—reportedly for repatriation eligibility assessments. This data sharing raised serious concerns about potential harms to the Rohingya while casting doubts on the efficacy of UNHCR's data policies and practices.
Human Rights Watch's critical investigation focused on the lack of meaningful consent in the initial collection of Rohingya data, as well as insufficient transparency by UNHCR about what would happen to the data after collection. Subsequent analyses have similarly framed the incident in terms of irresponsible data practices and ineffective data protection by UNHCR (see, for example, Rahman, 2021).
However, what has gone relatively unremarked 5 in these critiques is how both the initial sharing of data by UNHCR with Bangladeshi authorities (which was legally required by the host country agreement), and the subsequent transfer of that data to Myanmar, reflect the stark realities of sovereign power—neither the Rohingya nor UNHCR could effectively exercise it, while both Bangladesh and Myanmar authorities could (and did). As McDonald notes, “any promise made by a humanitarian organization—particularly about data privacy, security, or ethics—is bounded by the interests of the government” (2019b: 12). Among other reasons, this matters because much of the criticism of the data exchange appears to have overlooked the intractable situation in which organizations like UNHCR find themselves in these challenging contexts.
Here, state sovereign power over humanitarian organizations and their data is evident, as is the interpretive value of the sovereignty optic: UNHCR, despite its legal immunities and privileges (i.e., its pseudo-sovereignty), was unable to resist requests for data sharing, and in fact was compelled to cooperate with authorities. The Rohingya population was virtually powerless to stop it.
Incidents such as these have motivated some community members to explore the development of self-sovereign digital identities for the stateless Rohingya diaspora (Curran et al., 2018), namely through The Rohingya Project, which describes itself as “a grassroots initiative dedicated to preserving and promoting Rohingya identity through decentralized ID, utilizing blockchain.” 6 However, these identities have not been recognized by authorities, again exposing the political shortcomings of the self-sovereignty movement.
Nonstate data sovereignty?
Elsewhere, nonstate actors involved in conflict have tried to assert sovereignty over the data of the people they claim to represent, with unexpected results. In 2019, the WFP accused Houthi rebels of interfering in the delivery of humanitarian food aid and fraudulently diverting food rations meant for crisis-affected people in Yemen. In response, the humanitarian agency sought to implement biometric controls to manage aid distribution in Houthi-controlled regions. Houthi leaders, however, refused these requests due to suspicions that the data might be used for nonhumanitarian purposes as well as on the basis of data sovereignty claims: “the Houthis… objected to the programme on the grounds that they, rather than the WFP, should be in control of the data” (Parker and Slemrod, 2019). Due to this resistance, WFP temporarily suspended its assistance in the area—an unprecedented decision that worried many in the sector, who saw the move as representing a surveillance ultimatum for life-saving aid (The New Humanitarian, 2019). 7 Aid was only reinstituted after Houthi leadership agreed to a compromise approach to biometric data collection and storage.
The details of this compromise are telling. As was described in a 2020 internal audit report: “data shall be retained in a joint server room, and
This incident also speaks to the evolving technopolitics of aid, as observed in a 2023 civil society analysis: “the Houthi government's focus on the sovereignty of data is indicative of how biometric data collection introduces a new dimension to long-standing questions around political neutrality and humanitarian action” (The Engine Room, 2023: 54–55). This underscores a key lesson and highlights data sovereignty's interpretive value: sovereignty, including over data, is
Entangled sovereignties
Research from Kenya further illustrates the intricacies of humanitarian data and state sovereignty. Keren Weitzberg has written on the challenges associated with “double registration” in places like Kenya. In the 1990s, certain ethnic Somali Kenyans falsely claimed to be refugees escaping Somalia's civil war, which allowed them to access food aid and other humanitarian services. In the process, their biometrics were collected by UNHCR and entered into refugee databases. The agency later shared its registration systems with the Kenyan government, which began cross-checking applications for national identity cards against historic refugee data. Anyone who had previously been assigned refugee status would be unable to access a Kenyan identity card, denying them an essential token of citizenship (Weitzberg, 2020).
Weitzberg analyzes this case in terms of four emergent “sovereignty problems”—a “series of entanglements” between different territories, welfare systems, bureaucracies, and technical systems—and how these ultimately led to the repurposing of humanitarian data for citizenship determination (Weitzberg, 2025). She articulates the difficulties of decoupling humanitarian biometric initiatives from the gatekeeping mechanisms of the Kenyan state. While UNHCR introduced biometrics to prevent citizens from “slipping into the refugee system” and refugees from registering multiple times, data and techniques used to clean up the refugee database were then used to purge people from Kenya's citizenship registry. After years of legal challenges brought by Haki na Sheria, an NGO, in January 2025 Kenya's High Court finally ruled that those who had been double registered must be removed from the refugee database within 60 days (Macdonald, 2025). This decision can be seen as the first step in disentangling the interconnected systems analyzed by Weitzberg, thus paving the way for those affected to finally gain access to Kenyan identity cards.
Entanglements such as these have emerged elsewhere, for example in Bangladesh, where “the government is seeking to utilize the Rohingya refugee database maintained by the UNHCR to prevent Rohingyas from obtaining Bangladeshi identity documents” (Shahariar Zaman, 2024). As an official of the Ministry of Foreign Affairs has explained, “The UNHCR has a database of Rohingyas and if we can use that information, it will not be possible for the Rohingyas to collect identity cards.” A technical committee has already been formed to facilitate this data sharing; the database, which at the time of writing stores information on approximately 970,000 Rohingyas, including fingerprints and iris biometrics, is expected to be operational soon. “The United Nations agency has already given initial consent to the proposal sent by the [Bangladeshi] government” (Shahariar Zaman, 2024).
These data sharing and sovereignty dynamics may become further entangled as systems for managing refugee populations and citizens become more interoperable and integrated, as in the case of Ethiopia's “fayda” foundational identity program (UNHCR, 2024). Authorities there have committed to including refugees in the country's national identification system through the use of biometric technology, “making Ethiopia a pioneer in the inclusion of refugees in its national systems” (UNHCR, 2024). However, what these new modes of technological inclusion mean for the sovereignty of different actors, including humanitarians and the beneficiaries of their aid, remains uncertain. What is evident from the Kenyan case, however, is that state power over data, including humanitarian data, can deny people data sovereignty and thus their rights, thereby exacerbating real harms.
Biometric exceptionalism
In contrast to the Rohingya, Houthi or Kenyan cases, the decision by humanitarian agencies
Table: Summary of cases. Note: UNHCR: UN Refugee Agency; WFP: World Food Programme; NGOs: nongovernmental organizations.
In the context of the ongoing war with Russia, the Ukrainian government has communicated to humanitarian agencies that it does not want its citizens’ biometric data collected by aid actors—a request that has largely been honored by international NGOs operating on the frontlines (Cheesman, 2023). These organizations have also informed UN agencies such as UNHCR that they will not be collecting biometrics as part of their response efforts: “…they wanted to keep the data they were collecting on their own systems, rather than feeding it to UN agencies to input into their systems, as has become the norm in most humanitarian responses” (Wille, 2023).
Here, the tensions between the priorities and prerogatives of larger and smaller humanitarian actors, noted above in the section on humanitarian data sharing, are exemplified, as is the international community's respect for Ukrainian state sovereignty over citizens’ data. It is reported that this pushback against biometrics and other humanitarian data sharing is possible because NGOs in Ukraine are less financially dependent on UN agencies than in other crisis contexts, and because the strength of Ukraine's data protection legal framework is recognized. It is likely, as well, that the humanitarian sector is showing deference to Ukrainian state authorities in a time of war and “solidarity.”
Conclusion
In this article, I have made the case for moving beyond privacy and data protection-oriented analyses to center questions of sovereignty and contestations over political control in expositions of humanitarian data governance. While my focus has been on controversies related to biometrics, these arguments can be extended to other areas of humanitarian data, including artificial intelligence (which, of course, depends on massive quantities of data), cash aid, health, geography (i.e., geospatial data), and connectivity, among others. Researchers interested in adopting a sovereignty optic in these other areas will find that it can yield helpful perspectives for both theory and practice.
To briefly illustrate the interpretive value of a sovereignty lens across these other areas, consider a connectivity example: in late 2023, amid an extended communications blackout in Gaza during the conflict with Israel, SpaceX's CEO, Elon Musk, announced that his company would “support connectivity to internationally recognized aid organizations in Gaza” through its low-earth orbit satellite offering, Starlink (Ewing, 2023). Starlink has grown increasingly valuable to aid actors who need access to emergency connectivity. Israeli authorities, however, reacted to Musk's announcement with dismay, including the Minister of Communications (Karhi, 2023), who pledged that “Israel will use all means at its disposal to fight this.” After months of negotiations involving the governments of Israel and the United Arab Emirates, access to Starlink was permitted in a single hospital in Gaza (Cartier, 2024). Future research ought to explore how this deal and others like it were negotiated, and moreover how humanitarian actors are deploying technologies like Starlink in conflict settings in light of the sovereignty challenges highlighted in this paper (cf. Abels, 2024; Martin and Tsui, 2025). Scholars have already begun theorizing the emergence of Starlink in terms of “corporate sovereignty” in outer space, that is, “the de facto control of orbital resources by private enterprises” (Sharma, 2025). As Sharma (2025) reflects: “if a private firm can control vast orbital niches, how can the international community maintain space as a shared domain?” He continues: “‘corporate sovereignty’ in space is not formal sovereignty; companies cannot claim territory. Still, they can wield outsize influence over crucial orbital resources.” In other words, another pseudo-sovereign. The interlinkages with notions of sovereignty in humanitarian space, which is also supposed to be governed by international law, are worth exploring in future work.
In closing, it is important not to dismiss the real privacy and rights violations that emerge in what are often contexts of acute vulnerability. Instead, I hope to have demonstrated the value of an alternative perspective: one that explains why data governance frameworks sometimes fall short in protecting the interests of crisis-affected people, helps us understand the motivations, practices, and limitations of different actors involved in humanitarian data sharing, and better accounts for the sometimes-troubling outcomes of these contestations over data. I am also hopeful that humanitarian agencies might learn from the lessons of this paper by, for example, explicitly integrating sovereignty assessments into their risk analysis tools, due diligence measures, and data governance frameworks, as we have called for elsewhere (Martin and Tsui, 2025). Sovereignty concerns will certainly remain critical to the technopolitics of interoperability initiatives in aid and to humanitarian data in general, particularly at this moment as the sector and its funding models are being radically transformed and disrupted.
Footnotes
Acknowledgements
The author would like to acknowledge the following people for their feedback on earlier drafts of the article and for their support during the research and writing process: Margie Cheesman, Keren Weitzberg, Quito Tsui, Emrys Schoemaker, Sophie Bennani-Taylor, Paul Currion, Amos Doornbos, John Warnes, Silvia Pelucchi, and Massimo Marelli. He would like to acknowledge the guest editors, Rocco Bellanova, Jan-Hendrik Passoth, and Silvan Pollozek, for their hard work in organizing the special issue, and the anonymous reviewers for their excellent feedback on earlier drafts. He would also like to thank the organizers of the 2024 Surveillance Studies Network Conference in Ljubljana, Slovenia and the UVA Digital Technology for Democracy Lab December 2024 Symposium on Digital Democracy, where previous versions of this paper were presented. Last, he thanks the Centro de Estudios en Tecnología y Sociedad (CETyS) for inviting his presentation in Buenos Aires in June 2025. This research was made possible by a grant from the Robert Bosch Stiftung GmbH.
