Abstract
Artificial intelligence (AI) continues to feature prominently in global discourses on sustainable development as a potential solution to long-standing social and economic issues across the global South. In recent years, public and private actors have shown increased interest in developing and deploying AI-powered digital solutions positioned to help close the digital divide, a phenomenon traditionally framed as the gap between the connected and unconnected. Framed against a backdrop of “tech for good,” developments in AI and other emerging technologies have introduced new challenges, including algorithmic awareness, a new dimension of the digital divide that attends to data and data-related inequalities. This colloquium paper uses World, a digital identity project co-founded by American tech company OpenAI that combines AI, biometrics, and blockchain-based technologies, as a case study to explore the ethical implications of private sector-led digital initiatives in the global South. Despite claims that digital identity projects help promote social and economic inclusion, we show that projects such as World can intensify existing inequalities through data extraction methods. We argue that the company's activities in countries such as Kenya are possible because of the digital divide and gaps in regulatory frameworks on AI in the global South.
Introduction
Over the past decade, artificial intelligence (AI) has increasingly appeared on the sustainable development agendas of international development institutions, governments, civil society, and the private sector as a potential solution to long-standing social and economic issues across the global South. The growing scholarship on AI for sustainable development points to it as a way of improving access, connectivity, and efficiency in various sectors, including finance, healthcare, education, farming, and transportation (Goralski & Tan, 2020; Mann & Hilbert, 2020). One area in which AI is gaining prominence is digital identification (digital ID) solutions, particularly as a tool for recognition and as a means for state and nonstate actors to facilitate socioeconomic inclusion and provide humanitarian relief, welfare, and other social protection initiatives (Addo & Senyo, 2021; Masiero & Arvidsson, 2021). Digital ID is viewed as a necessary precursor for individuals to exercise other rights, including access to basic needs, education, and health services. Digital ID systems have been developed and mobilized by states, including India, Kenya, and Nigeria (Misra, 2019; Musoni et al., 2023), and by international institutions such as the United Nations and private sector partners (e.g., ID2020).
One avenue through which private sector actors engage in international development work is digital identity projects, which claim to help bridge the digital divide by providing individuals, particularly in the global South, with the tools needed to access and benefit from global financial systems. Often framed as cross-sectoral partnerships in humanitarian and development programs, their entry point has been criticized for generating and reinforcing power imbalances (Olwig, 2021; Richey & Fejerskov, 2024). For example, Richey and Fejerskov (2024) maintain that tech for good often hides the exploitative practices of tech companies in the global South behind corporate social responsibility strategies and performative altruism, which, they argue, intensify inequalities instead of meaningfully addressing them. Others have argued that such projects allow the tech industry to test new digital technologies on marginalized communities (Ajana, 2020).
The issue of the digital divide is at the center of current debates on tech for good. This is particularly significant in regions such as Africa, where digital connectivity is limited and many across the continent remain unconnected to the internet. According to the International Telecommunication Union (2021), only a third of Africa's population has internet access compared to 80% in North America and Europe. Digital literacy and access to technologies continue to be issues in many African countries, deepening the digital divide. Furthermore, over half of the global population without identification or proof of identity live in sub-Saharan Africa (World Bank, 2021).
The deepening digital divide in Africa, coupled with rapid AI-driven development and the increasingly prominent role of the private sector, raises the question: Can AI-powered digital identity systems offer concrete solutions to these problems, or are they simply mechanisms to extend the economic reach of private companies from the global North? In this commentary, we critically examine the intersection of these issues, particularly the ethical and structural challenges of private sector involvement in digital ID initiatives. We use the recently launched digital identity project World (formerly known as Worldcoin) and its activities in Kenya as a case study to show how digital identity projects can deepen inequalities and further marginalize those in the global South. In the following sections, we begin with a brief background highlighting the promises and risks associated with AI and how this extends earlier debates within the ICT4D movement, using the concept of algorithmic (un)awareness as a new element of the digital divide. We argue that the extractive activities of World in the global South are possible because of the growing digital divide and limited or absent regulatory frameworks around AI development.
From ICT4D to AI for Development
The Information Communication Technologies for Development (ICT4D) movement has long examined the relationship between information communication technologies (ICTs) and international development, focusing not only on the value of technology for socioeconomic development (Heeks, 2008) but also on questions about the negative effects of ICTs, including who is being left out (Walsham, 2017). AI for development is rooted in this longer history of ICT4D. Recent research suggests that advances in AI can support efforts to solve the world's most pressing social and economic challenges (Goralski & Tan, 2020; Tomašev et al., 2020). For example, AI solutions may be adopted to improve citizen feedback and political decision-making processes (Visvizi, 2022) and to integrate variable renewables through smart grids (Vinuesa et al., 2020). Across Africa, digital identity systems (including those built on AI) have surged in recent years under the banner of social and economic inclusion, involving various actors, including governments, international organizations, and the private sector. Proponents of digital ID systems hail them as a key solution for addressing issues such as fraud in government, tax collection, voting, and the delivery of government services (Sagar, 2023).
Despite these benefits, many scholars and practitioners have raised concerns regarding the approach to implementing digital ID systems and their impact on the vulnerable groups they are intended to support. These concerns are more profound when the private sector is involved. Lopez-Solano and Castañeda (2024) argue that many companies use the sustainable development agenda to build identity systems in postcolonial states over which they can exert control. Issues of exclusion, intrusion or surveillance, distortion, and data leakage or exploitation have been reported in the humanitarian and development sector where international organizations promote digital ID (Kim, 2023; Masiero, 2023; Masiero & Arvidsson, 2021; Weitzberg et al., 2021). Similarly, other research has shown how digital ID initiatives can have unintended consequences and risk turning vulnerable African countries into “living laboratories” where large parts of the population become test subjects (Jacobsen, 2021).
These risks can be further aggravated by integrating AI and other emerging technologies whose risks are not fully known. Potential risks include infringements on rights to privacy, inclusion and fairness, and discrimination and bias in sectors such as healthcare, education, and criminal justice (Intahchomphoo & Gundersen, 2020; Muley et al., 2023; Shams et al., 2023). As institutional actors from the West, including Big Tech, make significant investments in the AI industry, the disparities in the global South could be exacerbated.
Algorithmic (Un)awareness and the New Digital Divide
The digital divide, traditionally framed as an issue of access to ICTs and a gap between the connected and unconnected (Gran et al., 2021), is a key challenge for the successful adoption of AI technologies. The widespread adoption of AI adds to this challenge, as digital illiteracy compounds the extent of the divide and leads to an unequal distribution of the benefits of digitalization. Along with inequalities in technical means, usage, and skills, the divide is driven by limited ICT infrastructure, which translates into unaffordable network access, a lack of digitized and structured data, and a scarcity of AI developers, all of which impede the adoption of AI and the ability to leverage its capabilities to address challenges (Eke et al., 2023). These emerging issues highlight current gaps in AI governance. Indeed, a UNESCO needs assessment survey in Africa found gaps in legal and regulatory frameworks for AI governance, noting the need for specific measures to prevent algorithmic bias and discrimination (Sibal & Neupane, 2021).
Beyond this, scholars have argued that a lack of awareness of AI and its data-related inequalities poses a significant challenge given the effects of these algorithmic systems (Cinnamon, 2020; Gran et al., 2021; Lythreatis et al., 2022). Algorithmic awareness, defined broadly as the ability of users to understand and assess algorithms and their impact (Kizilcec, 2016), affects user behavior, trust in algorithmic processes, and “their understanding of the control of the information flow embedded within them” (Shin et al., 2022, p. 12). Yu (2020) uses the term “algorithmic divide” to describe the emerging and growing AI-driven inequalities that “now prevents a large segment of the population—in both developed and developing countries—from enjoying access to machine learning and artificial intelligence” (p. 334). He identifies five attributes of this new divide, the first being awareness, and argues that the algorithmic divide is more difficult to spot because the “have nots” often have limited understanding of how AI and machine learning influence their lives in positive and negative ways. Even those who are aware of its implications may find it challenging to understand how algorithms work (Yu, 2020).
As a result, algorithmic awareness is regarded as “a digital strength,” and its absence can have consequences for democratic processes (Gran et al., 2021), including worsening the divide in information access and public participation. This challenge is also prominent in global debates around the data revolution, which has been criticized for unequal data access, control, and nonrepresentation, leading to unreliable data in developing countries (Cinnamon, 2020). Similarly, a UN working group on data has concluded that many people are excluded from today's datasets because of a lack of technology infrastructure, remoteness, or discrimination. These exclusions become even more pronounced in AI-powered solutions.
The Case of World
World Network, or World, formerly known as Worldcoin, is a digital identity and cryptocurrency project co-founded by OpenAI and Tools for Humanity that combines AI, biometrics, and blockchain-based technologies, with the aim of developing a global digital ID system that can help make the global financial network accessible to all. With the slogan “for every human,” the project works on the premise that a unique digital identity and “proof of personhood” will become important in a world increasingly influenced by AI systems. Through claims that digital ID will improve access, promote financial inclusion in the global economy, and eventually enable universal basic income, the project positions itself as part of the solution to closing the digital divide.
Individuals are biometrically verified by looking into a bowling-ball-sized silver sphere known as the Orb and are issued a verified unique identifier known as a World ID. The device purports to prove “humanness” through a secure, cryptography-based technique known as “zero-knowledge proof,” in which the user's ID is separated from the biometric information attached to it, allowing a person's information to be authenticated without being disclosed (Chow, 2023). According to the company, the Orb is built on an architecture that enables it to run sophisticated AI models on the device, meaning that “algorithms operate locally on the Orb to validate humaneness, while safeguarding user privacy” (World, 2023, p. 21). While the whitepaper claims that the information (the iris code) is processed locally and deleted immediately on the device, it adds that “the biometric uniqueness service i.e., the determination of uniqueness based on the iris code is performed on a server since the iris code needs to be compared against all other iris codes of humans who have verified before” (p. 77). This determination process implies that data must be retained, which contradicts the claim that data is immediately deleted.
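The tension between immediate deletion and server-side deduplication can be made concrete with a minimal sketch. The class and method names below are purely illustrative and do not reflect World's actual implementation; the point is only that any uniqueness check of this kind requires the set of previously seen iris codes to persist on the server for future comparisons.

```python
# Hypothetical sketch of a biometric uniqueness service. Illustrative only:
# it shows why comparing a new iris code "against all other iris codes of
# humans who have verified before" requires retaining those codes.

class UniquenessService:
    def __init__(self):
        # Persisted store: the comparison set must survive each verification,
        # otherwise later duplicates could not be detected.
        self.known_codes = set()

    def verify(self, iris_code: str) -> bool:
        """Return True if this iris code has not been enrolled before."""
        if iris_code in self.known_codes:
            return False  # duplicate: this person has already verified
        self.known_codes.add(iris_code)  # retained for all future checks
        return True

svc = UniquenessService()
assert svc.verify("code-A") is True    # first enrollment succeeds
assert svc.verify("code-A") is False   # re-enrollment is rejected
assert "code-A" in svc.known_codes     # the code persists after verification
```

In practice, such a service would operate on derived representations rather than raw strings, but the structural point is unchanged: rejecting duplicates presupposes retention of the comparison data.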
Since World is run by a private company, the complexity and intricacies of the biometric uniqueness service and its algorithms are not fully known or disclosed in the whitepaper. The opacity of this process poses immediate and long-term challenges in marginalized communities where digital literacy and access remain issues. Those in rural and underdeveloped areas may not have regular access to ICTs to begin with, and without access to technological infrastructure, they will not benefit from the digital identity projects presented to them. Even in communities with regular access, gaps may exist in digital literacy and the skills needed to use and benefit from these systems. Moreover, the complexity of the information and unawareness of algorithms and how they work could still undermine people's ability to control their participation, increasing their vulnerability to data breaches, data misuse, and other forms of exploitation. These challenges require multidimensional approaches that connect issues of data use, ownership, access, and representation to questions of social justice, particularly in contexts of international sustainable development (Heeks & Renken, 2016). One way is to adopt a data justice framework (Taylor, 2017), which, in response to concerns about datafication, offers a guide for assessing how individuals are represented by data, their autonomy to decide how much to engage or disengage with data systems, and their freedom from discrimination and unfair treatment in these data processes.
From a data justice standpoint, World's activities in the global South are deeply concerning. In Kenya, thousands of people lined up to receive 25 digital tokens (equivalent to about 50 USD) in exchange for their iris scans. Critics have raised concerns over the company's reliance on biometric data and have questioned whether the individuals who offered up their biometrics in exchange for tokens understood the implications of signing up and what steps would be needed to opt out after sharing their data. Related to the issue of informed consent were concerns about token-earning incentives and the exploitative interactions between Orb operators and the people they signed up (Guo & Renaldi, 2022; Jain et al., 2024), including reported incidents of fraud, theft of tokens, and people selling their World IDs and tokens for cash (Kemp, 2023).
These concerns about the project's data collection and storage methods made international headlines when the Kenyan government suspended World's operations in the country while it carried out an investigation (Ogonjo & Kitili, 2023). Similar issues have led to the banning or pausing of its activities in other countries, including Nigeria, Portugal, and Germany (Reuters, 2024b). While the company is operational in several European countries, most of the biometric data collected from millions of people came from the global South. Independent oversight bodies in Spain and Germany expressed concern about the project's adherence to the European Union's General Data Protection Regulation and called on the company to halt activities and delete the iris scans (Reuters, 2024a). By contrast, African countries with underdeveloped or weak data protection laws may be seen by private companies as lucrative opportunities to exploit loopholes in data privacy legislation.
The activities of World in Kenya could be viewed as part of the dominance of Western technology firms in the design and development of AI solutions and the extraction of data and labor. This dominance has been theorized as a form of digital or algorithmic colonialism (Birhane, 2020; Couldry & Mejias, 2019), pointing to the multitude of risks associated with the convergence of experimental and untested technologies (Madianou, 2019).
Conclusion
With over one billion people globally lacking identification documents, identity has been prioritized by international development actors and highlighted in the 16th Sustainable Development Goal to promote peace, justice, and strong institutions. In response, public and private actors have offered digital solutions powered by AI and other technologies to address this issue and have promoted digital ID as a tool for recognition and empowerment, claiming these systems offer a means of connecting marginalized communities to global financial systems. Still, these systems present ethical challenges, including the risk of discrimination, privacy concerns, and issues of transparency, accountability, and further marginalization. Projects such as World do not meaningfully address the digital divide in terms of connectivity, skills, or algorithmic awareness. In the global South, such projects are primarily data extraction exercises involving millions of people, including those with limited or no access to ICTs.
In our commentary, we have shown how private companies shape AI development in international development contexts via digital identity projects. We discussed how algorithmic (un)awareness as a new dimension of the digital divide is accelerated by the widespread adoption of AI and lag in regulatory frameworks, arguing that World's activities in the global South are possible because of the digital divide and the lack of or limited safe and ethical mechanisms in place for AI development (Global Index on Responsible AI, 2024). Kenya is in the early stages of an AI policy framework which leaves much of AI development and deployment unregulated (Onyango, 2024).
As World is set to resume operations in Kenya after police dropped their investigation into the company (Reuters, 2024b), it is important to consider the broader ethical implications of the increasingly prominent role played by private companies in AI and digital ID development. Thus far, the company claims to have signed up 15 million users across 160 countries (World, 2024). The increasing interest in AI, its rapid development, and the growing presence of private actors make this a critical time for addressing digital rights and data justice, particularly for people in the global South.
Footnotes
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
