Abstract
This article examines the societal and structural risks linked to digital ID programmes in Africa and the adequacy of the regulatory and legislative responses in progress in this area. It unpacks the disjointedness between the societal and structural nature of the risks and the individualistic orientation of the right to privacy and data protection often presented as a response to such risks. To address these incongruities, the article argues for people’s right to privacy as a complementary layer of protection to mitigate the societal and structural risks associated with digital ID programmes. The account of people’s privacy proposed here draws on the people-centric African perspective on human rights as reflected in the African Charter on Human and Peoples’ Rights (the Banjul Charter), African philosophy rooted in communality, scholarship on group privacy, people-centric due diligence assessment, and the AU Data Policy Framework.
Introduction
Biometric ID 1 systems are often promoted for two potential attributes: first, they contribute to inclusive and sustainable development, and second, they stand as a prerequisite for the realisation of the right to identity (World Bank, 2022). However, the empirical reality of biometric ID systems does not reflect the realisation of inclusion or identity for at least two reasons. As some digital ID programmes have shown, such as projects in West African countries (World Bank, 2020), the system de-links the ID from identification in terms of citizenship and legal status. This defeats the human rights purpose of the programme, which is to protect and fulfil people’s right to identity. Meanwhile, evidence from lawsuits and reports by rights groups suggests that digital ID programmes have led to the systemic exclusion and marginalisation of vulnerable groups (Mutung’u, 2022; Srinivasan et al., 2018). This defeats both the goal of ‘leaving no one behind’ by ensuring full inclusion and the right to identity. By defeating the human rights to identity and inclusion, such projects not only violate human rights provisions but also deepen structural and intersectional injustices.
Against this backdrop, this article examines the misalignment between the promises of biometric ID programmes in Africa and the empirical experiences of them through the frame of two questions: (1) How do the regulatory responses conceptualise the societal risks of digital ID programmes and frame possible solutions? (2) What are the limitations of such framing, and how can it be improved? To answer these questions, the article takes digital ID programmes from West Africa as its empirical object and draws on interdisciplinary theoretical accounts of the rights to privacy and group privacy, African perspectives on human rights, and STS scholarship.
The article argues that one reason for the disjointedness between the promises of biometric ID programmes and the empirical experiences of them lies in how regulatory responses conceptualise potential risks and frame remedial alternatives. Regulatory and legislative responses to mitigate the potential and actual risks associated with the deployment of biometric ID across Africa primarily focus on strengthening legislation on the right to privacy and personal data protection, and different regulatory adjustments are underway as a result. While these are positive measures, this article argues that a gap still exists between the nature of the risks that emerge from big data-driven biometric ID programmes and the conceptual underpinning of the mainstream Euro-centric right to privacy. The article identifies a misalignment between the societal and structural nature of the risks that emanate from biometric ID programmes and the framing of those risks and their potential solutions through an individualistic conception of the right to privacy and data protection.
Building on existing scholarship on group privacy (Loi & Christen, 2020; Taylor et al., 2017) and the social value of the right to privacy (Cohen, 2019; Nissenbaum, 2009; Regan, 2000; Solove & Citron, 2021; Westin, 2003), the article will show the inadequacy of the individualistic right to privacy in the context of big data-driven public and private decision-making. Furthermore, using insights from science and technology studies (STS) as a method (Murphy & Cuinn, 2013) and concept (Amoore, 2020; Boenig-Liptsin, 2022; Hassan, 2022; Mezzadra, n.d.; Rouvroy et al., 2013), the article will shed light on the disjointedness between the risks and the regulatory responses. To address the misalignment, it will introduce an alternative perspective that builds on the current privacy debate through Africa’s people-centric approach to rights and duties. The article will argue for people’s right to privacy as a complementary layer of protection to mitigate the societal and structural risks associated with digital ID programmes. This concept of people’s right to privacy is inspired by the people-centric African perspective on human rights as reflected in the African Charter on Human and Peoples’ Rights (the Banjul Charter), African philosophy rooted in communality (Afolayan & Falola, 2017; Menkiti, 2005; Shivji, 1989) and the AU Data Policy Framework, which creatively incorporates people’s privacy; its practical application is supported through ex-ante and ex-post due diligence assessment mechanisms and collective access to remedy for those adversely affected by biometric ID programmes.
The article is structured in six sections, the first of which is this introduction. The second section presents a literature review on the right to privacy in light of the societal risks associated with digital ID programmes. This part offers an appraisal of the individualistic orientation of the right to privacy and its limitations in providing an adequate response to the societal and structural dimensions of the risks that emanate from digital ID programmes. The third section offers an empirical overview of digital ID programmes and related challenges, using examples of digital ID programmes deployed in West Africa. The fourth section highlights the gap between the nature of the risks associated with digital ID programmes and the individualistic privacy rights regulatory responses. The fifth section offers an account of people’s right to privacy as a way forward, and the sixth section concludes with practical applications of people-centric privacy rights through due diligence and collective remedy mechanisms.
The Right to Privacy Appraisal: Beyond the Individual
Overview of the Rights to Privacy in Africa
The primary regulatory and legislative responses to the potential risks of deploying digital ID programmes in Africa often call for strong safeguards for the individual rights to privacy and data protection. According to the UNCTAD database, 33 African countries have adopted personal data protection laws, while six countries have drafted such laws (UNCTAD, 2023). However, such responses risk treating the right to privacy as a panacea for the complex and systemic social risks associated with digital ID deployment. Therefore, this section begins with an overview of the literature on the right to privacy in Africa, both conceptually and doctrinally, and highlights key appraisals.
The concept of the right to privacy has evolved over time and has become increasingly important in today’s technological societies. Despite differences between the US and European conceptions of privacy, both follow an individual-centric account of the right. This account is also reflected in international human rights instruments as an individual right to keep one’s personal information and private life confidential and to be protected from unwanted intrusions, surveillance or scrutiny by others, including the government and private actors. 2 Despite its growing importance, there is still no consensus on the definition of the right to privacy. The UN Special Rapporteur on the Right to Privacy noted that the concept of privacy is related to individual autonomy and self-determination, which safeguard and enable the fundamental right to dignity and the free, unhindered development of the self. 3
In the African context, the Banjul Charter does not explicitly recognise the right to privacy. As argued by scholars, this exclusion is believed to stem from the drafters’ perception that the right promotes individualism, which runs counter to the communalist values of African societies (Razzano, 2021). On the other hand, it is also argued that although the right to privacy is not expressly mentioned in the Charter, it can be read into it through other rights, such as the rights to integrity, dignity, liberty and security and the right to health (Ayalew, 2022). With the surge of digitalisation across Africa, privacy and data protection mechanisms are being introduced to safeguard the rights to access, update and rectify personal information, all rooted in the right to privacy. Among other rights, the right to privacy is protected by the 2019 African Declaration on Freedom of Expression and Access to Information (the African Declaration), Article 40 of which expressly guarantees the protection of the confidentiality of personal information and communication, as well as anonymity. Likewise, Article 8 of the 2014 African Cybersecurity and Personal Data Protection Convention (the Malabo Convention) offers important protection against the violation of personal data and privacy rights. 4 The Malabo Convention, which entered into force in June 2023, stipulates that ‘any form of data processing respects the fundamental freedoms and rights of natural persons…’. 5 Furthermore, the 2022 African Union Data Policy Framework (the Framework), 6 while stressing the right to privacy and personal data protection, takes a creative approach by spotlighting collective privacy rights.
Echoing the Framework’s direction towards people’s right to privacy, this article argues that there is a need to transcend the narrow, individual-focused right to privacy and data protection. The push towards embracing the right to privacy in the African regional and sub-regional human rights systems is reasonable and, as argued by others, is necessitated by globalisation and the Fourth Industrial Revolution (Ayalew, 2022). However, the uncritical adoption of this approach is problematic. It is therefore important to pay attention to the emerging appraisal of individual-focused privacy rights in the age of big data and to how the African perspective can contribute to filling the gap, which is what this article hopes to do.
The debate on the value and subjects of privacy rights is not new, and the dominant, primarily western account of the right to privacy is based on the premise of individualism. This account locates privacy in the individual system of liberty, along with the popular but vague concept of the ‘right to be let alone’ (Emerson, 1979). However, a second account focuses on the social value of privacy. As Solove (2015) argues, this account holds that the protection of the individual’s right to privacy has a social basis and social justification insofar as it serves the good of society. This latter account asserts that privacy is an internal dimension of society, and that ‘We protect individual privacy as a society because we recognise that a good society protects against excessive intrusion’ (Solove, 2015, p. 80). That said, even in this second account of privacy, the primary object of protection is the individual, and the argument has a utilitarian basis. The African account of the right to privacy as reflected in the Framework and as proposed in this article, while closer to the social-value account, has its own peculiarities, as will be discussed.
Privacy concerns in the digital ID context are entangled with the collection and processing of information for identification and the production of a networked digital identity infrastructure. This entanglement calls into question not only the extent to which the right to individual privacy and personal data are protected but also the extent to which the mainstream individualistic account of these rights suffices in safeguarding both the individual and society as a collective. Clarifying the different accounts of privacy is crucial to the context of digital ID in which this article is interested. In the following section, I will briefly summarise the core critiques of the individualistic account of privacy within the context of digital ID.
The Right to Privacy Appraisal
The concept of privacy is an ongoing subject of debate (Solove, 2008). The mainstream interpretations of privacy rights are criticised for being individual subject-centric, narrow and context-blind. The individual liberty-based understanding of the right to privacy is criticised for overly focusing on the autonomy and private life of the individual subject (Cohen, 2019). A comprehensive critique of the association of privacy with individual interests is given by Regan (2000; 2002); Regan argues that the value of privacy in promoting autonomy, human development and freedom of thought and action must not stop at the good these bring to individuals but must also reflect their benefit in creating a vibrant democratic society and solid institutions. This argument aligns with Solove’s concept of the social value of privacy, which contests the practice of weighing privacy as an individual right against the common good. Solove (2015) insists that the inherent value of protecting individual privacy is social, which means that the protection of individual freedoms such as privacy is what makes a ‘good’ society. Without necessarily opposing the individual dimension of privacy, Cohen (2019) argues for detaching privacy from the liberal legal theory built around individual subject-centred protection of the autonomous self through the notion of consent. Decisions and experiences in the digital space cannot simply be considered in terms of individual choice and consent. Individual choices and consent are socially shaped by networks of relationships, practices, beliefs and embodied experiences (Kim, 2019).
In addition to the critique of the consent-based privacy narrative, the individualistic account of privacy is also criticised for being context-blind, as it is disproportionately drawn from western empirical experience (Arora, 2019). Scholars advise that the very notion and scope of privacy are dynamic and context-specific, meaning that they depend on the sociocultural values and political outlook of the society concerned (Westin, 2003). In this regard, Nissenbaum (2009) argues that privacy as a right to the proper flow of personal information is relative to context, or dependent on the circumstances in which an act is prescribed. Context, according to Nissenbaum, comprises ‘abstract representations of social structures experienced in daily life’ (Nissenbaum, 2009, p. 134). Experiences in the digital space and human-digital encounters such as digital ID programmes are shaped by background socio-economic, political and historical factors and other dynamics through which existing power asymmetries invisibly play out. These factors cumulatively determine people’s access to, usage of and degrees of leverage in their encounters with technologically mediated relations, such as digital ID systems, which puts the notion of privacy beyond individual choice and consent. This explains the limitations of characterising privacy as an individual choice and choice as a deliberate, autonomously performed act when there are few (or no) viable alternatives (Arora & Scheiber, 2017; Benjamin, 2019).
Based on these critical perspectives, the notion of privacy that would present at least some kind of solution to the problematic digital ID programmes described below is a form of privacy that nurtures the collective as an object of protection and one that is context-sensitive in terms of outlook. With this in mind, the account of privacy protection this paper seeks draws on the people-centric African human rights perspective, which will be discussed following the empirical overview of digital ID programmes in Africa.
Biometric ID in Africa: Empirical Sense-Making
Recent data from the World Bank (WB) estimate that a little under 850 million people still do not have an official identification (ID). 7 Of these, over 90% live in low-income and lower-middle-income countries, and over half are situated in Sub-Saharan Africa. This has significant implications for people’s access to public services and for the realisation of their fundamental rights, including but not limited to the rights to identity, movement, work, health, access to information and education. The adverse impact of this lack of identification disproportionately affects vulnerable and marginalised groups such as children, women, older people, ethnic minorities, people with disability and religious minorities. The fact that a significant number of people live without ID, and therefore risk exclusion from increasingly digitalised public services and the inability to realise fundamental rights, is used as the primary justification for deploying digital ID programmes with biometric components. Accordingly, the World Bank Group’s Identification for Development (ID4D) initiative envisions enabling ‘all people to exercise their rights and access better services and economic opportunities in line with the Sustainable Development Goals’ 8 —which covers the realisation of the right to recognition before the law and the right to birth registration. 9 Different digital ID programmes across Africa have therefore been deployed against the backdrop of two promised attributes: inclusion for sustainable development and the human right to identity.
However, in empirical terms, it is debatable whether these two objectives are meaningfully realised. While the right to legal identity and access to public services are fundamental human rights and warrant safeguarding, the practical impact of different digital ID initiatives appears to have adverse consequences that defeat both the objective of the right to identity and that of inclusion. In particular, the foundational ID (fID) programme has been criticised for effectively de-linking people from their legal status and entitlements (Centre for Human Rights & Global Justice, 2022a). IDs can be used to identify an individual through their unique biometric credentials and biographic data. However, this does not entitle the person identified to any legal status such as nationality or residence, nor does it necessarily provide them with access to entitlements such as public services. In effect, the fID is a means of authentication and identification at the government or private sector level rather than a provision for establishing legal status and entitlements. This is specifically the case with the digital ID project in West Africa. As specified in the WB’s West Africa Unique Identification for Regional Integration and Inclusion, Phase 2 document, ‘the fID system is de-linked from nationality, universally accessible, and enables registration of all people’ and will ‘neither accord nor recognise a person’s rights’ (World Bank, 2020, p. 22), beyond authentication (World Bank, 2020, pp. 95–96). The WB specifies that the core function of the ID is to provide multipurpose proof of government-recognised identity credentials and, in doing so, to support governmental and non-governmental functions such as service provision (World Bank, 2020, p. 95).
According to the WB, the rationale for de-linking legal status and entitlements is to mitigate the risk of exclusion in cases in which applicants are unable to prove their nationality or legal status and to minimise documentary requirements (World Bank, 2020, p. 59).
As widely criticised by rights advocates (Access Now, 2022), the fID project defeats the purpose of realising the right to legal identity and inclusion. First, as the fID does not establish any legal status or legal entitlement, rights and access to benefits are conditional on the decision of the service provider (public or private), who will use the fID for verification purposes only. In this context, the fID can verify a person’s identity, but verification does not confer on the verified individual any right to access public services. As such, using human rights language to justify the ID4D programme may amount to rights-washing. The project also carries the risk of turning the whole population into a single hyper-visible entity by exposing the collective body of the people to private and public surveillance and capitalist exploitation at corporate and governmental levels (Amoore, 2020; Zuboff, 2019). By doing this, it heightens the risk of a perpetual state of public and private surveillance and datafication, as explained by scholars in the context of risks currently being faced in the Global North (Eubanks, 2018; Zuboff, 2019).
Even in cases where digital IDs are linked to legal status and citizenship, such as Kenya’s Huduma Namba, evidence shows that such programmes pose a high risk to the right to privacy and personal data protection while replicating existing inequalities and deepening systemic and path-dependent marginalisation (Mutung’u, 2022), creating new forms of inequality in the process. The risks of social exclusion and marginalisation that come from the use of digital IDs linked with biometrics have been highlighted by scholars (Lyon, 2007; Wickins, 2007), and although these risks are often taken as isolated incidents of specific rights violations, they are in fact deeply structural (Wodajo, 2022). Structural inequalities differ from perceived human rights violations, and regulatory and adjudicatory institutions seldom capture them, as they are normalised and embedded within social, economic and political systems (Young, 2011). Controversies related to Kenya’s digital ID programme, including national and transnational legal actions, demonstrate the risk of replicating structural inequalities through the programme. For instance, in Nubian Rights Forum v. the Attorney General of Kenya, the High Court of Kenya concluded that collecting DNA and GPS data for the purpose of identification was intrusive and unnecessary and was thus a risk to privacy. 10 However, rights groups argued that the ruling still did not adequately consider the discriminatory and exclusionary effects of the deployment of biometric ID systems. Among other factors, the marginalisation of certain groups that resulted from the deployment of the digital ID can partly be traced to the British colonial legacy and the footprints it has left in present-day Kenya. Some of the marginalised ethnic groups, such as Kenyan Nubians, are descendants of people brought to Kenya during the days of the British Empire (Dahir, 2020).
It is this past legacy that has left certain groups vulnerable to discrimination and to the denial of access to different public services, with the deployment of the digital ID replicating old prejudices. In this context, mere remediation of individual privacy and data breaches does not address the path-dependent structural problem that has put these groups in a vulnerable position.
Replicated inequalities and different aspects of social injustice linked to biometric ID projects are further complicated by the involvement of a network of national and global actors (Centre for Human Rights and Global Justice, 2022b), all of whom have their respective visions of the world, priorities and interests that shape the regulatory and technical infrastructure of biometric ID systems. For countries in the global South where the digital ID projects are implemented, these projects have uses that go beyond identification for development and inclusion, such as security and surveillance purposes (Centre for Human Rights and Global Justice, 2022b, pp. 21–23). For donor countries in the global North, the deployment of digital ID is a means of controlling migration and predicting population movement through the analysis of big data (Centre for Human Rights and Global Justice, 2022b, pp. 21–23; Molnar, 2019). For international businesses and transnational corporations, it represents a new business opportunity, and the interests of such companies are unlikely to focus on the consequent human rights risks and injustices that disproportionately impact marginalised communities. In most cases, issues relating to human rights and justice are kept on the margins; in other cases, they are instrumentalised and framed in ways that serve the interests of those in positions of power (the state, transnational corporations, donors, etc.). For example, a recent transnational lawsuit brought before the Paris Tribunal against Idemia, a French firm that supplied biometric capture kits to the Kenyan government, alleges that the company failed to conduct a human rights risk assessment covering exclusion and marginalisation and to design mitigation measures, as required under the French Duty of Vigilance Law (Hersey, 2022).
The empirical landscape and practical challenges described above, such as the risk of rights violations and the replication of structural injustices, call for creative intervention. As the following section will show, it is critical to examine the gap between these challenges and emerging regulatory responses in Africa and the causes of such disjointedness between the problem and ‘solution’.
Mind the Gap Between the Challenges and Regulatory Responses
Most regulatory, legislative and adjudicatory responses to the digital ID-related challenges identified above draw on the discourse of strengthening personal data protection and the right to privacy, with less attention paid to the structural and societal dimensions of these challenges. This can be inferred from proliferating regulatory initiatives around emerging technologies, more specifically data and AI, and from some judicial decisions. For instance, in the Nubian Rights Forum case discussed above, even though the court clarified the need for a robust regulatory and data protection framework to address the possibility of exclusion, it failed to recognise the structural and historical inequalities that shape the outcome of such digitalisation processes (Muriungi, 2019). The Kenyan Data Protection Act was adopted a few months after the verdict. While this trend shows how the importance of the right to privacy and data protection is gaining ground in the legislative and policy landscape across Africa at both national and regional levels, the risks of exclusion, marginalisation and surveillance linked to various digital ID programmes still represent pressing challenges (Centre for Human Rights and Global Justice, 2021).
One reason for the persistence of these challenges is that, as is the case with the governance of other emerging technologies such as AI (Hassan, 2022), the regulatory and legislative responses, whether privacy rights or data protection laws and policy measures, are not rooted in the empirical and epistemic reality of Africa and are not cognisant of the inter-relationality of data and of governance through data. In particular, the disproportionate focus on the individualistic notion of the right to privacy and personal data protection seems inadequate to cover the contemporary big data processing linked to biometric ID programmes. This article argues that the misfit between the risks of digital ID programmes and the regulatory responses is attributable to two closely related and interdependent factors: (a) the diffused societal nature of the risks that arise from big data processing and the consequent state of societal hypervisibilisation linked with digital ID systems, and (b) the individualistic orientation of privacy and data protection solutions.
Let us start with the first factor. Drawing on John Dewey’s account of critical inquiry, Solove (2015) stresses the importance of beginning by exploring the problems that have been experienced rather than an abstract, universal notion of privacy. Following this approach helps contextualise the risks of exclusion, marginalisation and hypervisibilisation, or perpetual tracking, in the context of digital ID programmes. Privacy violations in general, and structural injustices such as exclusion, marginalisation and vulnerability to surveillance, are not risks limited to an individual in isolation (Post, 1989; Schwartz, 2000; Simitis, 1987; Solove & Citron, 2021). The unique nature of these risks resides in their societal and relational dimensions, as described below.
One core factor is the ways in which big data collected for digital ID purposes are processed and the context of data inter-relationality throughout the lifecycle of the data. This covers the process by which data are collected, what data are involved, how the data are stored, used and combined with other data, and the future use of the information inferred from this big data. As shown by Amoore, data are clustered to map patterns of relations and to define attributes in order to generate a condensed output of actionable meaning (Amoore, 2013). The aggregated pool of data becomes an archive of the future and the basis of pre-emptive decision-making (Amoore, 2020). In this process, although data are collected from an individual, often (though not always) with justifiable intentions, their effect is not limited to the individual data subject. Instead, the effect concerns more significantly the pattern of data relationships and the group profiles that the data analysis constructs. Solove and Citron (2021) argue that the consequences of unknown future uses of personal data and breaches of individual privacy do not stop at the harm they bring to the individual. Due to the relationality of data and the network effect, the loss of privacy of one individual impacts the privacy of others (Roessler & Mokrosinska, 2013). This harm is dispersed among a large number of people and, due to its large scale, is effectively societal (Viljoen, 2021).
The effect of big data analytics in the context of biometric data collection for digital ID intended to facilitate public and private services, for example, welfare delivery systems and poverty mapping, transcends the narrow individual dimension of data and privacy protection (Taylor et al., 2017). This is primarily due to the use of data analytics to determine groups through clustering and profiling, either using defined attributes based on existing group identity, such as ethnic, racial, gender or socio-economic status (Kammourieh et al., 2017), or by creating new group categories generated through machine learning (Taylor, 2017). As argued by scholars of group privacy, in big data analytics the individual is incidental; the primary interest and the consequent data-driven decisions target the group (Loi & Christen, 2020; Taylor et al., 2017). Moreover, clustering and profiling along existing group identities such as ethnicity, race, religion and gender identity, and the consequent exclusion or targeting and tracking, are inherently forms of structural injustice (Achiume, 2020; Obermeyer et al., 2019; Wodajo & Ebert, 2021). In this context, access to and control over big data on a country’s or region’s population significantly increases the control that governments, transnational corporations and other actors can exert over society. While these data can be harnessed for various beneficial purposes, they also pose the risk of replicating and amplifying existing inequalities and vulnerabilities. By allowing these entities to track the movements and habits of groups through aggregated data, they increase the risk to vulnerable populations: such groups may be more easily identified, classified and monitored, which can lead to potential abuses of power and further marginalisation.
The second gap between the risks associated with digital ID and regulatory approaches builds on the societal nature of the risks associated with biometric ID projects discussed above and the limits of mainstream privacy and data protection. Contrary to the societal nature of the risks that emanate from digital ID programmes, the concept of privacy and data protection incorporated into legislative and regulatory measures disproportionately focuses on the individual as the common denominator, often referred to as the data subject. While the necessity of robust personal data protection and the right to privacy is unquestionable, this article argues that there is good reason to believe that the liberal, individual-centric privacy right is ill-equipped to adequately address the societal and structural nature of the risks posed by biometric ID programmes described above. Hence, this article argues for ways to go beyond the individualistic orientation of the right to privacy and data protection. The rights to privacy and data protection under conditions such as the collection of nationwide or region-wide biometric data must be supplemented by a broader and more dynamic account of privacy, as discussed in the following sections.
People-centric Privacy, Lessons from the African Human Rights Perspective
Considering the disconnection between the societal and structural nature of the risks that arise from digital ID programmes on the one hand and the individualistic orientation of privacy rights on the other, this article proposes a people-centric concept of privacy that would extend protection to the collective or group. This account of privacy supplements the existing individual right to privacy by extending protection to collectives, and in doing so it emulates civil liberties at the individual scale. While the notion of group privacy is not novel and has been advocated by several scholars (Loi & Christen, 2020; Mantelero, 2017; Taylor et al., 2017), this article builds on these works from an African perspective. The suggestions made here are inspired by the people-centric focus of the African perspective on human rights as reflected in the Banjul Charter, the African philosophy rooted in communality (Afolayan & Falola, 2017; Menkiti, 2005; Shivji, 1989) and the recent AU Data Policy Framework, which creatively incorporated people's privacy, albeit with less clarity.
The concept of people's rights is one of Africa's significant contributions to international human rights norms and laws, as reflected under the Banjul Charter (Okafor & Dzah, 2021), but what exactly would a people's right to privacy look like? In theory, the first step is to expand the subject of rights from the individual level, with its focus on digital rights, to the collective level of the people as group-based rights-holders. This conceptual underpinning of the collective or the people as rights-holders is rooted in the African concept and epistemology of the person as a communal entity, a person as part of society rather than an isolated abstract individual (Cobbah, 1987; Kiwanuka, 1988). It is supported by Paragraph Four of the preamble of the Banjul Charter, which asserts that member states agreed to take '…into consideration the virtues of their historical tradition and the values of African civilisation which should inspire and characterise their reflection on the concept of human and peoples' rights'. This viewpoint recognises and reflects Africa's political history and the epistemic violence the continent faced in both the colonial and post-colonial periods. In light of this, applying African humanistic and communal values to the concept of privacy enables us to see a two-dimensional protection: first, the protection of the collective body through the protection of the individual, and second, the protection of the collective as an autonomous entity.
The first dimension, the protection of the body of the collective as a relational and interdependent society through the protection of the individual, draws on the African Ubuntu philosophy. Ubuntu sees a person through his or her community, a sense of 'I am because we are' that is rooted more deeply in the interconnectedness of humanity than in the individual self (Metz, 2012). This account in part resembles the social value of privacy advocated by Post, Regan and Solove, among others (Post, 1989; Regan, 2000; Solove, 2015), which locates the primary value of privacy in society. As discussed in the previous sections, the justification for the social value of privacy rests on two claims. The first is that the violation of an individual's right to privacy has societal ramifications, as it has a diffused and lasting implication for others. The second is that protecting and respecting privacy is simply a defining feature of a good society. This approach, however, sees society as an aggregate or sum of individuals: the social value of privacy attends to the relationality of big data, their future use and their consequential impacts beyond an individual, but it does not consider the individual as relational or communal. What makes the African perspective different from this western approach is that it does not see the individual as an isolated atom whose aggregation forms a society, but sees people as inherently interrelated. In the African conception of the person as relational, the protection of the individual amounts to the protection of society because of the overall interconnectedness of humanity (Mhlambi, 2020). This approach seeks to protect a person's privacy not as a means to an end (i.e., protection for the good of society), but because the person is the society.
The second dimension, the protection of the collective as an independent entity, draws upon the concept of people as subjects of rights under the Banjul Charter and other African human rights instruments. The idea of groups or peoples as subjects of rights is also recognised by international human rights instruments, mainly the International Bill of Rights, although not in the context of the right to privacy. However, the definition of 'people' remains part of a long-standing debate. In the context of big data, scholars argue that technology also creates or determines groups, beyond socially constructed groups, that can form collectives worthy of legal protection (Kammourieh et al., 2017). In this regard, Mantelero (2017) argues for a form of collective privacy that protects a group of people categorised through data collection and analytics, noting that their interests as an autonomous collective entity may differ from the interests of the individual. The proposal suggested in this article adopts a flexible concept of the collective and the group that accommodates both groups created by emerging technologies and groups that become victims of structural injustice through digital encounters as a result of existing group identities, such as the exclusion or targeting of groups based on racial, ethnic or gender identities or socio-economic status. For the sake of clarity, the concept of a collective that I apply here is inspired by the concept of people in African human rights scholarship. By people or collective, I refer to groups that collectively share some form of attribute or identity and become victims of data-driven technological encounters, such as digital ID programmes, due to their attributes as a group. In the context of digital ID programmes linked to biometric data, such a conception would protect people or vulnerable groups against exclusion and against being targeted by governments or other actors.
However, it is important to note that this group-centric account of privacy does not oppose or disregard the relevance of individual rights. Instead, it claims that collective and individual rights can be used to complement and reinforce one another.
Conclusion: Practical Application of People-Centric Privacy
The application of the people-centric right to privacy and data protection as a response to the societal and systemic risks that emanate from biometric ID systems would centre on ex-ante and ex-post measures. The ex-ante measures would take the form of active and meaningful participation of people in the design and deployment of the technology, as well as in the crafting of the policies and laws that regulate its deployment. This approach would draw on local indigenous knowledge coupled with existing public participation mechanisms, including thorough ex-ante and continuing impact assessments. These assessments would specifically require evaluating the societal, people-centric human rights and systemic impacts of biometric ID programmes on the collective well-being of the community, with a particular focus on its vulnerable sections.
One way to experiment with this approach would be through an adapted human rights due diligence mechanism, as introduced by the UN Guiding Principles on Business and Human Rights (UN Guiding Principles). As stipulated by Principle 18 of the UN Guiding Principles, due diligence requires identifying and assessing any actual or potential adverse human rights impacts with which companies may be involved, either through their own activities or as a result of their business relationships (Bonnitcha & McCorquodale, 2017). However, the human rights due diligence mechanism designed by the UN Guiding Principles is currently limited in scope and focus, primarily addressing human rights risks that may arise from business activities by the private sector or companies, either alone or in complicity with a government. Hence, the present proposal seeks to broaden this requirement to cover all social impacts and systemic and structural injustices, in addition to identifiable and attributable human rights impacts. This broadened scope would apply not only to the private sector but also to state-run projects, such as biometric ID systems, as well as to donors and other stakeholders involved in the deployment of such technology and its governance. When undertaking the due diligence assessment, it is crucial to engage every community that will be impacted by such projects in meaningful consultation and in the design of the biometric ID system. This impact assessment should be a continuous process even after the deployment of the biometric ID system: all involved stakeholders and providers of the different infrastructures that sustain the system must regularly undertake this process and take measures to address any potential or actual risks to society. Moreover, it is important that the impact assessment mechanism divorce itself from an individualistic rights-based approach by adopting a people-centric orientation.
Additionally, there are two potential ex-post measures. The first ensures continuous monitoring and evaluation of biometric ID systems after their deployment, including regular audits and feedback loops with the affected communities. The second takes the form of collective access to remedy mechanisms in cases of any form of rights violation or injustice that may arise from the deployment of the ID systems. This particularly requires the flexibility of the adjudication process to accommodate not only individual claims of rights violations and the provision of compensation for experienced harm, but also to: (a) create a conducive environment for collective action against both state and private actors, such as corporations, through mechanisms such as class and representative actions (Wodajo, 2024); and (b) provide remedies that go beyond compensation for specific harm by reforming the policy directions as well as the institutional, regulatory and legislative approaches to such technologies.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The author received no financial support for the research, authorship and/or publication of this article.
