Abstract
COVID-19 has hit a world in which social protection schemes are increasingly augmented with digital measures. Digital identity schemes in particular are being adopted to match citizens’ data with social protection entitlements, enabling authentication through demographic and, increasingly, biometric data at the point of access. In this commentary, I discuss three sets of implications of COVID-19 for digital social protection, whose central trade-off – increasing the probability of accurate user identification at the cost of greater exclusions – has become even more problematic during the crisis. I argue that three forms of data injustice – legal, informational and design-related – previously identified in datafied social protection schemes will need to be monitored in the post-pandemic scenario. I finally observe that the crisis exposes the long-term need to place digitality within social protection schemes that expand user entitlements rather than constraining them. Implications of these reflections are drawn for the study of data-based social welfare interventions.
Introduction
Social protection can be seen as ‘the set of policies and programmes aimed at preventing or protecting all people against poverty, vulnerability, and social exclusion throughout their lifecycles, particularly the most vulnerable groups’ (FAO, 2018). The World Social Protection Report 2017–2019 shows that 45% of the world’s population is effectively covered by at least one social protection benefit, and links effective social protection to the achievement of global hunger and poverty reduction goals (ILO, 2019). Over time, an idea of ‘transformative social protection’ (Devereux and Sabates-Wheeler, 2004) has largely replaced advocacy for short-term economic assistance, shifting the focus from social safety nets to social protection as part of long-term social policy (Devereux, 2016; Devereux and Sabates-Wheeler, 2004).
Over the last 15–20 years, social protection schemes have become increasingly integrated with digital measures ‘at the service’ of their good functioning (Gelb and Metz, 2018; World Bank, 2016). In the light of issues of malfunctioning, leakage and diversion, improving access to social protection has been regarded as particularly important, countering mechanisms that impede people’s ability to obtain their entitlements (Gelb and Clark, 2013). In this context, digital identity schemes – where ‘identification, authentication and authorisation are performed digitally’ (Nyst et al., 2016: 8) – have emerged as a key route to improving social protection, matching users to their entitlements and enabling access on the basis of such matching. In this way, exclusion errors that leave out entitled subjects and inclusion errors that admit non-entitled subjects are meant to be simultaneously combated, ensuring the correct targeting on which effective social protection is premised (Muralidharan et al., 2016).
It is against this backdrop, one of highly diffused and digitised social protection, that the COVID-19 crisis has hit the world. With lockdown measures disproportionately affecting vulnerable groups such as poor, migrant and informal workers (Drèze, 2020), extant social protection schemes have acquired even greater importance for their users, and in several cases existing provisions have been increased in volume and coverage to help face the crisis. Where social protection was already crucial, COVID-19 has further illuminated its importance and magnified the consequences of exclusion. With the crisis spreading through the Global South, national governments have embarked on new measures of social assistance, while contending with implementation issues that predated the outbreak.
In this commentary I reflect on diverse implications of COVID-19 for digital social protection systems. A central trade-off – that of increasing the likelihood of accurate identification, at the cost of greater exclusion of entitled users – is exposed as a pre-existing feature of digital social protection, along with its effects on beneficiaries under lockdown. I use a previous classification (Masiero and Das, 2019) of forms of data injustice in social protection to recognise legal, informational and design-related issues suffered by such schemes under COVID-19, reflecting on potential avenues to address their consequences. I conclude by illustrating the need to place digitality within social protection schemes that expand user entitlements, rather than constraining them as a consequence of extreme targeting. I draw implications of these points for data-based social protection in the post-pandemic scenario.
Datafied social protection: A central trade-off
India’s Public Distribution System (PDS) is one of the largest, food security centred social protection schemes worldwide. Launched in 1965 along the lines of pre-Independence rationing, the scheme provides subsidised goods to the nation’s poor through a network of ration shops, where monthly quotas are accessed by entitled users. Affected by long-term issues of leakage and diversion, the PDS became targeted in 1997 and started being integrated with measures of digital identification across the states. Among such measures, one adopted by several Indian states is based on Aadhaar, the national digital identity platform that assigns all enrolees a unique 12-digit number and stores their biometric data in a central repository, allowing identification through biometric credentials at the point of sale of PDS goods.
Research has highlighted the dichotomy between the alleged anti-leakage rationale of the Aadhaar-based PDS and the exclusionary effects yielded by the same technology (cf. Drèze and Khera, 2015, 2017; Khera, 2017). In the Aadhaar-based PDS, recipients authenticate biometrically at the local ration shop, where their credentials are matched with their entitlement and the correct quantity of foodgrains, sugar and other rationed goods can be assigned. This is meant to build a double-accountability mechanism whereby the customer cannot fake entitlement and, at the same time, the ration dealer cannot ‘cheat’ the system by diverting goods to the private market for a profit. Studies of the Aadhaar-based PDS illuminate its anti-leakage rationale, in some cases advocating the technology as a means to streamline the delivery of social benefits (Gelb and Clark, 2013; Saini et al., 2017).
A trade-off, however, lies in the exclusionary effects of the Aadhaar-based PDS, widely documented at quantitative (cf. Drèze et al., 2017; Khera, 2017; Muralidharan et al., 2020) and qualitative levels (cf. Chaudhuri, 2020; Hundal et al., 2020; Masiero and Prakash, 2019). From a quantitative perspective, research in the eastern Indian state of Jharkhand found a 10% reduction in benefits for recipients (23% of the total) who had not linked Aadhaar credentials to benefits, with 2.8% receiving no benefits at all (Muralidharan et al., 2020). Researching the same state, Drèze et al. (2017) and Chaudhuri (2020) noted the uncertainties of biometric authentication for the poor, a finding that recurs across studies of the same scheme (cf. Hundal et al., 2020; Masiero, 2020). Research also problematises the assertion that biometrics ensure more effective delivery, since the system may allow disbursement to be recorded even when rations are not provided as per eligibility (Hundal et al., 2020).
Against this backdrop, COVID-19 has left recipients in extreme need of social protection (Drèze, 2020). India has a very large population of informal workers (over 80% according to the Employment-Unemployment Survey Report, 2015–2016, cited by Khera, 2020), and a large population of workers who travel on a seasonal basis to their work sites. In the aftermath of the nationwide lockdown announced by India’s Prime Minister Narendra Modi on 24 March 2020, numerous informal workers have lost their main source of income, while internal migrants have been stranded away from their regions, cities or towns of origin. In this situation of sudden crisis, the importance of extant social protection systems such as the PDS has been brought to the fore, and with it the seriousness of issues related to the exclusion of entitled users.
Operating throughout the lockdown, PDS ration shops have served households for whom poverty was combined with the inability to work. In the early days of the lockdown, Khera (2020) noted the importance of using the surplus stocks of foodgrains held by the Food Corporation of India (FCI) to help face the crisis, providing double rations to the most vulnerable households and expanding the scheme’s coverage where possible. In addition, suspension of the Aadhaar-based PDS was advocated on the grounds of avoiding transmission of the disease through biometric point-of-sale devices (Shrinivasa, 2020). Despite such measures, surveys in the aftermath of the lockdown (cf. Counterview, 2020) have revealed persistent insecurity among daily wagers, with additional strains affecting migrant workers stranded at their work sites (Suri and Mishra, 2020).
Before the pandemic, reports of hunger deaths associated with faulty Aadhaar identification of PDS users (Singh, 2019) highlighted the extreme consequences of exclusion errors. Against this backdrop, COVID-19 has highlighted the need for secure, free-from-uncertainty access to social protection, a need that becomes even more acute in crises that disproportionately affect the economically vulnerable. The case for biometric-free social protection systems goes beyond the risk of transmission from touchscreen machines, and points to the exclusionary effects perpetuated in the trade-off brought by digital identity. A trade-off that balances accuracy of identification against exclusions is pushed to the extreme in the crisis, where social protection needs to address the consequences of measures, such as lockdowns, that result in income reductions for the already vulnerable.
Forms of data injustice in digital social protection
Relying on Taylor’s (2017) notion of data justice as ‘fairness in the way people are made visible, represented and treated as a result of their production of digital data’, Masiero and Das (2019) focus on data injustices in anti-poverty schemes integrated with digital identity systems. In their work on India’s PDS, a techno-rational view captures advocacy of the ‘positive effects’ stemming from digital identity, such as the unproblematic delivery of goods to recognised users and the inability of ration dealers to divert goods. In juxtaposition to this view, however, three forms of injustice – legal, informational and design-related – are identified in digital social protection. All three forms may be relevant when observing the response of social protection systems to the pandemic.
From a legal perspective, Masiero and Das (2019) note that digital social protection makes entitlement conditional on registration in digital identity schemes, subordinating universal rights to enrolment. The legal nature of the injustice lies in the shift of rights from fundamental to conditional on registration, even in systems claimed to be ‘free and voluntary’ like India’s Aadhaar. Applied to the case of the PDS, the problem lies in the shift of the right to food from an essential, legally recognised one to one conditional on Aadhaar enrolment. Studies show that in states operating an Aadhaar-based PDS, Aadhaar registration (and well-functioning authentication) is essential to collect rations, which are not disbursed without it (Chaudhuri, 2020; Hundal et al., 2020; Masiero and Prakash, 2019).
As social protection systems face the COVID-19 emergency, the dangers of legal data injustice need to be watched for. In situations of crisis, subjects who are unable to authenticate may suffer exclusion from essential services, unless exceptions of some form are made. Suspension of programmes such as the Aadhaar-based PDS limits this risk, delinking entitlement from a digital authentication system that fails many users (Muralidharan et al., 2020). This is to be balanced with alternatives to biometric identification, such as the digital, but non-Aadhaar-based, smart card system operating in the Indian state of Tamil Nadu (Hundal et al., 2020; Khera, 2018).
Secondly, Masiero and Das (2019) refer to informational data injustice to denote situations in which users are not fully informed of how their data are used by the agencies handling them. During COVID-19, social welfare schemes have been deployed to identify vulnerable subjects: in Colombia’s Solidarity Income programme, data from different government databases were cross-referenced to identify households in need (UNDP, 2020: 27). To do so, information was combined from extant data repositories, ultimately charging an algorithm with the decision on which households were to receive the subsidy. But in this machine-led process of entitlement assignation, how the information was combined remains obscure, leaving households uncertain about their subsidy status and about the data used to determine it (López, 2020).
Similarly, in Peru two cash transfer programmes were launched as part of the response to COVID-19 in March 2020. The first, ‘Yo me quedo en casa’, involved a transfer of about US$108 to 2.7 million households in poverty; the second, ‘Bono Independiente’, granted the same amount to an additional 780,000 households with self-employed workers (World Bank, 2020). Yet as noted by Cerna Aragon (2020), the handling of information by the benefit-assigning system is opaque, with poverty status assigned through the cross-checking of databases such as the Census, the property registry and electricity consumption records. While this information was available to the government before, the way it is combined to determine eligibility is again obscure, with ‘incertitude being the rule’ when it comes to household classification and the assignation of entitlements (Cerna Aragon, 2020).
Another, related problem pertains to the ultimate policy goals that the datafication of social protection systems serves. In the case of the PDS, Masiero and Das (2019) note how Aadhaar leads the shift to a new system based on cash transfers, about which recipients express multiple concerns. In the cases of Colombia and Peru discussed here, it is not clear whether current measures of data cross-checking will remain emergency-oriented or will transform benefits in the long run. Ultimately, users’ unawareness of the final goals of datafication leads them to give out data without knowing the long-term objectives of their registration.
Finally, Masiero and Das (2019) report on a design-related form of data injustice, which results from the misalignment of datafied social protection with the needs of users. Such disjunctures, which can be seen as ‘design-reality gaps’ in the terms of Heeks (2002), emerge when the design of digital social protection does not match the real needs lived by users. The trade-off of accurate identification against exclusions of the needy is a case in point: as inclusion errors are cautioned against, the risk of exclusion remains. In a situation of livelihood risk heightened by exclusions, a programme that combats inclusion errors but not exclusion errors does not align with users’ priorities, an issue whose consequences are brought to the extreme in emergencies.
The three forms of data injustice – legal, informational and design-related – identified here may hence need to be closely monitored in the post-pandemic scenario. Exclusion of users from essential services, lacking or erroneous information on data usage, and gaps between user needs and the design of digital social protection are issues that had already surfaced, and that need attention lest they be designed into long-term social protection responses. While these injustices emerged before the pandemic, COVID-19 has brought their visibility to the fore.
A shift in perspective: From targeting to long-term inclusion
The economic impacts of COVID-19 have been non-neutral (Milan and Treré, 2020), with especially dire effects on vulnerable groups such as informal workers (Drèze, 2020) and gig economy workers (Krishna, 2020). In a crisis that systematically hits the already-vulnerable, narrowly targeted social protection systems can be especially problematic, and the exclusionary effects of digital identity have revealed its limits in sustaining social protection in situations of emergency.
But at the same time, the crisis prompts reflection on ‘how else’ social protection policy can be enacted, as an alternative to a setting that equates narrow targeting of schemes with efficacy. The debate on the ethicality of targeting (Devereux, 2016) acquires new relevance here: if, on the one hand, identifying the ‘most needful’ is a common approach in emergencies, on the other the limits of extreme targeting have become visible during COVID-19. By enforcing narrow targeting, digital identity may obscure the question of whether targeting is really the best alternative, especially when compared to universal social protection systems.
The case of India’s PDS is again relevant here. In response to widespread leakage, state-level measures of computerisation began emerging in the early 2000s, with the goal of restricting the possibility of diversion. In the same period, Sen and Himanshu (2011) argued that universal provision was not only desirable, but ‘a more efficient and feasible way to ensure food security’, and the only approach consistent with the universality of the right to food. With the National Food Security Act (NFSA) approved in 2013, PDS coverage was extended to about two-thirds of the population, ensuring broad coverage and low issue prices across the country (Drèze and Khera, 2017). This was the result not of targeting, but of the very opposite measure: guaranteed entitlement expansion.
India’s NFSA shows an alternative route to narrow targeting, drawing benefits from greater coverage and consequent reductions in leakage. This invites augmenting the data justice debate with the question of whether digital technology, rather than being tied to the reinforcement of targeting, may be tailored towards inclusive systems that draw their effectiveness and resilience under crisis precisely from the expansion of coverage.
Conclusion
As part of the Viral Data symposium initiated by this journal, Milan (2020) argues that numeric and statistical categorisations have been invariably central in national responses to COVID-19. Shelton (2020) similarly notes that data mismanagement by national authorities has blurred our data-based understanding of the pandemic, and especially of its effects on vulnerable groups invisible to mainstream ‘counting’ techniques (Milan and Treré, 2020). This commentary adds to the debate from the angle of digital social protection, highlighting three forms of data injustice that, with the new strains imposed by COVID-19, will need to be watched for in a post-pandemic society.
Originally proposed before COVID-19 (Masiero and Das, 2019), the taxonomy of injustices adopted here intersects with the data justice implications of responses to the pandemic (Taylor et al., 2020). While the trade-off (effectiveness vs. exclusions) of biometric social protection is now especially problematic, its consequences for the excluded go beyond the ongoing crisis, leading us to consider the alternative offered by universal social protection. In the light of the pandemic, questions on the ethicality of targeting (Devereux, 2016) have acquired special relevance, and with them the ability of technology to intertwine with universal, rather than targeted, social protection systems.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
