Abstract
Vexing political questions of power, inequality and coloniality permeate the tech sector and its growing use of global ‘virtual’ assembly lines, which penetrate even refugee camps in efforts to extract value. As a response, tech companies have been expanding non-commercial activities within a presumed framework of humanitarianism, in part to outweigh the negative implications of unjust business practices often characterised by third-party avoidance of responsibility. This commentary focuses on tech companies’ engagement with people in the Global South – not as recipients of tech beneficence – but as labourers who make tech possible. First, we document why companies are brought into humanitarian crises, and then we briefly chart examples of the practices of tech companies in the Global South. Then, we argue that ‘tech for good’, often presumed altruistic, instead reproduces an expansive history of questionable corporate social responsibility efforts that sustain inequalities more than assuage them. We conclude by reflecting on the impact of commodifying compassion for humanitarian helping and argue that tech companies should stop trying to ‘help’ through self-perceived altruistic activities. Instead, corporations should focus on remaking their core business practices in an image of justice, protection, and equal value creation, particularly in contexts characterised by vulnerability.
This article is part of a special theme on Commodifying Compassion in the Digital Age: The Promise and Perils of Tech for Good. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/collections/commodifying_compassion
Tech moguls and their companies have emerged as the grand philanthrocapitalists of our time, comparable only to the early twentieth century American industrialists who ushered in modern philanthropy itself. Today, this transnational ‘helping’ has a new humanitarian business model (Barnett, 2022) in which ‘helping’ itself is now a branded commodity and businesses use the possibility of humanitarianism to provide a ‘halo effect’ for their brands, deflecting attention from other less desirable practices (Richey, 2019). In the tech sector, this has impelled ambitions of tech for good – a catch-all term to describe forms of philanthrocapitalism (Bishop and Green, 2008; McGoey, 2016) where tech companies combine their products, expertise and for-profit business operations with aims to ‘do good’ (Henriksen and Richey, 2022; Madianou, 2021). ‘Tech for good’ also refers to social enterprises within the tech sector (Bughin et al., 2017), when programmers use their coding skills to solve social problems (Roberson, 2018), or when interests are mobilised to mitigate risks and reduce harms of digital technology (Dillet, 2018).
While tech is often framed vis-à-vis an imagined ability to transform societies and inspire progress, the reality is also one in which (predominantly Global North) tech companies are perpetuating deeply troubling practices – especially labour exploitation and precarity in the Global South (Fejerskov, 2022). Even when designed or intended ‘for good’, tech companies ‘rework, amplify and justify the extractive logics of the past’ through what Madianou terms ‘technocolonialism’ (2021, 2022). This challenge lies at the heart of a politics of helping that describes the unfair distribution of value, both ethical and economic, between companies and organisations that attempt to help and those who are supposed to benefit from said helping. Thus, vexing questions of power, inequality, and coloniality permeate both the tech sector and the humanitarian sector, and these become amplified through the sectors’ increasing intersection.
In this short commentary, we focus on tech companies (for-profit, social enterprises, and those in between) and argue that rather than enacting transformative change, even tech for good often ends up perpetuating a longstanding pattern of integrative or instrumental (Garriga and Melé, 2013) corporate social responsibility (CSR) initiatives that prioritise wealth creation over social responsibility. Scholars, consumers, activists, and government officials must look past the smokescreen of ‘helping’ and ‘goodness’ to understand the complex and diverse labour relationships between people in the Global South and technology, and we must push for tech companies to remake core business practices in an image of equal value creation, protection and justice. To support our position, we first explore why tech companies are helping, before spotlighting some of the questionable tech business practices and ending on the potential ways of addressing the amplification of inequalities characterising tech companies’ current engagement with people in the Global South.
Why are tech companies ‘helping’?
Businesses attain newfound power and influence by engaging in humanitarian endeavours, a domain traditionally dominated by NGOs, non-profits and governments (Fejerskov et al., 2017; Kolk and Lenfant, 2012; Liverman, 2018). They are increasingly involved in shaping the discourse surrounding humanitarian issues and their resolutions, engaging in cross-sector partnerships (Henriksen, 2023), celebrity strategic partnerships (Budabin and Richey, 2021), cause-related marketing (Kipp and Hawkins, 2019) and iterations of ‘brand aid’ (Richey and Ponte, 2011). Through their active participation, businesses define the concept of ‘doing good’. In ethical discussions, business activities are often portrayed as virtuous and stripped of political implications (Rajak, 2011). Consequently, recent scholarly work emphasises the importance of scrutinising how narratives construct humanitarian problems and solutions, especially as partnerships with businesses are increasingly portrayed as the universal and exclusive means of doing good (Olwig, 2021; Richey et al., 2021).
The institutional context has been challenged by greater need (the global climate crisis and ‘wicked problems’) and less public capacity to meet these needs. The global frameworks for transnational helping encourage leveraging tech companies’ core business practices and their commitments towards global helping. The United Nations (UN) Sustainable Development Goals and the UN Global Compact have embraced tech companies as the providers of innovation in aid (Fejerskov, 2022; Müller and Sou, 2019; Sandvik, 2017). Humanitarian organisations benefit from the politics of helping that advance inequalities of ethical capital, so it is not surprising that businesses also profit, literally, from these relationships (Richey, 2024).
Tech companies’ business practices exploit the Global South
Tech companies’ commercial activities have negative impacts such as the dispossession of indigenous people, human rights abuses, and precarious working conditions for people in the Global South. Still, these companies have been expanding non-commercial activities within a presumed framework of humanitarianism, in part to outweigh the negative implications of unjust business practices in the Global South. Tech's growing use of global ‘virtual’ assembly lines penetrates even refugee camps in efforts to extract value (see Madianou, 2019; Pascucci, 2019). Poor working conditions and human rights violations of invisible, or ‘ghost’, workers are a steady part of these global virtual assembly lines fuelling technological innovation and a decarbonised future (see Bilic, 2016), or as in the case of AI, occurring across data cycles of preparation (data generation), modelling (coders), and verification (output and output check). Cheap labour makes tech possible: 75% of Tesla's data used in the training of its algorithms for autonomous driving was labelled by Venezuelans, facilitated by hyperinflation, devaluation, and a complete lack of jobs in a country where more than 90% of the population live in poverty (Hao and Hernandez, 2022).
Additionally, electric vehicle companies such as Tesla are currently fuelling a gold rush targeting both lithium and cobalt. In the ‘lithium triangle’ straddling Argentina, Chile and Bolivia, which holds 85% of the world's reserves, extraction is causing dispossession of indigenous people (Hernandez and Newell, 2023). A similar picture emerges for other minerals such as cobalt and copper used for green technologies. Congolese people suffer human rights abuses, forced evictions, and violence to maintain cobalt production (see Calvao et al., 2021). While most of the large-scale industrial mines are controlled by companies such as Chinese Zijin Mining, China Molybdenum (CMOC) or Swiss Glencore, sometimes in cooperation with the state-owned Gecamines, much illegal mining is governed by local militia groups.
In the digital space, including ‘back-end’ AI, companies annotating data for artificial intelligence incorporate populations from refugee camps into their virtual assembly lines. For example, in camps such as Dadaab in Kenya or Shatila in Lebanon, refugees are increasingly given the opportunity to conduct ‘clickwork’ such as picture annotation (labelling pictures to teach algorithms how to tell cats from cattle), adjusting their sleep patterns to conduct the work while the parent companies on the ‘other’ side of the globe are awake (Jones, 2021). Microemployment has spread over the past decade (see Hackl, 2022), yet provides no rights, no security, and abysmal pay. Indian content outsourcing companies refer to themselves as the ‘gatekeepers of the internet’ for all their moderation work on behalf of Western tech companies, with 90% of the workers being graduates or postgraduates with engineering and computer science skills (ILO, 2021).
This is not just a Western problem, of course. Chinese companies such as ZTE and, to a lesser degree, Huawei have been providing digital infrastructure backbones that give cynical leaders the instruments to surveil, control, and punish their populations (see Jiang et al., 2016). Israeli companies have been providing the same elites with digital technologies to foster disinformation, hack political opponents and sabotage elections. Team Jorge operatives working out of Israel told The Guardian that they had completed 33 presidential-level campaigns to influence elections, two-thirds of which were in Africa (Kirchgaessner et al., 2023). Not to speak of the impact that Israel's attacks on Gaza have had on a tech sector that formed the fast-growing part of the Palestinian economy. These practices are not unexpected uses of technologies meant for other purposes but part of companies’ core business models.
‘Tech for good’ or technocolonial responses to humanitarian crises?
Despite the well-documented consequences of the problematic practices outlined above, they are now being applied in humanitarian settings in which ‘Tech for Good’ justifies a variety of technocolonial practices, from ‘partnerships’ to cheap labour. Henriksen and Richey (2022) show how Google's AI-Impact Challenge defines ‘tech for good’ as something that should be risk-taking and accelerated, not AI that responds to humanitarian beneficiaries’ priorities. Marketing Google's already existing products and experts took precedence over any disinterested identification of the greatest AI needs in the Global South.
Perhaps the most controversial example comes from Palantir, the predictive policing and surveillance company, known for driving the World Food Programme's (WFP) digital transformation while exposing vulnerable humanitarian populations’ biometric data to control by an American security company (Martin et al., 2023). Although ‘the WFP celebrated the partnership as tech-for-good, arguing that the relationship was without concern because Palantir would not allow access to the information of beneficiaries,’ data reuse scholarship warns that this could not be assured, given that ‘the value of data reuse is deeply enmeshed with the value of recalibrated and repurposed algorithms’ (Thylstrup et al., 2022: 3, 7).
Also, under the name of ‘impact sourcing’ or ‘socially responsible sourcing’, companies such as Sama are utilising vulnerable communities under the pretext of providing them with skills during humanitarian crises (Muldoon et al., 2023). Together with the WFP, Sama maintained a project bluntly called ‘Tech for Food’, to help ‘young people affected by conflict in the Middle East reach their full potential through a career in the digital economy’, connecting them to demand from the USA and Europe. As UNHCR explains, ‘with remote work becoming the norm across many industries, online freelancing offers isolated communities in places like Iraq and Lebanon new opportunities to earn an income and build a brighter future’ (UNHCR, 2023). These ‘skills’, however, mainly qualify refugees to conduct simple and repetitive services such as data entry or picture annotation.
In Kenya, content moderators have recently been part of struggles against employers such as Sama and Meta. Their precarious work reviewing and moderating posts on Instagram, Facebook, and other platforms involves everything from birthdays to beheadings, rape, and murder, exacting an immense emotional toll (see De Gregorio and Stremlau, 2023). Hundreds of Kenyan moderators filed a case with the Kenya Human Rights Commission. Despite a crowdfunding campaign to support them, they are unlikely to win against Sama. Many moderators, recruited from countries like South Africa and Nigeria, face problems of expired work visas and no options for other employment (Business and Human Rights Resource Centre, 2024).
What needs to change?
What is needed from tech companies to avoid exploitation and unequal extraction of value through datafication? Companies exploit vulnerable populations and situations such as humanitarian crises and refugee camps because there is value to be extracted from helping them and cheap labour to be employed. If even incarcerated women can be commodified to increase the value of prison-made ‘helping’ products (Richey, 2024), then there is no limit to the dehumanising sale of big data that can, literally, never be traced to its producers (Thylstrup et al., 2022). Vast amounts of data are needed to maintain and develop digital technologies. This is data that needs to be collected or extracted, processed as data sets, and presented in a form that enables, for example, the training of generative AI, large language models, and the like. Data extractivism (see Couldry and Mejias, 2019; Fejerskov et al., 2024; Sadowski, 2019) exists mainly because of its profitability – because it allows for the cutting of costs in virtual assembly lines. These costs are cut at the expense of people in the Global South.
In short, tech companies need to change their business practices, and when they do not, governments need to regulate them. While regulation discussions are fundamentally challenged by the political disruption brought by emerging technology, much of what we single out here is not the result of overly complex or black-boxed algorithms, but of inherent mistreatment of workers in the Global South. At the same time, the growth in humanitarian-tech collaborations is as much a pull by the humanitarians as it is a push by tech companies. To avoid risks of unfair treatment, inadequate compensation, and abuse of vulnerable populations, humanitarian actors must critically review engagements with technology companies and startups to refocus their efforts at ‘innovation’. Virtually all major UN organisations, NGOs, or development organisations today employ innovation departments, teams or labs to ‘stay in contact with Silicon Valley’ and ‘change business as usual’ through notions of accelerating, scaling, and experimenting. Instead of the current focus on material technologies and datafication, innovation should focus on systemic questions of rebalancing power relations, increasing localisation, and ensuring that appropriate relief responds to actual needs.
In conclusion, this commentary on the politics of helping has argued that tech companies should stop trying to ‘help’ through self-perceived altruistic activities. Instead, they should focus on remaking their core business practices in an image of justice, protection, and equal value creation, particularly in contexts characterised by vulnerability. While the inherent complexities of emerging tech may render it incomprehensible to many, the business practices through which these technologies are pushed in their commercialisation rest on well-known foundations of value extraction and exploitation.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This work was supported by the Ministry of Foreign Affairs of Denmark (grant number 18-12-CBS).
