Abstract
In a period of heightened societal expectations and investment in artificial intelligence (AI) technologies, this article directs attention towards the risks and concerns associated with relying on AI to address climate change. These include: AI's climate consumptionism in the form of its direct and indirect environmental footprint; AI's legitimation of techno-centric thinking within climate mitigation and adaptation initiatives; AI's entanglement in repressive practices of surveillance against climate activists and migrants; and AI's influence over public discourse in ways that undermine societal responsiveness to climate change. At a time when these risks are garnering greater public attention, this article assesses the promise and perils of rights-based approaches for addressing them. To this end, the article identifies three challenges that may inhibit the value of rights-based approaches at the intersection of climate and AI governance: first, the challenge of concretisation, encompassing difficulties translating the open-textured vocabulary of rights into more concrete operational standards that are attuned to the particularities of AI technologies; second, the challenge of individualism, encompassing difficulties of relying on the predominantly individualised discourse of rights to address the collective and societal concerns to which climate applications of AI give rise; and finally, the challenge of marketised managerialism, encompassing the challenge of guarding against corporate capture given the dominance of Big Tech companies in global AI supply chains. The article concludes that harnessing rights-based approaches requires being candid about their limitations, uncertainties, and perils – acknowledging rather than smoothing over the complexities and weaknesses of rights as a vocabulary for confronting risks and concerns at the intersection of climate change and AI.
INTRODUCTION
In 2023, Volker Türk, the UN High Commissioner for Human Rights, delivered a speech emphasising the urgency of addressing ‘the catastrophic impacts of climate change, pollution and biodiversity loss’ as well as ‘the real-life impacts of AI’. 1 The speech is illustrative of the attention increasingly directed towards the twin challenges posed by climate change and AI. At the same time, the speech is also reflective of the siloed approach that, until recently, tended to be adopted when discussing these challenges – neglecting the ways in which climate change and AI are, in significant respects, interconnected.
Where intersections of climate change and AI have been acknowledged, emphasis has typically been placed on AI's promise to enhance societal responsiveness to the climate crisis – with particular weight placed on AI's potential to help decision-makers more effectively predict, mitigate, and adapt to climate change. 2 Yet, there is also a range of interconnected risks and concerns associated with relying on AI technologies to address the climate crisis. 3 In terms of climate consumption, AI technologies extract significant resources and energy in their production and fuel a culture of consumptionism in their application. Within climate mitigation and adaptation initiatives, AI technologies may legitimate techno-centric solutions that fail to account for social inequalities and hierarchies in their contexts of implementation. AI technologies may also become entangled in climate surveillance practices that seek to repress climate activists and deter climate-induced migrants at the expense of addressing the underlying causes of the climate crisis. Finally, within climate discourse, AI technologies may contribute to the distribution of climate mis/disinformation whilst concentrating power in the hands of a small number of Big Tech firms, which may use their outsize discursive influence to undermine, or at the very least neglect, societal responsiveness to the climate crisis in favour of their corporate interests.
The risks and concerns that arise from relying on AI technologies to address climate change have already been identified in a substantial body of literature. However, scholarship exploring the regulation of those risks within the legal domain remains relatively scarce. 4 This article contributes to this nascent area of legal scholarship by exploring the promise and perils of rights-based approaches for addressing such risks. Importantly, while policymaking concerning climate change and AI remains somewhat compartmentalised, efforts to address each of these challenges have increasingly been informed by significant enthusiasm for rights-based approaches. In the climate governance context, the climate crisis has been characterised as a human rights crisis while a wave of climate litigation has been informed and underpinned by human rights law. 5 In the AI governance context, the European Union's (EU) regulatory approach to digital regulation has been characterised as ‘rights-driven’, 6 informed and guided by the rights elaborated in the EU Charter of Fundamental Rights, as well as wider developments in international and regional human rights law.
Against this background, this article asks whether rights-based approaches are well-equipped to address the different categories of risk that arise at the intersection of climate change and AI. The article's central claim is that if rights-based approaches are to contribute to addressing risks at the intersection of climate change and AI, they must evolve to address at least three challenges: first, the challenge of translating the open-textured vocabulary of rights into more concrete operational standards that are attuned to the particularities of AI technologies (the challenge of concretisation); second, the challenge of applying the predominantly individualised discourse of rights to the collective and societal concerns to which climate applications of AI give rise (the challenge of individualism); and finally, the challenge of guarding against corporate capture given the dominance of Big Tech companies in global AI supply chains (the challenge of marketised managerialism). Reflecting on these challenges reveals that it is possible to adapt rights-based approaches to a certain extent. However, harnessing the promise of rights-based approaches requires being candid about their limitations, uncertainties, and perils – acknowledging rather than smoothing over the complexities and weaknesses of rights as a vocabulary for addressing challenges at the intersection of climate change and AI.
Methodologically, this article relies on a doctrinal approach to identify and analyse the scope and content of rights-based approaches within relevant legal instruments. The article uses the term ‘rights-based approaches’ to encompass the diversity of frameworks that have been established in international and regional human rights law, the United Nations Guiding Principles on Business and Human Rights (UNGPs), and supra-national legal instruments of the EU including its Charter of Fundamental Rights together with regulations and directives founded on a commitment to fundamental rights within its legal order. This doctrinal method is complemented by drawing on critical legal scholarship on the promise and perils of rights as a vocabulary of governance to identify the priorities, limits, and blind spots of rights-based approaches for addressing risks at the intersection of climate change and AI. The article is also informed by literature from disciplines beyond the legal field, including digital geography, political ecology, and media studies, to identify and navigate the risks of relying on AI technologies to address the climate crisis.
The article proceeds as follows. After outlining some of the most significant risks and concerns at the intersection of climate change and AI (2), the article turns to critically evaluate the capacity of rights-based approaches for addressing such risks (3), before offering some concluding remarks (4).
THE RISKS OF AI FOR ADDRESSING CLIMATE CHANGE
To identify the risks of relying on AI for addressing climate change, it is important to begin by framing what these concepts encompass. It is well-established that addressing climate change requires a frame that extends beyond its atmospheric dimensions to encompass the distributive inequalities that underpin the climate crisis and societal responses to it. As Aarti Gupta explains, such inequalities are not only visible in terms of the impacts of climate change but also constitute a significant driver of the climate crisis, rooted in ‘the historical trajectories of colonialism and extractivism between and within states that have fuelled cycles of poverty and environmental degradation’. 7 Similarly, while AI often conjures an image of a technical toolbox of algorithms, data, and cloud architectures, confronting the risks posed by AI technologies requires a frame that captures the human (labour) and material (resource) dimensions of their production. 8 As Marie-Therese Png explains, adopting such a lens not only enables ‘a more comprehensive and whole-systems view of harms’, but also directs attention to harms related to natural resource exploitation that are ‘more visible at the peripheries of capitalism’. 9 At the same time, it is also important to remember that behind AI technologies are networks of State and non-State actors who are responsible for their design, development and deployment. 10
Bearing these frames in mind, it is possible to identify a range of risks and concerns related to at least four overlapping and interrelated intersections of climate change and AI technologies. 11
First, AI technologies are climate consumers that leave in their wake significant greenhouse gas (GHG) footprints. This occurs through both material extractivism across the entire AI compute lifecycle, from hardware production to e-waste disposal, and data extractivism in the service of unsustainable consumerism and the provision of assistance to oil and gas companies.
Although opacity remains a challenge in this context, a number of studies have begun to reveal the extent of the GHG impacts of AI technologies. According to a 2022 expert study, for example, the GHG impacts of AI technologies encompass three categories: computing-related impacts, including the GHG emissions from operational energy use, as well as materials extraction, manufacturing, transportation and the end-of-life phase related to data centres, data transmission networks and connected devices, which accounted for approximately 1.4% of global GHG emissions in 2020; 12 immediate application impacts, including the use of AI ‘to accelerate oil and gas exploration and extraction by decreasing production costs and boosting reserves’ and ‘to help manage livestock at scale, which can increase cattle farming’; 13 and system-level impacts, including rebound effects where AI increases the efficiency of a service (fuel efficiency in autonomous vehicles, for example), thereby lowering the cost and triggering increased consumption of the same (or another) service (higher rates of individualised vehicle travel, for example). 14
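To make the rebound mechanism concrete, consider a simple worked example (the figures are hypothetical and chosen purely for exposition; they are not drawn from the studies cited above). Suppose an AI system cuts the fuel consumed per vehicle-kilometre by 20%, but the resulting fall in travel costs induces 30% more kilometres travelled. Total fuel consumption then rises rather than falls:

(1 − 0.20) × (1 + 0.30) = 0.80 × 1.30 = 1.04

a net increase of 4% despite the per-unit efficiency gain. Whether an efficiency improvement reduces aggregate emissions thus depends on how strongly demand responds to the lower cost, not on the efficiency gain alone.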
Significantly, an expert study commissioned by the OECD recently concluded that direct environmental impacts stemming from, for example, the physical extraction and consumption of natural resources to build AI hardware, the energy and water consumption of training and deploying AI models, and the recycling or disposal of electronic waste, have been ‘most often negative’. 15 At the same time, the study also concluded that indirect environmental impacts stemming from particular deployments of AI applications have sometimes also proven detrimental, for example, through nurturing ‘unsustainable changes in consumption patterns’. 16 Moreover, these detrimental impacts tend to be unevenly distributed, ‘with the Global North enjoying the technological benefits and wealth accumulation from AI while the Global South is subjected to conditions of exploitation, environmental degradation, and premature death’. 17
Second, AI technologies have also been relied upon within climate mitigation and adaptation initiatives. This intersection concerns efforts to harness AI to reduce GHG emissions and improve the resilience of communities to the adverse effects of climate change. Expert studies have begun to elaborate a range of settings in which AI technologies may enable or accelerate climate mitigation and adaptation projects, spanning areas as diverse as electricity systems, buildings and cities, transportation, heavy industry and manufacturing, agriculture, and forestry. 18 Examples include the use of AI for: data mining and remote sensing to generate usable insights for policymaking and systems planning, for example by tracking deforestation; accelerating experimentation within scientific discoveries as part of research and development for low-carbon technologies; learning from time series to enhance forecasting, for example of crop yields or transportation demands; optimising the efficiency of complex systems, such as electricity grids; accelerating time-intensive simulations, for example for climate modelling; and enhancing predictive maintenance, for example to improve climate resilience. 19
As critics have noted, however, the AI industry has generally placed more emphasis on ‘what might be than what actually is’ when discussing AI for sustainability initiatives. 20 Beyond concerns already discussed related to AI's direct and indirect environmental footprint, whether climate mitigation and adaptation projects achieve their aims tends to be contingent on the contextual circumstances of their design, development and deployment. 21 Eric Nost and Emma Colven, for example, suggest that ‘AI for Good’ initiatives risk (re-)producing social inequalities and injustices by neglecting ‘questions of social vulnerability and political economic structures’ and erasing ‘the important socio-spatial topographies that research on adaptation, vulnerability and climate justice has so extensively documented’. 22 Specific risks in such contexts include: maladaptation, whereby AI technologies are transferred from one context to another without accounting for local conditions and hierarchies that vary between different societal settings; 23 co-option, whereby AI technologies are re-directed to align with the commercial interests of corporations and/or the security interests of governments at the expense of addressing the climate needs of communities; 24 data extractivism, including concerns related to privacy, security, transparency, and bias that arise from reliance on AI technologies; 25 and techno-centrism, whereby AI technologies are used for optimisation at the expense of policies that aim to more structurally transform how societies respond to climate change. 26
Third, AI technologies have been harnessed to inform climate policy and action through data-driven infrastructures of surveillance. To a certain extent, such practices overlap with the preceding intersection. This is the case in particular where algorithms and machine learning have been relied upon to extract patterns from data from climate monitoring infrastructures as part of climate mitigation and adaptation initiatives. 27 Yet, this intersection extends beyond such contexts. It encompasses situations where States have relied on AI-based surveillance technologies, including facial recognition software and zero-click forms of spyware, to monitor and undermine the work of human rights defenders, 28 a trend that poses a threat to the ongoing activism of environmental and climate campaigners. 29 In recent years, many States have also directed significant resources towards the construction of ‘Climate Walls’, demonstrating greater concern for deterring climate-induced migration than addressing its root causes. 30 For this purpose, States have begun experimenting with various forms of AI-based technologies as part of efforts to enhance the security of their borders, ranging from data-driven surveillance to automated forms of decision-making. 31 These technologies have given rise to concerns over privacy, discrimination, and the life, liberty and security of persons at the level of the individual, as well as concerns related to chilling effects on public discourse and the commodification of the public sphere at the level of society. 32
Finally, AI technologies have also become entangled in shaping climate discourse. This can be seen primarily from two perspectives. First, power within global AI value chains is concentrated in the hands of a small number of market-dominant Big Tech companies, affording them outsize influence over climate change discourse. 33 This discursive influence is used to advance narratives that concern the other intersections of climate change and AI discussed above. Examples include framing AI technologies as necessary and inevitable tools in the fight against climate change to the relative neglect of, or as part of efforts to greenwash, AI's climate consumptionism; promoting a techno-fix mindset that focuses attention on ‘AI for sustainability’ initiatives at the expense of more structural societal transformations for addressing climate change; and enabling the securitisation of climate policy through collaborations with States. 34 Big Tech firms have also invested substantially in lobbying for rules and regulations that protect their commercial interests. Notably, despite making ambitious public sustainability pledges in recent years, Big Tech firms have tended to remain silent on, or actively oppose, major climate policy initiatives. 35 At the same time, it has been revealed that several firms have provided support to climate deniers and organisations that have campaigned against climate legislation. 36
Beyond their lobbying power, Big Tech firms have also accumulated forms of ‘structural power’ that enable them to shape and circumscribe how different actors – including States, corporations, civil society groups, and the general public – interact with and relate to one another through their online platforms. 37 A particular concern in this regard is the role of AI in catalysing the creation and dissemination of climate mis/disinformation. 38 In terms of creation, generative AI tools, for example, are lowering barriers to the creation of AI-generated text, photos, audio and videos – making it cheaper and easier to generate mis/disinformation at scale. 39 In terms of dissemination, social media algorithms have tended to enhance the spread of mis/disinformation by prioritising more inflammatory content that heightens user engagement – whether through the personalisation of organic content (including not only climate denial, but also distract and delay climate narratives) or the microtargeting of paid content (including oil and gas greenwashing ads). 40 In recent years, various forms of online harassment directed towards those engaged in climate advocacy have also increased. 41 In addressing these concerns, platform systems of speech governance, which themselves rely to a significant extent on algorithmic forms of moderation, have often fallen short. 42
As this brief overview reveals, there is a diversity of risks and concerns associated with relying on AI technologies to address the climate crisis. The question, therefore, is not whether but how policymakers should confront these risks going forward.
THE PROMISE AND PERILS OF RIGHTS-BASED APPROACHES
One set of frameworks that may be relied upon to address the risks and concerns that arise at the intersection of climate change and AI technologies are rights-based approaches. For the purpose of this article, rights-based approaches are understood as argumentative frameworks, which encompass substantive standards that actors must adhere to, processes that ensure those standards are met, and accountability mechanisms and remedies for violations of those standards. 43
In terms of substantive standards, rights-based approaches establish an evolving set of red-lines and safeguards relating to some of the most pressing concerns raised by AI technologies, including privacy, freedom of expression, and non-discrimination. 44 Importantly, these concepts have been subject to debate and contestation by a diversity of stakeholders in the human rights community, including social movements, civil society groups, human rights institutions, and States. The result, as Anna Su explains, is that ‘the universe of potential definitions is circumscribed to an intelligible degree (…) [whilst remaining] malleable and flexible enough’ to be adaptable to different audiences and contexts of application. 45 The relative clarity and flexibility of these substantive standards enables them to function and evolve as a vocabulary of AI governance in ways that offer guidance whilst remaining sensitive to the particularities of different societal settings. 46
The substantive standards of rights-based approaches are given expression through frameworks that are designed to manage the interaction between different rights and interests – encompassing not only thresholds for assessing when rights have been interfered with, but also a series of tests for determining when rights may be restricted. As Yeung, Howes and Pogrebna explain, rights-based approaches elaborate ‘a well-established analytical framework through which tension and conflict between rights, and between rights and collective interests of considerable importance in democratic societies, are resolved in specific cases through the application of a structured form of reasoned evaluation’. 47 The tripartite tests of legality, legitimacy and necessity, in particular, are designed to ensure a meaningful and transparent process for evaluating restrictions to qualified rights. This includes assessing the adequacy of the legal basis and framework within which rights are restricted, as well as whether measures less intrusive to the rights interfered with could be adopted to achieve a measure's legitimate aim. 48 There is intrinsic value in this process of reasoned evaluation, which requires actors to transparently explain and contest the interaction between different rights and interests with due consideration and sensitivity to their particular contexts of application. 49
Importantly, rights-based approaches encompass standards for both State and private actors, including transnational businesses. At the international level, for example, human rights law requires States to adhere to their duties to respect, protect and fulfil their human rights obligations, while the UNGPs establish a non-binding responsibility to respect human rights based on a global standard of expected conduct. 50 Notably, these standards are not only intended to demarcate the outer limits of State and corporate power, but also establish obligations for States to promote collective values, such as media pluralism and diversity. 51
Beyond substantive standards, rights-based approaches also encompass a series of processes and remedies. At the international level, for example, actors are required to ‘put in place an accountability framework that prevents violations from taking place, establishes monitoring and oversight mechanisms as safeguards, and provides a means to access justice for individuals and groups who claim that their rights have been violated’. 52 Importantly, the interdependent components of this accountability framework apply on an ongoing basis across the full AI life cycle. 53 For States, this means complying with their duties to respect, protect and fulfil by adopting a smart mix of measures to ensure their own acts and omissions are human rights compliant as well as exercising due diligence to prevent and address human rights harms caused by private actors. For businesses, this means adhering to their corporate responsibility to respect by putting in place processes to identify, prevent, mitigate and account for any adverse human rights impacts they may cause or contribute to through their own activities, or which may be directly linked to their operations, products or services by their business relationships. 54
This accountability framework is designed to ensure that individuals and communities most affected by the design, development, and deployment of AI technologies are meaningfully consulted and afforded the opportunity to effectively access both relevant information and remedies. 55 In this regard, it is notable that rights-based approaches are embedded within an existing network of institutional mechanisms that have developed over time to monitor, maintain oversight, and provide redress at domestic, regional and international levels. 56 At the international level, for example, these mechanisms include a diverse set of actors, ranging from social movements, civil society groups and judicial institutions to UN bodies, independent human rights experts, and other States. As Kate Jones emphasises, while the substantive standards and institutions associated with rights-based approaches are not a panacea for the challenges posed by AI technologies, they offer an important shared normative and institutional starting point at a time when ‘the current geopolitical stasis is likely to prevent effective multilateral cooperation on new normative frameworks’. 57
To evaluate the promise and perils of rights-based approaches for addressing the risks and concerns that arise at the intersection of AI technologies and climate change, the remainder of this section explores three challenges in this context – concretisation, individualism, and marketised managerialism – and considers the extent to which rights-based approaches might evolve to meet these challenges in practice.
The challenge of concretisation
First, there is the challenge of concretisation. Whether at the international, regional, or (supra-)national level, rights tend to be drafted in abstract and open-textured terms that require concretisation into more specific rules and detailed criteria tailored to the particularities of the technologies, actors, and societal contexts in question. 58 As Smuha explains: ‘A continuous assessment is hence needed of the current interpretations given to these principles, the ways in which these interpretations might fall short to providing satisfying protection against the novel issues raised by AI-systems, and the areas in which the introduction of new – more concretising – rules might advance this goal.’ 59
One concern is that rights-based approaches may afford State and corporate actors overly broad discretion with respect to how different rights and interests might plausibly be understood and, where relevant, balanced. 60 This follows from the fact that AI governance decisions typically affect multiple, sometimes competing, rights and interests, and that authoritative guidance and concretisation through legislative and interpretative initiatives are either nascent or absent. 61 In the AI governance context, this concern is compounded by three factors.
First, AI technologies are transforming the conditions of possibility on which the exercise of rights depends. As Julie Cohen explains: ‘Until relatively recently, rights discourse has operated with a set of unstated and often unexamined assumptions about the built environment's properties – assumptions both about constraint (for example, the physical impossibility of universal surveillance) and about lack of constraint (for example, the open-ended possibilities for construction of gathering space).’ 62
AI technologies are drawing those assumptions into question – dramatically expanding not only ‘the horizons of possibility for communication, association, and intellectual exploration, but… also… the horizons of possibility for surveillance, control of expression and association, and highly granular, microtargeted intermediation of the information environment’. 63 The inscrutability and opacity of AI technologies, their capacity to operate relatively autonomously in real-time and at scale via the online environment's global networked architecture, and their ability to generate insights and predictions based on patterns and relational associations from merged datasets, signal the importance of recognising ‘the central role of sociotechnical configurations in affording and constraining the freedoms and capabilities that people in fact enjoy’. 64 Because AI technologies are dramatically transforming the conditions of possibility for individual, collective and organisational activity, it has been suggested that they require not only ‘a significant recalibration of existing human rights norms with a view to rendering them suitable to protect new needs and interests in an online environment’, 65 but also expanding ‘the frame of reference of rights discourse to encompass the architectural (…) [as part of] a separate and distinct discourse of rights-conceived-as-affordances’. 66
Second, AI technologies straddle public-private divides, diffusing agency across multiple components designed, developed and deployed by a diversity of State and non-State actors interacting via supply chains that can be transnational in scope. 67 Rights-based approaches have evolved to encompass both non-State actors (whether through the non-binding corporate responsibility to respect or regulatory measures adopted by States pursuant to their positive obligations under human rights law) and extraterritorial activities (including through functional approaches to jurisdiction that have expanded the extraterritorial scope of application of international and regional human rights law). 68 However, it remains the case that the substantive standards arising from such frameworks have been developed primarily within a Statist and territorial frame of application. 69 As a result, significant uncertainties remain regarding how such standards are to be understood and concretised within public-private, transnational, socio-technical assemblages, such as those responsible for the design, development and deployment of AI technologies.
Finally, AI also poses a temporal challenge – in the form of risks and uncertainties concerning the future development, application and functioning of AI technologies. Lane, for example, identifies three categories of long-term risks posed by AI technologies: 70 first, uncertainties in how the capabilities of AI technologies will develop in the future; second, the unpredictability of the impacts of AI technologies, particularly since machine learning systems have capabilities to dynamically learn and adapt over time, evolving in response to their deployment in different local contexts and environments in ways that may not be ‘fully deterministic’ from the outset; 71 and finally, unforeseen uses of AI technologies, for example due to function creep. These dimensions of uncertainty, unpredictability, and unforeseeability render effective concretisation of substantive standards more challenging in practice – particularly in light of concerns that although rights-based approaches generally include preventive ex ante obligations, the concretisation of how those obligations apply in particular contexts is often derived from after-the-fact accountability mechanisms. 72
To better understand the interrelation between these different dimensions of the concretisation challenge, as well as possible pathways for alleviating them, the remainder of this section considers how rights-based approaches might address risks and concerns related to two intersections of climate change and AI – climate consumptionism and climate mis/disinformation.
AI technologies and climate consumptionism
Consider, first, the concretisation challenges that arise in relying on rights-based approaches to address AI's climate consumptionism. The close relationship between climate change and human rights is now well-established, with an increasingly wide range of international treaty bodies and regional human rights courts acknowledging the different ways in which climate impacts are generating human rights concerns. 73 However, in elaborating substantive standards for States, human rights institutions have seemed more comfortable addressing adaptation than mitigation. 74 Where the latter has been addressed, standards have tended to be articulated in vague terms (for example, to ‘reduce emissions as rapidly as possible, applying the maximum available resources’ 75 or to ‘undertake measures for the substantial and progressive reduction of their respective GHG emission levels, with a view to reaching net neutrality within, in principle, the next three decades’ 76 ) and/or formulated in highly deferential terms (for example, requiring States to establish a carbon budget whilst affording significant discretion in defining the level of their climate mitigation ambition and designing the operational policies to meet it). 77 For businesses, there has been growing recognition that human rights due diligence pursuant to the UNGPs includes climate due diligence, 78 namely avoiding causing or contributing to climate-related human rights impacts and seeking to prevent such impacts they are linked to through their business relationships. However, a range of ambiguities persist in relation to defining the standard of conduct imposed by climate due diligence. This includes challenges in quantifying the level of GHG emissions that determines whether a business has ‘caused’, ‘contributed to’ or is ‘linked to’ a climate-related human rights impact, 79 as well as concerns over the integrity of how ‘net zero’ targets are defined and implemented by businesses in practice. 80
In the AI governance context, these challenges are exacerbated by the difficulty of quantifying the precise climate impact of the full lifecycle of AI technologies, particularly bearing in mind their opacity and the possibility of a wide range of indirect impacts including rebound effects, 81 the lack of aligned metrics concerning the measurement of GHG footprints of AI technologies to enable comparison, 82 and the propensity of powerful Big Tech firms to greenwash their operations by offering climate pledges of low or moderate integrity. 83 Moreover, while it may be the case that ‘easy-to-use measurement methods already exist for monitoring energy consumption, CO2-equivalent emissions, water consumption, the use of minerals for hardware, and the generation of electronic waste’, 84 quantitative measurements tend to neglect how harms manifest in particular societal contexts. As such, as Lehuedé emphasises, ‘situated, empirical and qualitative studies attending to the needs and visions of the communities and environments participating within AI value chains are [also] required’. 85
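By way of illustration of the kind of ‘easy-to-use measurement methods’ referred to above, the following sketch uses the open-source codecarbon Python package to estimate the operational carbon footprint of a model-training run. The example is illustrative only: codecarbon is one of several available tools and is not drawn from this article's sources, and train_model is a hypothetical placeholder for an energy-intensive workload.

from codecarbon import EmissionsTracker  # pip install codecarbon

def train_model():
    # Hypothetical placeholder for an energy-intensive training loop.
    pass

# The tracker samples hardware power draw and combines it with regional
# grid carbon-intensity data to estimate CO2-equivalent emissions.
tracker = EmissionsTracker(project_name="climate-ai-audit")  # illustrative name
tracker.start()
train_model()
emissions_kg = tracker.stop()  # returns estimated emissions in kg CO2eq
print(f"Estimated operational emissions: {emissions_kg:.4f} kg CO2eq")

Notably, tools of this kind capture only operational energy use; they do not account for the embodied emissions of hardware manufacturing, water consumption, or end-of-life disposal – underscoring the point that quantitative metrics alone offer a partial picture.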
To address some of these challenges, several rights-based regulatory initiatives have begun to emerge within the EU in recent years that offer at least some pathways and entry points for better concretising standards addressing AI's environmental footprint. 86
The AI Act, for example, contains several provisions aimed at reducing AI's climate impact, albeit in significantly weakened form compared to an earlier draft advanced by the European Parliament. 87 Article 40(2) requires the European Commission to request standardisation bodies to provide deliverables on ‘reporting and documentation processes to improve AI systems’ resource performance, such as reducing the high-risk AI system's consumption of energy and of other resources during its lifecycle, and on the energy-efficient development of general-purpose AI models’. 88 Annex XI confirms that providers of general-purpose AI models must disclose information on the ‘known or estimated energy consumption of the model’. 89 Finally, Article 95(2)(b) requires the AI Office and Member States to facilitate the drawing up of voluntary sustainability codes of conduct, with the aim of ‘assessing and minimising the impact of AI systems on environmental sustainability, including as regards energy-efficient programming and techniques for the efficient design, training and use of AI’. 90
These provisions may be critiqued for their vagueness (for example, the reference to ‘other resources’ in Article 40(2) leaves significant discretion to standardisation bodies to determine which environmental impacts beyond energy consumption fall within its scope), narrowness (for example, the exclusive application of Annex XI to providers of general-purpose AI models neglects other AI technologies), and potentially weak enforcement (with delegation of standard-setting to industry-dominated standardisation bodies and voluntary codes of conduct). 91 Nonetheless, these provisions signal a starting point both for reducing the opacity of the environmental impacts of AI technologies and for defining standards for developing and using AI systems in a sustainable manner. Moreover, as Hacker suggests, it remains possible to integrate more robust requirements of ‘sustainability by design’ during the next scheduled update of the AI Act – for example, requiring that environmental considerations are embedded within the design and implementation of both high-risk and non-high-risk AI systems through sustainability impact assessments and mitigation measures, carefully designed to ensure that ‘all reasonable levers are pulled to minimise the contribution of [AI technologies] to climate change’. 92
Beyond the AI Act, Articles 34–35 of the Digital Services Act (DSA) require providers of very large online platforms and very large online search engines (defined as those with over 45 million EU users per month) to assess and mitigate ‘systemic risks’ stemming from the design, functioning and use of their services, including ‘any actual or foreseeable negative effects for the exercise of fundamental rights’. 93 Reflecting on these requirements, Griffin suggests that the European Board for Digital Services could use its power to publish reports identifying the most prominent and recurrent systemic risks as well as best practices concerning how to mitigate such risks (Article 35(2) DSA) to elaborate clear sustainability by design standards, while the European Commission could use its ability to issue guidelines (Article 35(3) DSA) and industry codes of conduct (Article 45 DSA) to similar effect. 94 Even where formally non-binding, such initiatives could factor into evaluations of compliance with Articles 34–35, giving them ‘a quasi-binding character’. 95
Beyond the sphere of digital regulation, the recently adopted Corporate Sustainability Due Diligence Directive (CSDDD) requires businesses within its scope of application to adopt and implement annual transition plans for climate mitigation. 96 This provision has been critiqued for affording companies substantial leeway – both in setting absolute emission reduction targets (‘where appropriate’) and in ensuring that the business model and strategy of the company are compatible with the transition to a sustainable economy (‘through best efforts’). 97 Concerns have also been raised that national authorities are only empowered to supervise the adoption and design and not the effective implementation of such plans, while Member States are not required to provide for civil liability with respect to climate transition plans. 98 Nonetheless, as Buser observes, the CSDDD may be viewed as establishing a baseline that Member States can build upon, in particular by ‘establish[ing] more detailed guiding principles for climate transition planning and subject[ing] it to legal review, such as through administrative oversight and the possibility for civil society actors to submit substantiated concerns’. 99 In this way, the requirement that businesses produce transition plans could become an additional avenue for developing and clarifying corporate climate mitigation standards.
Therefore, although far from a panacea, these frameworks offer avenues for articulating more concrete standards for corporate actors to reduce both their own direct energy consumption as well as their downstream impacts. 100 In particular, these frameworks could be harnessed to nurture standards in a diversity of areas, including standards for the construction of data centres that take into account their ability to rely on renewable sources of energy, requirements to opt for less energy-intensive tools for software development and machine learning, processes for assessing whether the environmental risks of particular AI-enabled products are justified by their societal benefit, and content moderation standards for platforms concerning their amplification of environmentally sustainable products and their prohibition of advertising for the most environmentally harmful products. 101
AI technologies and climate mis/disinformation
A further illustration of the challenge of concretisation concerns the articulation of substantive standards within the context of addressing AI-driven climate mis/disinformation on online platforms. As traditionally understood, there are two dimensions of human rights standards that render them an awkward fit for addressing climate mis/disinformation on online platforms.
First, human rights standards governing freedom of expression have evolved primarily with States in mind. Applying those standards to private platforms requires grappling with questions such as: what purposes are considered ‘legitimate’ for restricting speech in the online content moderation context; what weight should be afforded to platform business interests within moderation decisions; and how to contend with the lack of competence of platforms to balance competing rights and interests, such as national security and public order, in the diversity of contexts in which platforms operate. 102
Second, human rights standards governing freedom of expression have generally been developed via what Evelyn Douek has termed an ‘ex post review mode of error correction and accountability’, 103 which fails to address or grapple with some of the unique affordances of online platforms, including the unprecedented scale and speed of speech that is distributed on platforms around the world which makes errors in content moderation inevitable, 104 the diversity of remedies available to platforms beyond the ‘leave up / take down’ binary, 105 and the ongoing evolution and impact of platform design choices and algorithmic recommender systems on the restriction and distribution of speech. 106
While these challenges are considerable, they need not be read as arguments for abandoning rights-based approaches in the content moderation context. Rather, they may be viewed more modestly as revealing the need for developing processes capable of rendering substantive standards better attuned to the particularities of online platforms. The EU's DSA holds some promise in this regard, although significant uncertainty remains concerning how the legislation will be understood and implemented in practice.
On the one hand, the DSA requires online platforms to put in place ‘notice and action’ mechanisms, pursuant to which users can notify platforms of content they consider to be illegal. 107 Sufficiently substantiated notifications can trigger platform liability unless the platform acts expeditiously to remove or disable access to the illegal content. 108 National judicial or administrative authorities are also empowered to issue ‘removal orders’ relating to specific items of content. 109 These provisions of the DSA reflect a more traditional ex post model of human rights law that directs platform attention towards reviewing individual pieces of content at the expense of examining how individual outcomes may be the result of either ‘problematic system design’ or ‘a calculated and reasonable ex ante tradeoff between differing values’. 110 As several commentators have observed, this model is structurally limited in significant respects. It focuses narrowly on user content at the expense of behaviour, the remedy of removal at the expense of algorithmic amplification and other design choices, and the rights of users at the expense of non-users who may also be impacted by the consequences of content moderation. 111
More promisingly, however, the DSA also requires very large online platforms and very large online search engines to assess and mitigate ‘systemic risks’ to a range of public interests stemming from the design and functioning of their services. This component of the DSA is better equipped to address climate mis/disinformation in at least two respects: first, systemic risks extend beyond a narrow focus on illegal content (the scope of which climate mis/disinformation will generally fall outside) to encompass a broader set of public concerns including ‘any actual or foreseeable negative effects for the exercise of fundamental rights’; 112 and second, measures considered relevant to mitigate systemic risks extend beyond the removal of individual pieces of content towards a wider concern for platform design and user empowerment. 113 These dimensions of systemic risk assessments established by the DSA have the potential to provide a foundation for a more systemic conception of rights to develop over time. 114
A more systemic understanding of rights would accept that errors in content moderation at scale are inevitable. Hence, the pertinent questions become whether platforms are taking sufficient action to reduce errors, what kinds of false positives and false negatives platforms should err on the side of with respect to particular types of content in particular societal contexts, and which groups are likely to bear the costs of errors in practice. 115 Framed in these terms, standards could evolve to focus on ‘the upstream choices about design and prioritisation in content moderation that set the boundaries within which downstream paradigm cases can occur’. 116 This could occur through a dynamic and ongoing process of systemic risk assessment that allows for ‘innovation and iteration’ over time rather than one-off fixed solutions to the challenges of online speech governance. 117 Such an approach might even provide a pathway for scrutinising particular aspects of platform business models, including their reliance on behavioural surveillance to amplify organic content and microtarget paid content, to the extent that they are found to contribute to the promotion of mis/disinformation in specific contexts. 118
*****
While these attempts to address the concretisation challenge of rights-based approaches are laudable, it is important to recognise the limits of such efforts, which tend to address what Buser has termed ‘not enough light critiques’ – namely, critiques that highlight limits in the scope and specificity of rights-based frameworks, but risk overlooking the need for more fundamental reform. 119 Reflecting on AI's environmental footprint, for example, Luccioni, Strubell, and Crawford emphasise that AI currently operates within a ‘market-driven context’ that ‘rewards rapid growth and ever-increasing computational power’. 120 As such, in order to address AI's climate consumptionism, what is required is not merely ‘measuring or quantifying impacts more thoroughly within existing market logics’, but rather ‘a more substantial reimagining of the relationship between AI technologies, business objectives, and ecological imperatives’. 121 This raises the question whether rights-based approaches are conceptually capable of addressing the societal scale of the risks and concerns that arise at the intersection of climate change and AI technologies – a question to which this article will now turn.
The challenge of individualism
A prominent strand of more fundamental conceptual critiques of rights-based approaches centres on what may be termed the challenge of individualism, understood as the tendency of human rights frameworks to focus on individual cases of harm to the neglect of societal harms. As noted earlier in this article, reliance on AI technologies to address the climate crisis gives rise to a diversity of risks, many of which implicate collective and societal interests and values. 122 Although rights-based approaches encompass a number of collective rights, including the right to self-determination and the right to a healthy environment, much of the normative and institutional framework associated with such approaches remains individualised in orientation – focused on individual rights-holders and individualised conceptions of harm. 123 This individualism poses a number of challenges to the capacity of rights-based approaches to meaningfully address the societal dimensions of the risks that arise at the intersection of climate change and AI technologies.
First, it may prove difficult to shoehorn many larger societal concerns into the categories of rights, which are often more attuned towards expressing and addressing individualised rather than collective and relational harms (the expressive challenge). 124 As Salomé Viljoen explains, data production in the AI economy is ‘deeply – even fundamentally – relational’. 125 The data collection practices of leading technology companies are aimed primarily at ‘deriving (and producing) population-level insights regarding how data subjects relate to others, not individual insights specific to the data subject’ – a process that apprehends people in terms of their social relations in order to develop models to predict and change behaviour not only of the data subject but all individuals who share those population features. 126 Importantly, data's relationality is not only central to what makes data production economically valuable, 127 but can also have harmful distributive effects by ‘spread[ing] the benefits and risks of data production unevenly among actors in the digital economy, often along the lines of group identities that serve to inscribe forms of oppression and domination’. 128
By approaching AI governance through the prism of individual rights, the risk arises of ‘reduc[ing] legal interests in information to individualist claims subject to individualist remedies that are structurally incapable of representing the population-level interests that arise due to data-horizontal relations’. 129 Such an approach risks enabling ‘significant forms of social informational harm to go unrepresented and unaddressed in how the law governs data collection, processing, and use’. 130 In addition, by falling back on individuals to adjudicate between legitimate and illegitimate forms of information production, such an approach also risks ‘foreclosing socially beneficial forms of data production’. 131
Second, given the complexity and opacity of AI technologies, it may prove challenging for individuals to learn that their rights have been potentially violated and, consequently, to assert their rights in practice (the opacity challenge). 132 Moreover, even where individuals become aware of a possible violation, the impact on any particular individual may be deemed too minor to justify the energy and resources associated with doing so (the collective action challenge). 133 As Yeung explains, ‘one of the distinctive and novel challenges which AI systems now pose arises from their capacity to operate in a highly targeted and personalised manner, yet in real-time and at a population-wide scale, which could pose serious societal threats but for which the motivation for any individual to try and counter these threats may be extremely weak’. 134
Although these concerns gesture towards some of the limits of rights-based approaches, there are several ways in which the vocabulary of rights may be harnessed to address the societal dimensions of risks at the intersection of climate change and AI technologies at least to a certain degree.
The expressive challenge
First, in terms of the expressive challenge, rights-based approaches provide some scope to incorporate a more societal lens for addressing risks at the intersection of climate change and AI technologies. The tripartite test of legality, legitimacy and necessity, for example, enables human rights actors to assess not only individual harm but also the underlying rationale for designing and deploying particular AI technologies. 135 As Lorna McGregor explains, by exposing the evidence-base for relying on particular technologies in specific societal contexts, the tripartite test can locate justifications for the introduction of particular technologies ‘within wider policies which themselves may be the drivers for technological uptake’. 136 For example, where human rights law helps expose the deployment of surveillance technologies at borders to deter climate-induced migrants, ‘the problem definition widens from the human rights impact of surveillance technologies to the human rights impact of securitised border policies, thus expanding the governance approaches required to effectively protect human rights’. 137
Beyond the tripartite test, rights can also be examined in ways that recognise how AI technologies impact different rights in interconnected ways rather than in isolation from each other. Daragh Murray, for example, has explored how the chilling effect induced by retrospective facial recognition technology will typically bring the rights to privacy, expression and assembly simultaneously into play such that the harm that results is ‘greater than the sum of its parts’. 138 This exerts not only ‘a profound impact on the process by which individuals develop their personality, including their political opinion’, 139 but also results in ‘a society-wide effect that threatens to undermine the effective functioning of participatory democracy’. 140 Murray puts forward the concept of ‘compound human rights harm’ as a means for human rights actors to recognise and address ‘the interconnected nature of human rights’. 141 The concept also more convincingly confronts the ways in which rights combine to safeguard ‘not only individuals’ rights, but also the societal processes central to individuals’ development of their identity and to democratic functioning’. 142
Murray's proposal complements the general direction of travel of human rights institutions, whose ‘tunnel vision’ has gradually given way to an approach that demonstrates increasing interest in the application of the principle of systemic integration as well as the Vienna Declaration formula of the indivisibility, interdependence, and interrelatedness of all human rights – resulting in more frequent references to State obligations in different human rights treaties as well as other international legal instruments including those within the field of international environmental law. 143 It also complements approaches to the right to non-discrimination which reject the notion of examining discrimination based on only one ground at a time in favour of compound and intersectional approaches that understand the grounds of non-discrimination in both additive and relational ways – namely, as conduits for examining how different aspects of an individual's identity interact against the background of societal relations of power to produce disadvantage. 144 In a similar vein, recognising compound and intersectional human rights harms could provide a pathway for human rights actors to examine how different rights interact and intersect to produce societal harms that cannot be fully understood by focusing on individual rights in isolation.
Beyond the merits of the tripartite test and a compound-intersectional lens of human rights harm, rights-based frameworks may also be relied upon to drive the regulatory conversation towards establishing red-lines – requiring the prohibition of certain AI technologies, applications or use cases in light of the societal harms to which they give rise. 145 At the intersection of climate change and AI, red line argumentation has been advanced by civil society groups to call for banning behavioural surveillance-based advertising on online platforms implicated in the spread of harmful content including climate mis/disinformation, 146 spyware software implicated in enabling access to the entire digital life of human rights defenders and journalists including those working on climate activism, 147 and real-time biometric identification technologies in publicly accessible spaces which may be relied upon to establish digital borders potentially implicated in the deterrence of climate-induced migration. 148
In practice, whether red-line arguments are successful will always be contingent on the context in which they are advanced. The European Court of Human Rights, for example, has proven somewhat reluctant to establish red lines in the digital surveillance context, preferring to set procedural guardrails that enable what it considers to be rights-compliant deployments of AI technologies. 149 In the legislative arena, the EU's AI Act takes the important step of accepting the principle that certain AI systems must be prohibited due to their adverse impacts on fundamental rights, but it is riddled with carve-outs that aim to satisfy the national security interests of Member States. 150 On the one hand, these illustrations reveal that there is no guarantee that red-line argumentation will succeed and that rights-based approaches may even legitimate the development of repressive AI systems. At the same time, these examples also reveal the capacity of rights-based approaches to steer the regulatory conversation beyond a concern with ‘fixing’ or ‘perfecting’ AI systems towards more ‘fundamental existential questions’ concerning which AI technologies, applications and use cases deserve to be prohibited in light of the societal harms and concerns to which they give rise. 151
The opacity and collective action challenges
Turning to the opacity and collective action challenges, one avenue through which the capacity of rights-based approaches to address societal harms may be enhanced is the adoption of more flexible procedural rights. Such rights may include providing researchers with access to data to gain a deeper understanding of the systems and processes of technology companies, as well as establishing looser standing conditions to enable organisations to exercise rights on behalf of individuals. An example of the former is Article 40(4) of the DSA, which provides that very large online platforms and very large online search engines must provide access to data to researchers vetted by national authorities. 152 An example of the latter is Article 80 of the General Data Protection Regulation, which not only grants data subjects the right to mandate non-profit bodies to lodge complaints on their behalf, but also affords EU Member States leeway to provide that such bodies have the right to lodge complaints independent of a data subject's mandate. 153 Although this right remains tied to a demonstration of individual harm, in practice organisations such as noyb have been able to use individual cases to bring representative complaints that target corporate business models and systems. 154 Drawing an analogy with EU environmental law, Smuha suggests that these types of provisions could be taken further by delinking them from the need to demonstrate individual harm given the societal interests at stake. 155 Such an approach could open up pathways for improving the scrutiny directed towards the societal risks associated with AI-based climate mitigation and adaptation projects – for example, by helping to ensure that such projects are designed and developed in consultation with, and with adequate consideration for, the societal contexts of their application; that they adhere to data protection safeguards, including data protection by design and by default as well as the principles of data integrity, confidentiality, and minimisation; and that guardrails are established to prevent their co-option or function creep.
Beyond procedural rights, rights-based approaches may also be envisaged in a more structural sense, 156 one that relies on ‘a structural understanding of power relations as providing a basis for legal intervention’. 157 A structural conception of rights places emphasis on positive State intervention as a means of safeguarding public and societal values, such as the population-level interests at stake in data governance. While such a conception may not always in and of itself offer a blueprint for the form regulation should take, 158 it can nonetheless provide a basis for advocating for collective normative frameworks that seek to address the population-level interests impacted by AI technologies, in particular by situating affected rights within the structural imbalances of power that characterise the AI lifecycle. 159
Such frameworks might include, for example, public oversight and monitoring mechanisms to safeguard societal interests and ensure marginalised communities are at the forefront of decisions concerning the design, development and deployment of AI systems in climate mitigation and adaptation projects. 160 A promising approach for this purpose is ‘Design from the Margins’ (DFM), defined by Afsaneh Rigot as ‘a design process that centers the most impacted and marginalised users from ideation to production, in order to expand the scope of user needs, experiences, and risks considered during the development of technologies’. 161 This concept is based on the knowledge that ‘when those most marginalised are designed for, we are all designed for’. 162 As Rigot explains, by foregrounding decentred users – identified as ‘those most at risk and under-supported in the contexts in question’ – DFM seeks to shift the balance of power towards the interests, voices and experiences of those at the margins and to address ‘the effects of western-centrism on vulnerable and/or hard-to-reach communities’. 163 In a similar vein, Zalnieriute calls for decolonising AI technologies by incorporating ideas from the Global South and Indigenous epistemologies as a means of helping ‘recognize and explicitly acknowledge the power disparities, exploitation, and coloniality of data production as a collective rights deprivation and a new form of ongoing structural colonial violence’. 164 By centring the voices of affected communities, these approaches seek to provide a means for communities to resist the adoption of AI technologies where they are considered unnecessary or detrimental in particular societal settings, as well as to address the adverse societal impacts that may arise from their design, development and deployment in practice.
In the context of risks at the intersection of climate change and AI technologies, such approaches might involve centring the voices of those most at risk within AI supply chains, as well as those at the margins of potential climate mitigation and adaptation projects such as smart city and smart agriculture initiatives. Viljoen's call for institutional forms of data governance that secure ‘affirmative rights to representation in the conditions and purposes of data production’, and Hacker and Neyer's proposal to establish ‘substantively smart cities’ that centre community participation in smart environments within robust legal boundaries defined by fundamental rights, could each be understood as grounded in a structural understanding of rights. 165 Such an understanding seeks to move beyond a focus on individual rights towards a perspective that takes imbalances of power as its point of departure for requiring States to establish regulatory frameworks for managing the societal interests at stake in AI governance. Gianclaudio Malgieri and Frank Pasquale's proposal for a system of ‘unlawfulness by default’ for AI systems – encompassing ‘an ex-ante model where some AI developers have the burden of proof to demonstrate that their technology is not discriminatory, not manipulative, not unfair, not inaccurate, and not illegitimate in its legal bases and purposes’ – offers an additional example of a regulatory framework that takes the skewed power dynamics of AI governance as its starting point. 166
*****
These attempts to address the challenge of individualism are significant, offering potential pathways for rights-based approaches to address the societal dimensions of risks at the intersection of climate change and AI technologies in ways that strive to take into account the structural imbalances of power that characterise the AI lifecycle. Again, however, it is important to recognise the possible limits of such efforts: while they strive to account for structural imbalances of power, they may not always prove successful in confronting the structural power of actors within the AI lifecycle. In this regard, as Law and Political Economy scholars explain, the challenge remains – whether in the AI context or beyond – of addressing ‘the legal structures that facilitate the accumulation of private power in the first place, which in turn leads to predictable patterns of human rights violations’. 167 The significance of this observation becomes clear when considering a final challenge to rights-based approaches – namely, the challenge of marketised managerialism.
The challenge of marketised managerialism
Even if the concretisation and individualism challenges can be addressed at least to a certain degree, a further challenge remains in the form of ensuring the effective implementation and enforcement of rights-based approaches in practice. At the intersection of AI and climate change, this challenge is heightened by the central role performed by Big Tech companies in designing, developing and deploying AI technologies across different societal contexts. To date, two models have emerged as the dominant forms of enforcement, each of which reflects what may be termed marketised managerialism – a reliance on informal, market-friendly modalities of compliance that devolve significant regulatory authority and discretion to the private sector.
The first form of marketised managerialism is self-regulation. In the field of business and human rights, the UNGPs establish a non-binding corporate responsibility to respect human rights based on ‘a global standard of expected conduct’. 168 Underlying the corporate responsibility to respect is an assumption that nurturing corporate processes to identify and disclose actual or potential adverse human rights impacts with which their businesses may be involved will incentivise corporate action to address those impacts, particularly in light of the pressures that may arise from investors, civil society groups, and consumers. 169 The result is a framework that is innovative but vulnerable. As Klaas Eller explains, ‘the UNGPs contest the boundaries that the concepts of contract and corporate personality draw for individual responsibility… [but] at the price of exposing human rights to a logic of risk assessment in which companies enjoy broad interpretive and managerial authority’. 170 By failing to grapple with the significant political and economic power of technology companies, the self-regulatory framework established by the corporate responsibility to respect has risked generating what Zalnieriute has termed ‘procedural washing’, whereby businesses implement cosmetic safeguards and tick-box compliance exercises to protect their reputation and strengthen their brand, guard against the development of binding regulation, and cement and legitimise the status quo – at the expense of more substantive changes to their business models and practices. 171
The inadequacies of self-regulation in the corporate sector in general, and in the technology sector in particular, have led to a wave of new and updated rights-based regulatory frameworks in recent years. However, these frameworks have also tended to be characterised by a particular variety of marketised managerialism, which Cohen and Waldman have termed ‘regulatory managerialism’. 172 Regulatory managerialism is both ‘a governance toolkit’ and ‘a deeply internalized governance orientation’, 173 characterised by a number of attributes: 174 first, reliance on procedurally informal practices, including best practice statements and compliance certifications, rather than more formal rulemaking and enforcement processes; second, devolution of regulatory authority and oversight to private actors, including certification professionals and audit intermediaries, who construct ‘opaque technocracies’ of compliance structures that risk ‘conceal[ing] predatory corporate behavior beneath a veneer of procedural legitimacy and creat[ing] a feedback loop in which the organizational structures, processes, and vernaculars of managerialism silence and marginalize anti-managerial voices and traditions’; and third, a neoliberal ethos that privileges corporate interests and priorities for efficiency-maximization, data-driven economic growth, and capital accumulation at the expense of public values including social and political accountability. The result is a form of regulation that ‘invites – and often welcomes – co-optation’. 175 Drawing on the work of sociologist Lauren Edelman, Waldman explains how vague and open-textured legal frameworks, including various rights-based digital regulations, provide corporate compliance professionals with leeway ‘to frame the law in accordance with managerial values like operational efficiency and reducing corporate risk rather than the substantive goals the law is meant to achieve’. 176 Such frameworks thereby open the door for companies ‘to create structures, policies, and protocols that comply with the law in name only’ and generate a risk that judges and policymakers will defer to those structures as best practices, ‘mistaking mere symbols of compliance with adherence with legal mandates’. 177
Examples abound of regulatory managerialism within the sphere of digital regulation. Consider, for example, the DSA's requirement that systemic risk assessments undergo independent audits at least once per year. 178 As noted earlier in this article, the requirement that very large online platforms conduct systemic risk assessments and implement mitigation measures could provide an important basis for addressing climate mis/disinformation. However, as Laux, Wachter and Mittelstadt explain, ‘due to their size, VLOPs will be in a position to leverage their market power against their mandatory auditors, hence creating the risk of “audit capture”’ – that is, auditors' dependence on a small number of very large online platforms for business, which may weaken their incentives to scrutinise their clients comprehensively. 179 A further illustration is the AI Act's delegation of considerable interpretative discretion to European standardisation organisations to elaborate harmonised standards for high-risk AI systems. 180 Importantly, high-risk AI systems that conform with such harmonised standards will benefit from a presumption of conformity with the requirements set out in Chapter 2 of the AI Act. 181 Yet, as several commentators have cautioned, technical standardisation bodies tend to be dominated by industry actors who are prone to regulatory capture, populated by engineering experts with minimal knowledge of fundamental rights, and lacking any significant public oversight from EU institutions. 182
*****
Reflecting on the challenge of marketised managerialism, it is suggested that this challenge should not be viewed as simply requiring more robust compliance and enforcement measures. Rather, it signals the need for a more fundamental reckoning with the institutional foundations of private power within the AI lifecycle. Importantly, as Kampourakis and Lane recently observed within a broader discussion of the business and human rights movement, ‘the emergence of such private power is neither a natural given nor historically contingent and accidental’. 183 Confronting the structures and logics of private power that enable exploitation and structural inequality at the intersection of climate change and AI technologies therefore requires a dual lens: 184 first, a legal-institutional lens that seeks to identify the ways in which the accumulation of private power within the AI lifecycle is ‘the historical product of existing legal regimes and institutional constellations’ and to transform those legal structures in ways that strive to enable ‘fundamentally different orderings’ better attuned to addressing rather than exacerbating the climate crisis; 185 and second, a legal-social lens that situates those existing legal regimes within ‘the social relations that condition the production of legal text and meaning’ and therefore emphasises the importance of ‘collective action and social movements in transforming the underlying social relations of production’. 186
Rights-based interventions in support of such transformations might include some of the measures mentioned earlier in the article, such as red-line arguments that seek to prohibit certain types and uses of AI technologies, as well as initiatives that seek to account for the structural imbalances of power within the AI lifecycle. However, such interventions would need to be strategically oriented in two senses. 187
First, in terms of perspective, to mobilise strategically is to conduct a particular intervention not to ameliorate the status quo but to advance longer-term structural transformation 188 – in this case, to transform the structures of power within the AI lifecycle in ways that seek to ensure that AI technologies are subordinated to and become part of ‘a deep sustainability transformation’ aimed at ‘eliminating the root causes of unsustainable production and consumption patterns, not just alleviating their symptoms’. 189 To this end, the design, development and deployment of AI technologies would need to be integrated within, and oriented towards transforming, patterns of production and consumption based on ‘principles such as regenerative design, circularity, sufficiency and equity’ rather than exponential growth. 190 Importantly, rights-based interventions towards such ends need not be conducted in isolation, but can be harnessed as part of ‘emancipatory or critical multilingualism’ – wider campaigns that rely on diverse, sometimes conflicting, emancipatory languages beyond rights-based frameworks in their struggle for change. 191
Second, in terms of evaluation, to intervene strategically is to form a judgment about the relative merits of mobilising the vocabulary of rights in any particular context, in particular by evaluating the risk that rights-based frameworks may prove redundant or may even legitimate the very interests to which the mobilisation is opposed. This judgment will always be a prediction – no form of mobilisation is completely immune to co-option or guaranteed to contribute towards strategic ends. In this regard, as Dao has recently emphasised, it is important to remember that rights have historically taken myriad forms, including ‘neoliberal, technocratic and datafied forms’ that arguably do more to stabilise the status quo than enable transformational change. 192 At the same time, there is also a history of anti-imperial and anti-neoliberal human rights struggles on which more transformative efforts might draw for inspiration – struggles that have sought to harness the vocabulary of human rights as one tactical component within wider strategic efforts to construct a more just political economy. 193
CONCLUSION
In an era of ecological breakdown, the turn to AI technologies to address climate change is understandable but accompanied by a range of risks and concerns. These range from AI's consumptionism and potential legitimation of techno-centric thinking within climate mitigation and adaptation initiatives, to AI's entanglement in repressive forms of surveillance and facilitation of regressive climate narratives. At a time when these risks are beginning to garner greater public attention, this article has sought to assess the promise and perils of rights-based approaches for confronting them.
A central conclusion of the article is that harnessing rights-based approaches requires acknowledging their limits, uncertainties, and perils – recognising rather than understating the weaknesses of rights as a normative and institutional framework for addressing risks at the intersection of climate change and AI. To this end, the article explored three challenges in the form of concretisation, individualism, and marketised managerialism that may inhibit the value of rights-based approaches in practice – revealing the potential for rights-based frameworks to neglect and even legitimate risks at the intersection of climate change and AI rather than address them.
At the same time, this article also suggests that acknowledging the limits and perils of rights-based approaches need not signal their abandonment, but may rather enable avenues to be identified for their adaptation. In this spirit, several pathways were identified for rights-based approaches to evolve to meet the challenges of concretisation, individualism, and marketised managerialism, at least to a certain extent. Substantively, these pathways include the development of standards that are more attuned to the affordances and uncertainties of AI technologies – whether in terms of concretising corporate climate mitigation targets, addressing the design and algorithmic dimensions of online platforms that enhance the spread of climate mis/disinformation, or establishing red lines to guard against repressive forms of surveillance of climate campaigners and migrants. Procedurally, these pathways include the development of processes and accountability frameworks that are more attuned to the power disparities of the AI ecosystem – whether in terms of ensuring researcher access to data to gain deeper insight into the systems of AI companies, enabling collective entities to exercise rights on behalf of individuals through looser standing conditions, or facilitating meaningful forms of participation that centre the most impacted and marginalised communities throughout the AI lifecycle.
Ultimately, while one should not be starry-eyed about the promise of rights-based approaches – particularly in confronting the legal and institutional foundations of power in the AI lifecycle – this article suggests that ‘the answer is not to disengage (…) but rather to make pragmatic use of the rights that do exist; and to find alternative channels through which to engage in the ongoing production of normative codes in order to redefine and radicalize rights from below’. 194
Footnotes
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
