Abstract
This article sets out the rationale for a United Nations Regulation for Artificial Intelligence, which is needed to define the organisation's modes of engagement when using artificial intelligence technologies in pursuit of its mission. It argues that, given the increasing use of artificial intelligence by the United Nations, including in some activities considered high risk by the European Commission, a regulation is urgent. It also contends that rules of engagement for artificial intelligence at the United Nations would support the development of ‘good artificial intelligence’, by giving developers clear pathways for authorisation that would build trust in these technologies. Finally, it argues that an internal regulation would build upon the work in artificial intelligence ethics and best practices already initiated in the organisation and could, like the EU regulation through the Brussels Effect, set an important precedent for regulation in other countries.
On 21 April 2021, the European Commission published a proposal for a regulation on artificial intelligence (AI) (European Commission, 2021), the first attempt of its kind to regulate the multi-tentacled beast that is AI. The document goes into detail on definitions and uses of the technologies, and includes very important sections on prohibited uses of AI and on high-risk activities, proposing that the latter go through an authorisation process, much as a drug or a car would, before being made available to the public.
However, as set out in Article 2, the regulation does not apply to international organisations, regardless of whether they are operating in European Union (EU) territory. Naturally, the EU does not have jurisdiction over international organisations such as the United Nations, which is governed by international law (United Nations, 2021). The exclusion therefore does not come as a surprise, but it does point to a gap in AI regulation, especially since technologies classified as prohibited or high risk for European citizens would presumably also be very risky for all other citizens, regardless of nationality. Furthermore, as explained by Bradford (2020) in The Brussels Effect, the EU is known to set regulatory precedents globally. It is therefore likely that the new regulation will have a ripple effect reaching many other countries. In that context, regulating at the United Nations itself would be an important way for the organisation to demonstrate leadership on this pressing societal issue.
The activities of the United Nations are regulated by a series of legal instruments in international law that have been developed since the drafting of the Charter of the United Nations (United Nations, 1945). The UN Charter sets out the roles of the Secretary-General, the General Assembly and the Security Council, as well as the activities of UN Peacekeepers. Since 1945, there have been a few amendments to the Charter, as well as Security Council and General Assembly resolutions on issues that were not considered at the organisation's founding. The United Nations itself is very likely to develop a convention on AI, which member countries would then sign and possibly ratify (Garcia, 2020; Latonero, 2018). However, such a document, like other United Nations conventions, would consist of a commitment by its member states to more ethical AI practices, rather than a regulation for the organisation itself to follow.
This article therefore sets out the rationale for a United Nations Regulation for Artificial Intelligence, which is needed to define the organisation's modes of engagement when using AI technologies in pursuit of its mission. It argues that, given the increasing use of AI by the United Nations, including in some activities considered high risk by the European Commission, a regulation is urgent. It also contends that rules of engagement for AI at the United Nations would support the development of ‘good artificial intelligence’,1 by giving developers clear pathways for authorisation that would build trust in these technologies. Finally, it argues that an internal regulation would build upon the work in AI ethics and best practices already initiated in the organisation and could, like the EU regulation through the Brussels Effect, set an important precedent for regulation in other countries.
The United Nations’ artificial intelligence ecosystem
AI technologies have, over the past decade, been increasingly used by United Nations agencies, funds and programmes. Several research and development labs, including the United Nations Secretariat's Global Pulse Lab (2021), the United Nations High Commissioner for Refugees' (UNHCR) Jetson initiative (UNHCR, 2021), the United Nations Children's Fund's Innovation Labs (UNICEF, 2021) and the United Nations Office for the Coordination of Humanitarian Affairs' (OCHA) Centre for Humanitarian Data (OCHA, 2021a), have focused their work on developing AI solutions that would support the UN's mission, notably in anticipating and responding to humanitarian crises. These research labs are largely composed of United Nations staff members and consultants trained in data science technologies. Many of these labs have developed proofs of concept, which are yet to be integrated into the United Nations’ activities.2 They aim, for example, to explore the use of AI modelling in anticipating refugee arrivals (Jetson), to predict coronavirus disease-2019 (COVID-19) cases in countries with ongoing humanitarian crises or to better understand risks to the Sustainable Development Goals (Global Pulse).
United Nations agencies have also used biometric identification to manage humanitarian logistics and refugee claims. For example, the Population Registration and Identity Management Ecosystem (PRIMES) was launched in 2018 by UNHCR to provide a centralised platform for the management of refugee data. By the end of 2018, the biometrics of 7.1 million refugees were managed through the platform (UNHCR, 2019). The World Food Programme (WFP) has also used biometric identification in aid distribution to refugees, coming under some criticism in 2019 for its use of this technology in Yemen (Raftree and Steinacker, 2019).
In parallel, however, the organisation has begun partnering with private companies that provide advanced analytical services. A notable example is the WFP, which in 2019 signed a USD 45 million contract with Palantir, an American firm specialising in data collection and AI modelling (Greenwood, 2019). In 2014, the United States Bureau of Immigration and Customs Enforcement (ICE) awarded a USD 20 billion contract to Palantir to track undocumented immigrants in the United States (Woodman, 2017). Palantir's methodology was a classic case of the ‘Mosaic Effect’, a technique used by intelligence agencies in which multiple data sources are combined to uncover personally identifiable information that might otherwise have been obscured. This technique is discussed in some detail by OCHA (2021c), which explains that such databases can be used to reidentify highly vulnerable populations such as migrants, in order to expel them from an area, recruit children for warfare or otherwise threaten their safety. Palantir used this technique as part of its ICE contract, collecting and centralising large numbers of databases to reveal the location of migrants, leading to their eventual expulsion. Several human rights watchdogs, including Amnesty International, have raised concerns about Palantir, writing in a September 2020 report that: ‘Given Palantir's increasingly entrenched role in government operations and its multiple contracts across federal agencies, concerns about its human rights record are growing in urgency and deserve scrutiny’ (Amnesty International, 2020).
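To make the Mosaic Effect concrete, the sketch below shows how two datasets that are individually ‘anonymous’ can be joined on shared quasi-identifiers to re-identify people. It is a minimal illustration only: the datasets, field names and values are hypothetical, not drawn from any system described above.

```python
# Hypothetical illustration of the 'Mosaic Effect'. Neither dataset alone
# names anyone; joining them on shared quasi-identifiers does.
import pandas as pd

# Dataset A: an aid-distribution log stripped of names.
aid_log = pd.DataFrame({
    "camp": ["Camp 1", "Camp 1", "Camp 2"],
    "birth_year": [1987, 1992, 1987],
    "household_size": [4, 2, 5],
    "ration_collected": [True, False, True],
})

# Dataset B: a registration extract holding the same quasi-identifiers
# alongside direct identifiers.
registry = pd.DataFrame({
    "camp": ["Camp 1", "Camp 1", "Camp 2"],
    "birth_year": [1987, 1992, 1987],
    "household_size": [4, 2, 5],
    "name": ["Person A", "Person B", "Person C"],
    "shelter": ["A-12", "B-03", "C-07"],
})

# The join re-attaches identities to the 'anonymous' log.
reidentified = aid_log.merge(
    registry, on=["camp", "birth_year", "household_size"], how="inner"
)
print(reidentified[["name", "shelter", "ration_collected"]])
```

The more datasets an actor centralises, the more combinations of quasi-identifiers become unique, which is why aggregation itself, and not only the collection of names, is treated as a risk in the OCHA guidance cited above.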
High-risk systems
The examples listed in the section The United Nations’ Artificial Intelligence Ecosystem highlight several issues that suggest the need for regulation. First, these uses of AI fall under the European Commission proposal's list of high-risk activities, set out in Annex III. Second, the organisation is partnering with private companies that would, under normal circumstances, be subject to the upcoming AI regulation. Furthermore, under the new regulatory framework, certain companies, such as Palantir, would not have been able to operate as they have, and would have been forced to be much more considerate of human rights and privacy.
In the European Commission proposal, developers of high-risk systems must go through an authorisation process akin to the approval process for a new pharmaceutical drug. This process, explained in Articles 8–16, involves several requirements: a risk management system; data quality and governance; technical documentation and record keeping; transparency and the provision of information to users; human oversight; accuracy, robustness and cybersecurity of the system; and a quality management system. The proposal also establishes a process by which non-conformity can be evaluated and corrective action (or sanction) applied.
The high-risk systems in question involve several uses, including biometric identification and categorisation of natural persons, evaluation of the eligibility of persons for public assistance benefits and services, and the dispatching of emergency first-response services,3 all of which, as we have seen above, are current uses of AI by the United Nations. Examples include the biometric systems used by UNHCR and WFP, as well as the work done by Palantir to optimise the distribution of humanitarian aid. Although there are compelling reasons to use AI in a humanitarian context, high-risk systems such as these would require additional oversight, given their potential to adversely impact vulnerable populations. Kaurin (2019), for example, details the adverse effects on refugees of errors in biometric identification, a technology shown to be less accurate for women and people of colour (Buolamwini and Gebru, 2018).
The European Commission also provides two additional categories of AI systems – prohibited and low risk. Prohibited technologies include the use of subliminal techniques that might influence people's behaviour beyond their consciousness, the use of AI in social scoring, and the use of real-time, remote biometric identification of persons in public settings. According to the European Commission, these technologies present ‘unacceptable risks’ and are therefore forbidden completely. Any technology not included in the prohibited or high-risk categories is considered low risk. These technologies are not required to go through any formal certification process before going to market.
Unlike that of a national or regional body, the United Nations’ internal regulation would not need to cover all possible uses of AI, but rather would focus on those likely to affect its own activities, such as the ones described above. Prohibited systems therefore appear less concerning than the high-risk ones, which apply to several of its activities.
Ethical frameworks and best practices
As we have seen, like most AI initiatives developed globally in recent years, this work has happened largely without regulatory oversight. However, there have been many attempts within the organisation to set up ethical modes of operation, as well as to influence AI ethics globally.
For example, OCHA's Peer Review Framework sets out a method for overseeing the technical development and implementation of AI models (OCHA, 2021b). Its objective is to enable models developed by humanitarian organisations to be reviewed by peers in the same sector, thereby providing a layer of validation that might not otherwise exist. While there is no legal obligation for developers working with vulnerable populations to use it, the framework does attempt to increase the accountability and transparency of models, in the hope of attenuating their risks and building trust.
The International Telecommunication Union (ITU) has also sought to encourage ethical uses of AI through its yearly conference, AI for Good, which is specifically oriented towards uses of AI that promote the Sustainable Development Goals. Several of the AI for Good talks listed for 2021 address the ethics of AI use in specific cases, such as democratic participation or self-driving cars. However, the approach appears more focused on promoting implementations that drive sustainable development than on mitigating their possible risks (ITU, 2021).
Finally, the United Nations Educational, Scientific and Cultural Organization (UNESCO) has done considerable work in AI ethics, notably by convening intergovernmental meetings since 2019 to articulate a recommendation on the ethics of AI. The 2020 draft of this document focuses on values such as transparency, gender equity, human rights and fairness, among others. While different from the European Commission's risk-based approach, it is certainly aligned with other United Nations documents, such as the Universal Declaration of Human Rights and the Convention on the Rights of the Child. Like the various United Nations conventions, this document is aimed at Member States, and could inform a future United Nations convention on AI (UNESCO, 2020). Going a step further than the recommendation, such a convention would be voluntarily ratified by Member States, who would then commit to following its principles in their own jurisdictions.
Even without a convention, the United Nations would certainly have an incentive to apply internally the recommendations it makes to Member States. As such, the UNESCO work, along with the efforts of the ITU, OCHA and other agencies, can be seen as building blocks towards a regulatory framework. However, without legal backing, these initiatives are not sufficient to ensure the safety of AI technologies as they are currently being used.
Regulation to build trust in implementations
In addition, the lack of AI regulation at the United Nations can be considered a barrier for agencies seeking to adopt more effective and novel technologies. While some systems, such as the OCHA-Bucky model, which predicted COVID-19 cases in Afghanistan, the Central African Republic, the Democratic Republic of the Congo, South Sudan and Sudan, have been deployed for humanitarian use (OCHA, 2021d), others appear not to have been integrated into actual decision-making systems. An example is the Jetson tool, developed by UNHCR to predict the arrival of internally displaced persons at refugee camps in Somalia (UNHCR, 2021); according to the project's public-facing website, the tool has not been updated since 2019. Several of the numerous proof-of-concept projects developed by UN Global Pulse have also not been deployed into production (Global Pulse Lab, 2021). One of the most important difficulties in deploying these solutions relates to trust, both in terms of internal adoption of the tools by decision makers and acceptance by the target populations.
Trust in artificial intelligence systems is notoriously difficult to obtain, particularly in the highly political work in which the United Nations engages, which impacts very vulnerable populations. Some of the attempts at promoting transparency in AI, such as the OCHA Peer Review Framework, can be seen as a way to build trust. In fact, there have been similar initiatives outside the United Nations, such as Datasheets for Datasets (Gebru et al., 2018) and Model Cards for Model Reporting (Mitchell et al., 2019), as well as IBM's AI FactSheets, which provide tools for technical developers to explain their technologies and thereby increase public trust in them. As explained by Aicardi et al. (2021), transparency is the most prevalent principle in AI ethics. In the EU Guidelines on AI, which served to inform the 2021 regulation, transparency is listed as one of the ‘seven requirements in the realisation of a trustworthy AI’. Rossi (2019) describes lack of trust as one of the key obstacles to the adoption of AI systems, regardless of their potential benefit.
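As a rough indication of what these transparency artefacts contain, the sketch below outlines a minimal model card, loosely inspired by the categories in Mitchell et al. (2019). The structure, field names and values are illustrative assumptions, not the published schema of any of the initiatives named above.

```python
# A minimal, illustrative model card, loosely inspired by Mitchell et al.
# (2019). All field names and values here are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list      # uses the developers explicitly disclaim
    training_data: str           # provenance of the data the model saw
    evaluation_results: str      # how and on what the model was tested
    known_limitations: list      # documented failure modes
    ethical_considerations: str

card = ModelCard(
    model_name="Hypothetical refugee-arrival forecaster",
    intended_use="Aggregate planning of shelter and food stocks per camp.",
    out_of_scope_uses=["Decisions about individual asylum claims"],
    training_data="Monthly regional arrival counts, 2015-2019.",
    evaluation_results="Mean absolute error reported on held-out 2020 data.",
    known_limitations=["Accuracy degrades during sudden-onset crises"],
    ethical_considerations="Outputs concern aggregates, never individuals.",
)
print(f"{card.model_name}: {card.intended_use}")
```

Publishing such a document alongside a deployed model gives decision makers and affected communities something concrete to scrutinise, which is precisely the trust-building function these initiatives aim to serve.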
The onus has largely been on the technical developers of these tools, such as data scientists, computer scientists and engineers, to establish their credibility. While the United Nations has invested, and continues to invest, large amounts of money in research labs and private partnerships, unless it is able to build internal and external trust in the resulting tools, it will continue to face important barriers to their implementation.
However, a regulatory framework like the one proposed by the European Commission, especially one with an authorisation procedure like that described for high-risk activities, would take the pressure off technology developers in the humanitarian sector to individually justify their activities to decision makers. Instead, agencies or research labs that wanted to develop an AI solution would work towards authorisation, likely quelling many internal doubts as to the effectiveness, safety and accuracy of their systems.
Given the powerful nature of AI, the risks of research and development initiatives and the threat of unregulated public–private partnerships, an internal, enforceable United Nations regulation on AI is highly recommended. A UN-wide AI regulation would not only protect the organisation's mission from the risks posed by these technologies, but would also build trust in the applications that genuinely support its work. This is an opportunity to concretise the organisation's efforts in AI ethics and best practices, as well as to further demonstrate its leadership on an issue of global importance.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The author received no financial support for the research, authorship and/or publication of this article.
