Abstract
This article analyses the potential benefits and drawbacks of artificial intelligence (AI). It argues that the EU should become a leading force in AI development. Such leadership is a goal that can capture the public imagination and mobilise a variety of actors, and the EU should pursue it through mission-based innovations that use this technological leadership to solve the most pressing societal problems of our time whilst avoiding potential dangers and risks. This leadership could be achieved either by adapting the EU’s available instruments to focus on AI development or by designing new ones. Whether seeking a visionary future for AI or addressing concerns about it, progress should always be driven with a human-centred perspective in mind, that is, one that seeks to augment human intelligence and capacity, not to supersede it.
Introduction
Artificial Intelligence (AI) is currently being developed by major companies and pursued by governments worldwide. The origins of the field can be traced back to the 1950s, when British computer scientist Alan Turing posed the question ‘Can machines think?’ He saw that answering this question ‘should begin with definitions of the meaning of the terms “machine” and “think”’ (Turing 1950, 433). Initially, AI was described as ‘the science and engineering of making intelligent machines’ (McCarthy 2007, 2). More recently, AI has come to refer to a machine with the ability to solve problems that humans usually address with our natural intelligence (Andersen 2002).
This article analyses the opportunities and threats created by AI and provides a human-centred perspective on the topic. It argues that AI is a technology with huge potential—both for good and bad. It all depends on how we use it. The article also gives an overview of the EU’s current position on AI, and provides policy recommendations to put the EU in the driving seat of AI development globally by focusing on augmenting human intelligence with an ethical emphasis.
The nature of AI
There are three main categories of AI. The first is artificial narrow intelligence (Kurzweil 2005). This refers to the specialisation of AI machines: they can perform a specific task using machine-learning and deep-learning tools. This type of technology has been able to beat chess and Go masters, and even win Jeopardy!—an American game show (Hassabis 2016). Though exciting, artificial narrow intelligence systems have been around for a long time and are used in many systems today, such as Google’s search engine. This serves as an example of the limitations of artificial narrow intelligence. It can quickly answer factual questions that humans would find difficult: for instance, it knows the depth of the Atlantic Ocean. But it cannot answer the question of whether or not a pig can ride a bicycle, which a child could answer easily. As Harvard cognitive scientist Steven Pinker (2007, 191) puts it: ‘The main lesson of thirty-five years of AI research is that the hard problems are easy, and the easy problems are hard.’
The second type of AI is artificial general intelligence, which refers to a human-level AI machine: one that would be as smart as a human across the board, able to perform any intellectual task as we do, with the capacity to understand and reason about its environment and to apply intelligence to any problem rather than to one specific problem (Pennachin and Goertzel 2007; Goertzel and Pennachin 2007; Kaiser 2007, 298). An example of this type of AI can be seen in science-fiction films, in robots such as C-3PO and R2-D2 from Star Wars.
The third type of AI is artificial superintelligence, which refers to a machine that is smarter than the smartest ‘Einstein’ ‘in practically every field, including scientific creativity, general wisdom and social skills’ (Bostrom 2006, 11). Again, we can recognise this type of AI machine from science fiction, in robots such as Eva from Ex Machina and the Synths from the TV series Humans.
It is important to stress that artificial general intelligence and artificial superintelligence have proved difficult to define, not least because intelligence itself is difficult to define. In developmental terms, both remain hypothetical.
The promise and threat of AI
The biggest threat from AI is the potential for its weaponisation. Much AI technology, whether already in place or yet to be developed, has the potential for dual use—that is, both commercial and military. AI could shake up armed conflict as significantly as nuclear weapons did. The US, Russia and China see AI as the key technology underpinning national power in the future and, thus, as vital to their national security (Allen and Chan 2017). Russian President Vladimir Putin has also noted that AI is the future not only for Russia, but for all humankind. According to him, ‘[w]hoever becomes the leader in this sphere will become the ruler of the world’ (Allen and Husain 2017). The Russian government’s Military Industrial Committee has set a target of making 30% of military equipment robotic by 2025 (Gady 2015). China intends to lead and be the global innovation centre for AI by 2030. The Chinese State Council’s 2017 strategy includes pledges to invest in research and development to apply AI in national defence, including military command and control, equipment and drills (China, State Council 2017).
Despite this threat, this author has a cautiously optimistic view of AI. As European Commissioner for Research, Science and Innovation Carlos Moedas has noted, ‘artificial intelligence is not a threat, how we choose to use it is’ (Moedas 2017). Therefore, we should be fostering the development of AI in areas that would, inter alia, undo the damage we have done to the planet through industrialisation, open the road to ending poverty and help eradicate disease. These global challenges should give rise to AI-driven mission-based innovations and should bring together citizens, scientists and engineers to address them (European Commission 2017a). We need to pursue a human-centred AI approach that augments our intelligence in a computer–human symbiosis—a sort of collaborative approach, where each partner brings its own superior skills to the partnership—instead of seeking AI that will supersede humans and be autonomous (Isaacson 2014). Such a development should therefore be seen as an evolution of human tools similar to our use of rocks, or the invention of the wheel, the computer or the smartphone.
Today, AI permeates economies and societies. How can we ensure that it benefits society as a whole? It is important to understand that AI, robots and automation are not interchangeable concepts. Machines have long been present in every factory, doing highly repetitive and physically demanding tasks more efficiently and productively. This is ‘automation’, of which the automotive industry is a good example. By contrast, AI systems ‘understand’ data rather than just collating it (Evans 2017). What automation and AI-powered machines are replacing are activities or tasks within jobs, not jobs themselves. The focus on occupations in this debate is misleading. Very few occupations will be automated in their entirety in the near or medium term. Rather, certain activities are more likely to be automated, requiring entire business processes to be transformed and the jobs performed by people to be redefined, much like the bank clerk’s job was redefined with the advent of automated teller machines (Chui et al. 2015).
AI will transform the nature of work over the next decade, which is reviving concerns about mass unemployment, even though, in aggregate, the use of machines usually adds jobs to the economy (The Economist 2014). As higher skills and capital ownership are relatively better rewarded, automation might challenge the ‘fair’ redistribution of wealth, potentially calling for more state intervention (Amiot 2016).
The EU and AI
The EU has been working on the building blocks of AI, although without an explicit AI focus. The initiatives and legislative packages of the Digital Single Market Strategy, presented in May 2015, offer an example of this: they include the European Data Economy, with its framework for the free flow of non-personal data; the updating of ePrivacy rules to bring them into line with the General Data Protection Regulation; the enhancement of cybersecurity; the Digitising European Industry initiative; and, above all, the European Cloud Initiative, which will offer ‘Europe’s 1.7 million researchers and 70 million science and technology professionals a virtual environment with super-computing power to store, share and re-use their data across disciplines and borders’ (European Commission 2016).
In 2016 the EU’s global strategy for foreign and security policy identified the need for global rules on AI (European External Action Service 2016, 43). Subsequent Council meetings in 2017 reinforced the need to pay more attention to AI (Council of the EU 2017a; Council of the EU 2017b; Council of the EU 2017c). The European Commission, in the Digital Single Market’s mid-term review report, considers that there is a need to adapt current legislation, and pledges that the EU will be ‘in a leading position in the development of artificial intelligence technologies, platforms, and applications’ (European Commission 2017b). It also puts forward an overall investment plan of €2.8 billion (for the period 2014–20) for interdisciplinary research into ‘intelligent robots’ through a Horizon 2020 public–private partnership named SPARC.
Whether the conversation is about intelligent machines or robots, such technologies have raised ethical concerns. In January 2017, the European Parliament’s Committee on Legal Affairs adopted a non-binding text containing recommendations for the Commission on Civil Law Rules on Robotics (the Delvaux report), which includes calls for the Commission to ‘consider the designation of a European Agency for Robotics and Artificial Intelligence’ and to ‘explore, analyse and consider the implications’ of ‘creating a specific legal status for robots’ (European Parliament 2017).
The European Economic and Social Committee (EESC) takes another approach and calls on the EU to take ‘a human-in-command approach to AI’, requesting that ‘machines remain machines and people retain control over these machines at all times’ (EESC 2017, 3).
In October 2017, the European Council identified a ‘sense of urgency to address emerging trends’—such as AI—and invited the Commission to ‘put forward a European approach to artificial intelligence by early 2018’ (European Council 2017). To address AI ethical concerns, Commissioner Moedas set up a group of experts that is expected to deliver a report on AI ethics in early 2018.
Policy recommendations
Making the EU a front-runner in AI development should be set as ‘the’ goal in the Commission’s upcoming AI strategy, scheduled to be presented in April 2018. This is not an over-ambitious goal: the EU offers the perfect environment in which to develop this technology. AI is largely based on fundamental research, a field in which the EU excels. Creativity and innovation come from collaboration among multidisciplinary and diverse teams, and diversity is a defining characteristic of the EU. The EU also has a strong foothold in traditional industries, those that have not yet been digitalised and therefore offer huge scope for digitalisation and stand to profit from the productivity improvements brought about by AI. But to succeed, the EU must realign its AI-related initiatives and focus them on mission-based innovations, that is, large-scale projects to develop human-centred AI that augments our intelligence in a computer–human symbiosis to solve the societal problems of our time.
To achieve this goal, the EU should do six things. First, it should accelerate the development of the European Cloud Initiative, the flagship initiative to develop quantum technology (due to begin in 2018) and a European high-performance computing, data storage and network infrastructure (due in 2020), and put these to use in an EU-wide AI programme.
Second, the regulation on a framework for the free flow of non-personal data in the EU should be swiftly agreed. Data might be the ‘new oil’ for modern businesses, but it is also the fuel that powers AI systems. Thus, non-personal data should be able to move freely and easily across country borders.
Third, the European Innovation Council should be supported by, or even merged with, the Executive Agency for Small and Medium-Sized Enterprises. This should increase autonomy, cut red tape and provide a larger budget for high-risk/high-payoff research, so that it can foster disruptive, market-creating innovation and drive breakthroughs. The first EU-wide joint programme should therefore focus on human-centred AI technology aimed at producing the mission-based innovations mentioned above.
Fourth, the EU should create a European Agency for Robotics and Artificial Intelligence, as suggested in the report by the European Parliament’s Committee on Legal Affairs, to provide policymakers with technical, ethical and regulatory expertise. This agency would oversee AI developments as they happen by identifying and approaching them one by one, deciding which need binding rules and then making the relevant proposals to policymakers. It should also explore a new regulatory framework for controlling algorithms and software, and the possibility of having independent auditors or regulators investigate AI-based decisions.
Fifth, policymakers both at the EU and national levels should regularly review whether the risks of using AI fall within the current regulatory regime, and if the existing rules adequately address those risks. For instance, the General Data Protection Regulation, the Security of Network and Information Systems Directive, the upcoming update on the ePrivacy Directive and the new regulation on the free flow of non-personal data will address some of the challenges of AI by upholding fundamental human rights in this field. The Defective Products Liability Directive and the Machinery Directive should also be reviewed to incorporate AI developments as they happen.
Sixth, EU member states, with the support of the Commission, should step up the pace of retraining workers and reforming education systems to meet the needs of the labour market. By fostering skills such as creativity, curiosity, communication, team building and critical thinking, citizens will be better placed to keep pace with a jobs market in a state of permanent change. Funding should be targeted at universities and research laboratories that foster the development of AI talent.
Conclusion
Will the EU take the necessary steps to treat AI development as key to its future, or will it continue playing catch-up, as it has done so far in the digital arena? Any such steps should be taken from a human-centred perspective, seeking mission-based innovations that will solve the societal problems of our time. Fears regarding AI are understandable, but the issues at the centre of such concerns do not stem from machines but from humanity. As for humans being replaced by AI-powered robots, what this author has learned is that as long as we remain creative creatures able to think differently, such a prospect is a long way off. Creativity encompasses intentions, emotions, aesthetic judgements, values, personal consciousness and moral sense—things that an algorithm, the basis of an AI system, cannot master. But, as Stephen Hawking puts it, ‘success in creating AI would be the biggest event in human history. It might also be the last, unless we learn how to avoid the risks’ (Cellan-Jones 2014). In the end, it all depends on humans, not on machines.