Abstract
Emerging technologies are not the danger. Failure of human imagination, optimism, energy, and creativity is the danger.
Why the future doesn’t need us: Our most powerful 21st-century technologies—robotics, genetic engineering, and nanotech—are threatening to make humans an endangered species. —Bill Joy, co-founder and at the time chief scientist, Sun Microsystems, 2000
Although it was not clear at the time, Bill Joy’s article warning of the dangers of emerging technologies was to spawn a veritable “dystopia industry.” More recent contributions have tended to focus on artificial intelligence, or AI; electric car and space technology entrepreneur Elon Musk has warned that AI is “summoning the demon” (Mack, 2015), while physicist Stephen Hawking has argued that “the development of full artificial intelligence could spell the end of the human race” (Cellan-Jones, 2014). The Future of Life Institute (2015) recently released an open letter signed by many scientific and research notables urging a ban on “offensive autonomous weapons beyond meaningful human control.” Meanwhile, the UN holds conferences and European activists mount campaigns against what they characterize as “killer robots” (see, e.g., Human Rights Watch, 2012). Headlines reinforce a sense of existential crisis; in the military and security domain, cyber conflict runs rampant, with hackers accessing millions of US personnel records, including sensitive security clearance documents. Technologies such as uncrewed aerial vehicles, commonly referred to as “drones,” are highly contentious in both civil and conflict environments, for many different reasons. A recent US Army Research Laboratory report foresees genetically and technologically enhanced soldiers networked with their battlespace robotic partners and remarks that “the presence of super humans on the battlefield in the 2050 timeframe is highly likely because the various components needed to enable this development already exist and are undergoing rapid evolution” (Kott et al., 2015: 19).
How is one to think about this outpouring of analysis, hypothesis, events, and existential angst? A useful first step is to realize that there are three levels to such discussions of technology. Level I is the instrumental level: a gun shoots a bullet and kills someone; a watch is used to tell time; a vaccine is used to prime an individual’s immune system to protect against a disease. Level II is the systems level: an uncrewed aerial vehicle conducting surveillance is part of a battlefield intelligence system; watches function in a globally standardized time system that was only institutionalized in the United States by an act of Congress in 1918; vaccinations are part of a public health system. Level III, the effect of a technology on individual psychology, society and culture, economic patterns, geopolitical status, and other Earth systems, is unpredictable and uncertain. One of the major drivers for standardized time, for example, was railroad technology, which was certainly not foreseen by those who first began developing steam locomotives. It is important to remember, however, that even if the specifics of Level III impacts cannot be predicted a priori, they will occur.
Level I effects are usually not difficult to figure out: They are the reasons that a technology is commercialized. For example, the Level I effect of a bomb-dismantling robot is clear: It helps save the lives of soldiers who would otherwise have to be doing that job. Level II effects can be more complex and may point in different directions than first-order effects. A robotic hummingbird surveillance device may have entirely beneficial effects if used in counterinsurgency, because it can improve targeting and thus reduce collateral damage (Level I effect). But if the same technology becomes widely available to political parties and divorce lawyers, it could have very negative effects on privacy and public discourse (a Level II effect). And, hypothetically, robotic bugs and hummingbirds, combined with data-mining software and massive databases, could become important tools of techno-totalitarian elites, a possible, but hypothetical, Level III effect.
This distinction among Level I, Level II, and Level III is useful because much of the confusion regarding emerging technologies comes from conflating relatively predictable Level I aspects of an emerging technology with highly unpredictable Level III hypotheticals, and treating them as equally valid insights into future technological trajectories. Not so. A concern about the use of drones to attack human targets in countries that are not participants in a conflict is qualitatively different than polemics against “killer robots,” and while conflating the two for purposes of argumentation may be effective, it is profoundly misleading. We have historical and operational data that enable us to evaluate the former; we don’t even know what a “killer robot” really is, except as an evocative term, and virtually no idea what would happen if such technologies became widespread in the real world. An analogous analytical mistake occurs when a particular use of a technology is treated as if it were separable from the technology itself. A medical advance in computer-brain interfaces in prosthetics, for example, is the same technology that might be used in the near future to directly connect a soldier to a remote weapons system. Any effort to ban “military AI” will fail because “military AI” is not a relevant technology category; rather, it is the advance of the underlying technology as a whole that ensures at some point that AI will be integrated into military devices. (Notably and presumably unintentionally, the proselytizing against “military AI” fails to admit that such a policy implicitly favors powers, such as Russia and ISIS, that are operating under doctrines of asymmetric warfare that privilege non-traditional tactics, technologies, and conflict.)
It is precisely this confusion that one notes in the language used in many of the comments on and critiques of emerging technologies, including some of the examples given above. It is not so much a question of whether these popular dystopian visions are accurate predictions: They almost certainly are not, because the ability to predict the future paths and implications of complex and powerful technology systems is simply nonexistent. Level I assertions of knowledge are being extended to inherently unpredictable Level III systems without understanding that an important conceptual shark has been jumped. But it is useful to explore the assumptions underneath the current rage for dystopian visions of emerging technologies, assumptions that are not as implausible as some have suggested.
To reduce such confusion, let me be clear from the beginning. Because much recent commentary regarding emerging technologies is generic and apocalyptic, that is what this essay will focus on. In other words, I will not concern myself with whether a particular weapon system, or smart phone app, or cyber worm, or AI tool is good or bad or competitively successful, a Level I question. Nor will I address the foreseeable Level II effects, an analysis which, as in the case of Level I, would focus on particular technological artifacts or applications and their systemic effects. Rather, since apocalyptic tends to be Level III stuff, that’s where we’ll go.
Emerging technologies as an Earth system
The first question to ask about emerging technologies is deceptively simple: Is today really that different? Is there something about today’s emerging technologies—which for purposes of this analysis include nanotechnology, biotechnology, information and communication technology (ICT), robotics, applied cognitive science, humtech (design and engineering of the human as a foundational emerging technology), and their various combinations and permutations—that is qualitatively different from those that characterized other eras of technological change? If there isn’t, much of today’s dramatic language can be understood as simply a reflection of the emphasis that all humans give to the particular era and landscape and culture within which they exist. Each generation tends to overemphasize the degree of change that it experiences, partly because of the immediacy of the stresses to which it is exposed, and partly because it is easy to underestimate how difficult and unpredictable life was in the past, since when one looks back at history it seems to flow logically and necessarily. Indeed, apocalyptic fears have been common when many major technology systems first emerged because of this immediacy, even as subsequent generations grew to view the technology as banal, even boring. In the early days of railroads, for example, there was a widespread belief that traveling at the heretofore unimaginable speed of 25 miles per hour would kill the passengers, in part because such technology was against the obvious will of God. As an Ohio school board put it, “If God had designed that His intelligent creatures should travel at the frightful speed of 15 miles an hour by steam, He would have foretold it through His holy prophets. It is a device of Satan to lead immortal souls down to Hell.” (Nye, 1994: 57)
In this case, however, a strong argument can be made that emerging technologies today are different not just in degree, but in kind, from those of the past. To begin with, the scope, scale, and speed of technological change are unprecedented. Where previous waves of technological change have involved a few core technologies, such as railroads or electrification, today technological evolution is occurring across the entire technological frontier. Partially as a result of such technologies rippling across a population of seven billion people, we now live on a terraformed planet, the first world we know of anywhere that has been shaped by the deliberate activities of a single species. That is not a discontinuous process, but it is qualitatively new.
Moreover, as the discussion of the engineered warrior of 2050 suggests, the human itself has become a design space. It is certainly true that people have always changed themselves in many ways, from consuming intoxicants of all kinds, to medicine, to education, but there is little question that the direct interventions that are now possible, combined with accelerating advances in fields such as neuroscience, genetics and molecular biology, and prosthetics, make virtually all aspects of the human, including cognitive and psychological domains, potentially subject to design. That the designer is not just engineering external systems, but him- or herself, adds a degree of reflexivity, nonlinearity, and complexity that makes simple predictions about particular technologies tangential and irrelevant at best.
It is worth emphasizing in passing that the argument that humans are at risk from emerging technologies is in an important sense circular. Humans are increasingly both designer and designed; they are, in other words, increasingly an emerging technology in their own right. People are many things, but they are now, and certainly will be in the future, a design project. Thus, in a meaningful way the argument that people are at risk from emerging technologies becomes the argument that emerging technologies are at risk from emerging technologies, which makes little sense, and isn’t very helpful analytically, or in guiding policy or practice.
Additionally, technological evolution is accelerating, which has significant implications. Past rates of technological change were slow enough that psychological, social, and institutional adjustments were possible, but today technology changes so rapidly that technology systems decouple from governance mechanisms of all kinds. All these factors, operating together, synergistically increase the impact, speed, and depth of change.
Any technology potent enough to be interesting will inevitably destabilize existing institutions, power relationships, social structures, reigning economic and technological systems, and cultural assumptions. Previous waves of technological change—from steam and coal, to electricity, to rail and automotive technologies—have destabilized and restructured human and natural systems at all scales, interacting unpredictably with contemporary natural, human, and built systems. Railroads, for example, opened up continental interiors, creating the underlying transportation infrastructure necessary to support industrialized agriculture, which, coupled to advances in production of artificial fertilizers and innovation in farm machinery, created the potential for dramatic increases in global human population. Railroads also dramatically changed ecologies and landscapes; the American Midwest is an agricultural breadbasket, not a large swamp, because railroads provided the link between that farming region and the demand of the East Coast and, via steamship, Europe. The Earth’s atmosphere has been in part restructured by development of internal combustion engine technology coupled to a psychologically potent automotive technology, which is in turn based on a massive fossil fuel infrastructure. Proposals to address climate change through so-called “geoengineering technologies,” from designing the atmosphere to reflect incoming sunlight to deploying devices that capture carbon dioxide in the atmosphere, are explicitly intended to engineer major natural systems and cycles. In short, major new technologies are not just about artifacts; rather, they represent an unpredictable, sometimes apparently discontinuous, shift in the structure of integrated Earth systems. Moreover, these shifts are not predictable a priori; railroads, for example, required new systems of time, of communication, and, more subtly, of finance and of corporate management.
Development of a mass consumption economy, with washing machines from new merchandising giants and cars from Detroit, required innovation in the development of consumer credit, and massive coupled innovation in everything from road systems to supply-chain management. Widespread consumer credit, in turn, generated an ability to consume, and a concomitant quality of life, that was beyond imagining for those generations of humans that lived prior to the 20th century.
It is thus highly likely that the first implicit assumption of the dystopian perspective is correct: Things are indeed different today, and the difference is fundamental and qualitative, not simply one of degree. Emerging technologies are making everything from individual molecules, to the human, to the planet itself, design spaces. Moreover, it is also likely that technological evolution, and all the concomitant changes in coupled institutional, social, economic, and cultural systems, will be more challenging and complex than anything humans have yet experienced. The remaining two issues, then, are: First, what can we do about it; and second, is this the end of humanity?
What can we do about it?
Precisely because new technologies are disruptive, they inevitably call forth opposition, both by conservative social forces and by threatened economic interests. Historical examples abound. With railroad technology, for example, conservative states such as the Austro-Hungarian Empire and Russia resisted rapid deployment, in part because it was feared that railroads might create social unrest in the still somewhat feudal and highly stratified cultures that characterized such countries; the French held back because of concerns it would destroy rural culture. The predictable result was that modernizing states that realized the commercial and military potential of railroad technology, such as Prussia, rapidly overtook the laggards in building rail infrastructure, with an eventual shift in geopolitical stature. In the United States, railroads were bitterly opposed by river transportation interests; in fact, Abraham Lincoln, when still a practicing lawyer, argued and won the seminal case for the Rock Island Railroad. (River shippers at the time were arguing that any railroad bridge over a river was an unlawful obstruction of commerce; had they been successful, railroads would have been limited to operating between rivers and streams, but not crossing them.) A more recent example is provided by the thousands of people sued by the Recording Industry Association of America in its vain effort to defend a technologically obsolete business model for the distribution of music. There are plenty of reasons, in other words, why emerging technologies might be regarded as dangerous and disruptive, and thus worth stifling.
History, however, indicates that while local opposition can be successful, it will not halt the evolution of technology. Consider, for example, the Japanese attempt to limit gunpowder technology to preserve traditional Samurai culture; successful in the short term, it left Japan open to subjugation by Western naval forces with gunpowder technology. Similarly, environmentalists and governments in Europe have aggressively opposed genetic engineering (GMOs, or “genetically modified organisms”) in agriculture. Outside Europe, however, GMO technology has been one of the most rapidly adopted agricultural technologies in history. Efforts to regulate the proliferation of nuclear weapon technology have been somewhat successful, but it appears unrealistic to assume that the technology can be uninvented.
Especially given today’s globalized culture, and the strategic and military advantages that emerging technologies can provide, it is highly unlikely that meaningful constraints on technological evolution, whether derived from cultural, competitive, or religious foundations, will be successful. That is particularly true as all players in the global Great Game understand that leadership in science and technology domains is a necessary, if not sufficient, prerequisite for dominance. Moreover, given the complexity of many emerging technology systems, especially as they co-evolve with other natural, built, and human systems, it is unfortunately also likely that projecting their effects and evolutionary paths before they are actually adopted and become embedded in their social and cultural context is not just hard, but for all practical purposes impossible. One can, and should, generate scenarios. But exhortations that purport to elevate hypotheticals to predictions and implications of certainty about future states are misplaced.
In short, there is no certainty, and the genie is well and truly out of the bottle. That does not mean, however, that we cannot modulate future technological evolution; it does mean that the way we think about it today may be too simple, and our institutions too slow and maladaptive, to be up to the task.
Beyond simplistic dystopianism
This analysis suggests that, as dystopians might argue, emerging technologies are indeed potent, and that, especially as the human is becoming an active design space, if AI doesn’t destroy humanity, something will. But this is a grossly incomplete perspective.
Humanity, as it appears at any particular time, is always doomed. Foragers and hunter-gatherers were doomed, as were the serfs of medieval Europe with their small plots and lives lived within a radius of a few miles of where they were born. And so, in our turn, are we. Doom is, in other words, evolution, and it is unlikely that we will stop it—or, really, that we should want to. In fact, the images that we cling to, personally and institutionally and culturally, are already obsolete. The ethics and values that we insist we will impose on the future are not only historically and culturally contingent, but already obsolete as well. We want the physical and cultural landscape we live in now to propagate into tomorrow, because we all unconsciously privilege the present, but that is not how complex systems work. They evolve, and indeed our world is evolving at a remarkable and accelerating clip.
The fallacy of the dystopians, then, is not in their analysis of the power of technology, or the accelerating and destabilizing rates of change. The fallacy is in equating evolution with dystopia, and, without admitting it, privileging the present over the promise and inevitability of the future. What is at risk is the limited mental model of “human” that all of us carry with us, not “humans” as an ongoing process. This is actually a common category mistake in modern discourse: Sustainability advocates and environmental activists often claim that “the planet is at risk,” but of course it is not. The planet is a large mass of rock and a film of various carbon compounds, and that is not at risk at all. What is at risk is a particular mental model of what the world should look like, a constructed snapshot. That does not mean that there aren’t many environmental issues that require attention; of course there are. But, as in the case of the emerging technology discourse, it does mean that existential catastrophe language is not only invalid, but can actually prevent seeking constructive adaptations to accelerating change.
Our only recourse is neither technological fatalism nor ethical relativism. It is true that we have not yet appreciated, much less begun to respond to, the challenge of a future that will indeed be more complex and difficult than anything we have experienced as a species. Nonetheless, we can already identify several important principles. For example, we need to stop thinking of “problems” with “solutions,” and think more in terms of “conditions” that will require long-term, adaptive management. Challenges such as ISIS and climate change will not be solved, but they can and must be managed in light of other relevant goals. In this, the experience with nuclear weapons is instructive: They are not a problem that can be unmade, but they are a condition that can be, and has so far been, relatively successfully managed.
We also need to focus on creating option spaces—portfolios of social, institutional, and technological choices that can be adaptively and flexibly deployed in complex environments. Similarly, we need to play with scenarios: If dystopian pronouncements are instead taken as scenarios—“What would you do if…?”—they are far more useful and informative than suggestions of doom.
Socially and institutionally, we need to become more agile and adaptive. This is uncomfortable for many, because it implies a degree of contingency and uncertainty, but that is precisely why such skills are necessary. The rate of technological change is unforgiving and has already decoupled to a large extent from traditional governance mechanisms. So we need to develop new ones.
Individually, we need to become far more humble about our ability to visualize and prognosticate on a complex and dynamic future. Cautionary scenarios and hypotheticals are welcome exercises in practicing to adjust to the unknowable that lies in front of us, but they are not appropriate foundations for policy or legal action in the present. Nightmares are seldom reality, and when bad things do happen they are seldom the ones we thought about. Fear and anger in the face of change are popular responses—witness the rise of far right and far left factions, and fundamentalisms of all stripes, around the world—but they are maladaptive, and those in responsible positions at least cannot afford such luxuries.
And perhaps most difficult of all, we need to learn to distrust that which we most fervently believe. In a world of complexity and change, perceiving reality is difficult enough without adding the blinders of ideology or simplistic worldviews.
The diagnosis of the dystopians is not far off: We do indeed have deep changes ahead of us. But evolution is what complex systems do, and wishing otherwise is unproductive. In other words, it isn’t so much that dystopians get the power of technology wrong; it’s that they make two mistakes. The first is the suggestion that we can know the costs and benefits of any emerging technology a priori. The second is a misunderstanding of the simple reality that, going forward, the “human” is in many ways becoming an emerging technology in its own right. What they get wrong is their assumption that humans are a fixed reality in a rapidly changing world, rather than a constantly evolving, complex, adaptive, inherently unpredictable, increasingly technological process. Emerging technologies are not the danger. As always, failure of human imagination, optimism, energy, and creativity is the danger.
And, given the over-simplicity of the current dialogue at both the utopian and dystopian poles, and the arrogance of assuming knowledge of future states that cannot possibly be known until they actually occur, the probability of a rational, ethical, and responsible embrace of the future is not high.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
