Abstract
Revelations about the Stuxnet computer worm raise the possibility of damaging cyberattacks on the US electrical power grid or other critical infrastructure. Several approaches might mitigate cyberthreats to critical infrastructure and information technology targets. Among them are efforts to defend the information systems that support important infrastructure, to deter enemies from cyberattacks on those systems, to reduce or eliminate the forces that might be used in cyberattacks, or to reach arms control agreements that limit or proscribe such cyberattacks. Although each of these approaches is unlikely to be successful in and of itself, each is worthy of further exploration. Taken together, they may yield benefits that make cyberspace more secure.
“The modern thief can steal more with a computer than with a gun. Tomorrow’s terrorist may be able to do more damage with a keyboard than with a bomb.” These sentences are from the first paragraph of a 1991 National Research Council report, Computers at Risk. In the summer of 2010, the emergence of the Stuxnet worm demonstrated that these words were not hyperbole. Unlike most previous cyber intrusions, which exfiltrate computerized information, Stuxnet caused physical damage to centrifuges at the Iranian uranium enrichment plant at Natanz—damage that could just as well have been caused by a bomb. 1 Although it came as no surprise to any serious cybersecurity analyst that genuine attacks—attacks with physical effects—could be made via computer, Stuxnet was a very loud wake-up call to policy makers, political analysts, and the general public around the world. For many observers, Stuxnet raises the possibility that a cyberattack on the US electrical power grid or other critical infrastructure might be catastrophic, and the prospect that large regions of the United States might be without electricity, for example, for many months is frightening, to say the least.
Several approaches might help mitigate cyberthreats to critical infrastructure and information technology targets. In principle, the United States could try to defend the information technology systems that support important infrastructure, to deter enemies from cyberattacks on those systems, to reduce or eliminate the forces that might be used to conduct such cyberattacks, or to reach agreements that limit or proscribe cyberattacks of this sort. Although each of these approaches has problems and is highly unlikely to be successful in and of itself, each is also worthy of further exploration and pursuit; taken together, they may yield benefits that will make cyberspace more secure.
A primer on cyberconflict
Cyberconflict involves both offense and defense. Offensive tools and techniques allow a hostile party to do something undesirable; defensive tools and techniques seek to prevent a hostile party from doing so.
In the realm of technology, the offense requires three components:
Access—how the hostile party gets at the information technology of interest. Access may be remote (e.g., through the Internet or by penetrating the wireless network to which the equipment is connected). Alternatively, it may involve close physical proximity (e.g., spies acting or serving as operators, service technicians, or vendors).

Vulnerability—an aspect of the information technology that can be used to compromise it. Vulnerabilities may be introduced accidentally, through a design or implementation flaw (i.e., a “bug”), or intentionally (e.g., by a hostile programmer), opening the door for opportunistic use of the vulnerability by an adversary.

Payload—the mechanism for affecting the information technology after access has been used to take advantage of a vulnerability (e.g., once a software agent such as a virus has entered a computer, its payload can be programmed to reproduce and retransmit itself or to destroy or alter files). Payloads can be designed to do more than one thing or to act at different times, and if a communications channel is available, they can even be updated remotely.
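The three offensive components described above can be sketched as a simple data structure. This is a toy conceptual model, not an actual attack tool; all names and example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CyberOperation:
    """Toy model of an offensive cyber operation: the offense
    requires all three components to be present."""
    access: str         # how the target is reached (e.g., "internet", "insider")
    vulnerability: str  # the flaw exploited (e.g., "unpatched service")
    payload: str        # what happens after compromise (e.g., "exfiltrate files")

    def is_viable(self) -> bool:
        # An operation missing any one component cannot proceed.
        return all([self.access, self.vulnerability, self.payload])

op = CyberOperation(access="internet",
                    vulnerability="unpatched service",
                    payload="exfiltrate files")
print(op.is_viable())  # True

# Defense works by removing any one component: here, access is closed off.
blocked = CyberOperation(access="", vulnerability="unpatched service",
                         payload="exfiltrate files")
print(blocked.is_viable())  # False
```

The point of the model is that the defense, discussed next, need only negate one of the three components to defeat the operation.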
The defense focuses on one or more of these elements. Some defensive tools (e.g., firewalls) close off routes of access. Other defensive tools identify vulnerabilities (e.g., programming errors) so that they can be fixed before a hostile party can take advantage of them. Still other defensive tools prevent hostile parties from doing bad things with any given payload; for example, a confidential file may be encrypted so that, even if a copy is removed from the system, it is useless to the hostile party.
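As a minimal illustration of a defense that closes off routes of access, the sketch below checks connection attempts against an allowlist of permitted networks—the basic logic behind a simple firewall rule. The addresses and policy are hypothetical:

```python
from ipaddress import ip_address, ip_network

# Hypothetical policy: only these networks may reach the protected system.
ALLOWED_NETWORKS = [ip_network("10.0.0.0/8"), ip_network("192.168.1.0/24")]

def permit(source: str) -> bool:
    """Return True if a connection from `source` is allowed through."""
    addr = ip_address(source)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(permit("10.1.2.3"))     # True: inside the allowed 10.0.0.0/8 range
print(permit("203.0.113.7"))  # False: outside every allowed network
```

Real firewalls apply far richer rules (ports, protocols, connection state), but the principle is the same: an attacker whose route of access is filtered out never gets the chance to exploit a vulnerability.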
People interact with information technology, and they can serve as both offensive and defensive “tools” for cyberconflict. It is, in fact, often easier to trick, bribe, or blackmail an insider into aiding a hostile party than to craft a solely technological attack on information technology. Access to a system may be obtained by bribing a janitor to insert a USB flash drive into a computer. A vulnerability may be installed by blackmailing a programmer into writing defective code. In such cases, technical tools and human-based techniques are combined.
In the human realm, defensive techniques essentially involve inducing people not to behave in ways that compromise security. Education teaches people not to fall for scams that are intended to obtain login names and passwords. Audits of activity persuade people not to use information technology in ways that are suspicious. Rewards persuade people to report questionable or suspicious activity to the proper authorities. Paying people more makes them more resistant to bribery. Of course, such methods are successful only some of the time.
Offensive operations in cyberspace come in two general forms—cyberattack or cyber exploitation—and it is useful to differentiate between the two. Cyberattack refers to the use of deliberate activities to alter, disrupt, deceive, degrade, or destroy computer systems or networks used by an adversary or the information or programs resident in or transiting these systems or networks. The activities may also affect entities connected to these systems and networks. A cyberattack might be conducted to prevent authorized users from accessing a computer or information service (a denial-of-service attack), to destroy computer-controlled machinery (the alleged purpose of the Stuxnet cyberattack), or to destroy or alter critical data (timetables for the deployment of military logistics). Note that the direct effects of a cyberattack, like damage to a computer, may be less significant than the indirect effects, such as damage to a system connected to the computer.
Cyber exploitation refers to deliberate activities taken to penetrate computer systems or networks used by an adversary in order to obtain information resident on or transiting through these systems or networks. Cyber exploitations do not seek to disturb the normal functioning of a computer system or network from the user’s point of view—indeed, the best cyber exploitation is one that a user never notices. The information sought is generally information that the adversary wishes not to be disclosed. A nation might conduct cyber exploitations to gather valuable intelligence, just as it might deploy human spies to obtain information. Or a country might seek information from a company network in another country to benefit a domestic competitor of that company. Of particular interest is information that will allow an adversary to conduct further penetrations on other systems and networks. The media often refer to cyberattacks when the activity conducted is in reality a cyber exploitation.
A key aspect of offensive operations in cyberspace involves attribution: The originator of a hostile cyber operation cannot necessarily be identified with high confidence, because there is no inherent connection between an action taken in cyberspace and any specific actor. The following scenario illustrates just some of the conceptual complexities of identifying malicious actors in cyberspace:
George writes software to attack a computer located in Washington, DC. He compromises computers in Iowa and uses those computers—without their owners’ knowledge—to launch the attack. But George is located in Canada, where he remotely launches the attack. In addition, he is a citizen of Russia, but he is working at the behest of—and receives cash payments from—the Iranian government.
Which computers are responsible? Which individual is responsible? Although George is the only specific person named in this scenario, do the owners of the Iowa computers have any responsibility for not securing their systems more effectively? What political entity should be held responsible?
The appropriate definition of “attribution” depends on why attribution is being sought. If the goal is to stop the attack as soon as possible, attribution in the form of an access path to the originating computers will be needed. If the goal is to punish the human actor, attribution in the form of the person’s name will be needed. If the goal is to deter future acts against the United States, attribution in the form of the political entity ordering the attack will be needed.
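The goal-dependence of attribution can be made concrete with a toy model of the George scenario above. The structure is purely illustrative (it is not a forensic technique); the three "answers" come from the scenario itself:

```python
# Toy model of the George scenario: different attribution questions
# yield different answers about the same attack.
attack = {
    "path": ["compromised PCs in Iowa", "computer in Canada"],  # machines in the access path
    "operator": "George (Russian citizen)",                      # human actor
    "sponsor": "Iranian government",                             # political entity behind it
}

def attribute(attack: dict, goal: str) -> str:
    """Return the form of 'attribution' relevant to a given goal."""
    if goal == "stop the attack":        # need the access path, to block it
        return attack["path"][0]
    if goal == "punish the actor":       # need the person's identity
        return attack["operator"]
    if goal == "deter future attacks":   # need the sponsoring political entity
        return attack["sponsor"]
    raise ValueError(f"unknown goal: {goal}")

for goal in ("stop the attack", "punish the actor", "deter future attacks"):
    print(goal, "->", attribute(attack, goal))
```

Each question points at a different layer of the scenario, which is why a single technical trace—even a successful one—may answer the wrong attribution question for the policy at hand.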
Mitigating cyberthreats
So how might it be possible to mitigate cyberthreats to a nation’s critical infrastructure? A country wanting to protect its infrastructure could:
Defend the assets on which the functioning of the infrastructure depends. This is the role of cyberdefense.

Dissuade adversaries from launching attacks against the infrastructure. This is the role of cyber deterrence.

Reduce the forces that an adversary might use to conduct a cyberattack against infrastructure. This is the role of cyber preemption and damage limitation.

Come to a workable agreement with potential adversaries to refrain from launching attacks against the infrastructure. This is the role of cyber arms control.
Cyberdefense
When someone runs an anti-virus program on a personal computer, she is conducting cyberdefense by defending the computer against certain virus-infected e-mail attachments. Cyberthreats faced by important facilities are likely to be much more sophisticated, and there are correspondingly more sophisticated defenses.
Two fundamental issues affect the state of cyberdefense today. First, the actual state of cyberdefense is significantly worse than it would be if every vendor, owner, and operator of information technology systems and networks would put into practice what is actually known about mitigating cyberthreats. That is, the cybersecurity posture of the nation could be strengthened significantly if individuals and organizations collectively adopted “best practices”— both human and technological—that are known to improve cybersecurity. 2
Second, even in the unlikely event that best practices were universally adopted, today’s state of knowledge is inadequate to defend against a high-level cyberthreat originating from a major nation state. A National Research Council report (Goodman and Lin, 2007) called for research that emphasizes, among other things, the importance of:
Blocking and limiting the impact of compromise of an information technology system. Such research would focus on developing secure information networks that resist technical compromise; convenient and ubiquitous encryption that can prevent unauthorized parties from obtaining sensitive data; containment, backup, mitigation, and recovery systems; and ways to lock down systems that are under attack.

Enabling accountability, with the goal of holding anyone or anything that has access to a system component—a computing device, a sensor, an actuator, a network—accountable for the results of such access.

Promoting deployment of known security technologies and practices. Many technologies and practices that would strengthen security are not used because they are inconvenient, expensive, or both, and hence get in the way of doing useful work. Thus, it would be helpful to develop technologies that facilitate ease of use by both end users and system implementers, to find incentives that promote the use of security technologies, and to remove barriers that impede their use. This effort has social and organizational dimensions, as well as technological and psychological ones.
Cyberdefense alone will not be sufficient to protect critical systems and networks. In general, defenses are deployed against specific technical threats rather than specific adversaries; as defenses are erected, adversaries will find other means to attack. Furthermore, such defenses are invariably imperfect: They can reduce threats but cannot eliminate them entirely. To be useful, computer systems need programs and data supplied externally, and there is no general method for assuring that an adversary has not tampered with such programs or data. Also, cyberdefense measures must succeed every time an adversary conducts a hostile action, whereas the adversary’s action need succeed only once. These facts place a heavy and asymmetric burden on a defensive posture that is wholly reliant on cyberdefense.
Cyber deterrence
The notion of deterrence was and is a central construct in nuclear strategy. Because effective defenses against nuclear weapons are difficult to construct, many regard the threat of retaliation as the most plausible and effective alternative. 3 Indeed, deterrence of nuclear threats during the Cold War established the paradigm in which the conditions for successful deterrence are largely met.
But it is an entirely open question as to whether extending traditional deterrence principles to cyberspace is a viable strategy. Although nuclear weapons and cyberweapons share one key characteristic—the superiority of offense over defense—they differ in many other ways. Nuclear deterrence and cyber deterrence do raise many of the same questions, but the answers to these questions are quite different.
One of the most fundamental differences between nuclear deterrence and cyber deterrence involves ubiquity and attribution. Only a few nations have nuclear weapons, and so those responsible for directing a retaliatory threat have only a few possible choices of enemy to deter. Not so in cyberspace, where every nation can afford cyberweapons, and most uses of cyberweapons cannot be clearly associated with state actions.
This is not to say that attribution would always be impossible for the United States. Attribution will be difficult or impossible if a perpetrator—either a state or an individual— uses new tools and techniques and leaves no clues; if the perpetrator maintains perfect operational security; if the perpetrator makes no demands; and, most important, if the hostile actions require a rapid response. But if the perpetrator makes mistakes, and the United States deploys both its full cyber-forensic resources and also its formidable all-source intelligence gathering and analytic capability, it may well be only a matter of time before political responsibility for a consequential cyberattack can be ascertained. Thus, what is very hard in general is prompt attribution of the source of a cyberattack. And although the United States has always maintained that it will respond to provocations in a time, place, and manner of its own choosing, the circumstances under which delayed retaliation is sufficient for deterrence are not entirely clear.
A second issue that complicates cyber deterrence centers on the scope and nature of an appropriate retaliatory threat. As a matter of policy, the United States has stated (White House, 2011: 14) that:

When warranted, the United States will respond to hostile acts in cyberspace as we would to any other threat to our country. All states possess an inherent right to self-defense, and we recognize that certain hostile acts conducted through cyberspace could compel actions under the commitments we have with our military treaty partners. We reserve the right to use all necessary means—diplomatic, informational, military, and economic—as appropriate and consistent with applicable international law, in order to defend our nation, our allies, our partners, and our interests. In so doing, we will exhaust all options before military force whenever we can; will carefully weigh the costs and risks of action against the costs of inaction; and will act in a way that reflects our values and strengthens our legitimacy, seeking broad international support whenever possible.
The phrase “inherent right of self-defense” mirrors Article 51 of the United Nations Charter, which provides that “[n]othing in the present Charter shall impair the inherent right of individual or collective self-defense if an armed attack occurs against a member of the United Nations.” However, arguably none of the cyberattacks that the United States has experienced to date have come anywhere near any reasonable threshold for “armed attack.” Indeed, the toughest policy problems with cybersecurity today are how to respond to hostile cyber operations that do not rise to the level of this kind of attack—or, for that matter, do not correspond to other terms recognized in international law, such as use of force or armed conflict.
Preemption and damage limitation
In a 2010 Washington Post op-ed, Michael McConnell, former director of the National Security Agency, argued that preemptive strategies were needed to deal with adversaries before they launch devastating cyberattacks against the United States and that such preemption would involve “degrading, interdicting and eliminating their leadership and capabilities to mount cyberattacks” (McConnell, 2010). Preemption—sometimes also known as anticipatory self-defense—is the first use of force against an adversary by a nation that has good reason to conclude that the adversary is about to attack and that there is no other alternative that will forestall such an action.
Preemption as a defensive strategy is a controversial subject. But without taking a stand on its wisdom, it is important to consider what might be required to execute a preemptive strike in cyberspace. In the nuclear domain, a preemptive strategy might call for attacking an adversary’s nuclear delivery platforms and its nuclear command-and-control facilities. To the extent that such an attack were successful, the argument for preemption is that it would be better to face a degraded nuclear response from an adversary than its full nuclear arsenal.
Do such arguments apply in cyberspace? Although it might make sense under some circumstances to interdict and eliminate leaders who would order a cyberattack, degrading an adversary’s capabilities to mount cyberattacks is quite challenging, for two reasons. First, preemption by definition requires information that an adversary is about to attack. How would a country know who is about to launch an attack, especially since the number of possible adversaries is almost limitless? Keeping all such parties under surveillance using cyber means and other intelligence sources would seem a daunting task, yet one that is necessary in an environment in which threats can originate from nearly anywhere. Second, the assets that an adversary might use to launch a cyberattack are either very inexpensive (how much does a PC cost?) or already deeply embedded in their intended targets and ready to be used. The same argument applies to preemption intended to cripple command-and-control facilities that an adversary might use to launch an attack. A prepared adversary would have backup facilities that could be brought online when needed to order an attack. If this argument is correct, a preemptive strike would not seem to hold much hope for preventing or even seriously degrading the launch of a cyberattack on the United States.
On the other hand, a preemptive cyberstrike might be able to shake an adversary’s confidence in its ability to carry out its attack. Under these circumstances, a preemptive strike would not necessarily be directed at an adversary’s cyberweapons or command-and-control facilities, but at any adversary asset of which adversary leaders might take notice. At the very least, a preemptive strike could give the adversary warning that it does not have the advantage of surprise. Furthermore, such a warning does not require detailed knowledge of the adversary’s particular targets. So it is possible, at least in principle, that a preemptive strike might cause an adversary to reconsider its decision to attack. Even in the most favorable circumstances for preemption, however, time would be an important issue. A preemptive strike would have to occur, and be noticed by the adversary, with enough lead time to allow an adversary to reconsider a command decision to launch a cyberattack.
Arms control
Arms control agreements generally follow a familiar pattern. They can be bilateral or multilateral and be cast formally as treaties, informally as memorandums of understanding, or even more informally as coordinated unilateral policies. They may limit the acquisition of certain kinds of weapons, where acquisition can be understood to mean research, development, testing, production, or some combination thereof; limit the field deployment or use of certain weapons; limit the circumstances of weapons use, such as an agreement to refrain from the first use of nuclear weapons; or call for confidence-building measures—that is, signatories may be obliged to take, or to refrain from taking, certain actions to reassure other signatories about their benign intent.
Could arms control work in cyberspace? The answer to that question is different for different aspects of arms control and not generally definitive.
Consider first the possibility of an arms control agreement to restrict research and development on offensive capabilities in cyberspace or deployment of those capabilities. Such an agreement is likely to be infeasible for a number of fundamental reasons.
For kinetic weapons, operational capability is primarily a function of numbers. After all, 100 tanks provide more capability than three tanks. In contrast, operational capability for cyberweapons is more a function of technical insight and sophistication—that is, of research and development. The number of CD-ROMs containing a cyberweapon implemented as software or the number of lines of code constituting that software is clearly not a measure of cyber-operational capability. More meaningful might be the number of vulnerabilities present in an adversary’s key software systems, but the only way to increase that number is to do the research necessary to discover them. Furthermore, research and development for cyberweapons can be undertaken clandestinely in test laboratories quite easily shielded from prying eyes in the sky—indeed, undertaken in unremarkable office buildings indistinguishable from commercial establishments. Because such facilities can be hidden in plain sight, they are notoriously hard to detect.
Also, offensive cyber capabilities have legitimate uses, and offensive tools are routinely developed and deployed by both military and civilian entities. For example, one of the most powerful ways of identifying cyber vulnerabilities in an information technology facility is to subject it to penetration testing. Indeed, authorized operators of information technology facilities are usually encouraged to do their own penetration testing for this very purpose. How could an arms control treaty distinguish offensive capabilities developed for cyberattack from those used to shore up defenses against cyberattack?
Verification looms large as a challenge to any such agreement, as well. No conceivable inspection regime could verify the non-possession of cyberweapons to any reasonable degree of confidence. Because the human and technical infrastructure needed to conduct even large-scale cyberattacks would be much smaller than that needed to conduct cyberdefense on a large scale, such an infrastructure could be easily hidden and, in its signatures, would be largely indistinguishable from routine IT development and operations.
What about restricting the use of cyberweapons in some way? For example, signatories of a cyberarms control pact might agree to refrain from launching cyberattacks against national financial systems or power grids, much as nations today have agreed to avoid targeting hospitals in a kinetic attack. To reduce the likelihood of mistaken attacks on such facilities, nations might agree to take measures to electronically identify systems as being associated with prohibited targets.
The international law of armed conflict offers a precedent for restrictions on use of certain weapons. The law is codified under the UN Charter, which prohibits the use or threat of force and also acknowledges the unilateral right of a nation to engage in self-defense if it is subject to armed attack, and the Geneva Conventions, which regulate the conduct of armed conflict. The law is acknowledged by all nations to apply to kinetic conflict, and parties to the UN Charter and the Geneva Conventions have agreed to abide by the restrictions in these documents, even if adversaries may violate the restrictions from time to time.
Not all signatories of the UN Charter have explicitly affirmed its applicability and that of the Geneva Conventions to conflict in cyberspace, although the United States has publicly stated that the laws of armed conflict do apply to the use of cyberweapons. To the extent that the law of armed conflict does apply to the use of cyberweapons, it would most likely forbid nations from using them to create real-world effects that would violate the law had they been created with kinetic weapons.
Agreements to restrict use are by their nature not verifiable. That is, no amount of information collected before a conflict can guarantee that restrictions on use will be observed during conflict. With the onset of acknowledged armed conflict, restrictions on the use of cyberweapons are less likely to be observed. Thus, agreements to restrict use would be more helpful before the outbreak of conflict. Nevertheless, such agreements do create international norms, and they do inhibit training for cyberattacks that would cause the kinds of damage that traditional warfare causes.
Still, applying the law of armed conflict to cyberspace is complicated by many factors. For example, arms control agreements have historically presumed a state monopoly on the arms being regulated. But in the case of tools that might be used for cyberattack, at least some of the technology with which to conduct cyberattacks is most assuredly not exclusively or even mostly controlled by governments. Hackers, who are usually private citizens, conduct many cyberattacks on their own every day. Non-state actors such as terrorist groups or transnational criminal organizations could develop significant cyberattack capabilities as well. None of these parties would be likely to adhere to any putative international agreement that restricted their activities. Under such circumstances, domestic laws in the relevant nations may be the only legal means of regulating the activities of such parties, and, even then, the effectiveness of domestic laws depends on the availability of some enforcement mechanism, which may not be present in some nations.
An example may bring the dilemma into clearer focus: A nation’s military forces may refrain from targeting the power grids of an adversary, but patriotic hackers or terrorist groups on that nation’s soil might launch a cyberattack without explicit government approval. Thus, compliance with such an agreement may entail a somewhat bizarre scenario in which two nations are in conflict, perhaps even kinetic conflict, but each is simultaneously conducting actions in its own jurisdiction to suppress cyberattacks intended to advance its cause.
Another complication to a cyberarms control agreement is the functional similarity between cyber espionage against an adversary’s information systems and cyberattack against those information systems. An act of cyber espionage may well be assessed by the target as a damaging or destructive act, or at least the prelude to such an act. (Consider the cyber penetration of an adversary’s military command-and-control networks to gather intelligence information in a time of crisis. If the adversary detected it, how would it know that such a penetration did not have a destructive intent?) Yet forbidding cyber espionage actions would go beyond the bounds of international law as it stands today and further would fly in the face of what amounts to standard operating procedure today for essentially all nations.
A final complication arises from the difficulty of tracing cyberattacks to their ultimate origin. If the ultimate origin of a cyberattack can be concealed successfully, holding the violator of an agreement accountable becomes problematic. Some have suggested that the deployment of an “alternative Internet” could help on the accountability front. A more technologically secure network on a different physical infrastructure—the use of which would be restricted to activities deemed critical to national well-being and to users willing to subject themselves to stronger authentication protocols—would indeed help to identify attackers under some circumstances. Nevertheless, attackers would invariably seek other ways to counter the authentication capabilities of this alternative, such as compromising the operation of the strongly authenticated machines themselves. All experience further suggests that connections, whether physical or logical, between an alternative Internet and the regular Internet would inevitably emerge, for reasons of operational and business efficiency. And last, it simply is not known whether one could construct a more secure alternative that would retain the economies of the present-day Internet.
In traditional arms control, transparency and confidence-building measures—adherence to mutually agreed “rules of the road” for naval ships at sea, pre-notification of large troop movements, noninterference with national technical means of verification, and the like—have been used to promote stability and mutual understanding.
Meaningful analogs to these measures in cyberspace are, however, difficult to find. There is no analog to large-scale troop movements—cyberforces can be deployed for attack with few visible indicators. Conventions for behavior, such as “rules of the road,” do not cover intent, and, in cyberspace, intent may be the primary difference between a possibly prohibited act, such as certain kinds of cyberattack, and an allowed one, such as cyber espionage. Verification measures to monitor offensive military activity in cyberspace would necessarily be extensive and highly intrusive—but in the end easy to evade.
Perhaps the most important challenge faced by transparency and confidence-building measures in cyberspace is the fact that offensive cyber operations fundamentally depend on stealth and deception. Transparency and confidence-building measures are, as the name suggests, intended to be reassuring to an adversary; the success of most offensive cyber operations depends on an adversary being falsely reassured. Thus, the misuse of these measures may well be an element of an adversary’s hostile use of cyberspace.
Some commentators on security in cyberspace argue that the challenges described above convincingly and definitively refute, even in principle, the possibility of meaningful arms control agreements in cyberspace. Others worry that today’s cybersecurity challenges are so urgent as to require quick action to reach such agreements. Certainly the challenges to a cyberarms control agreement are real and must not be ignored. But our current inability to address these challenges may stem from a lack of imagination and a failure to go beyond our experiences with arms control for nuclear and conventional weapons, rather than anything else.
Progress in cyberarms control, if it is feasible at all, is likely to be slow, barring unforeseen events that might prompt the United States and potential adversaries to enter into agreements. It is worth noting that nearly two decades elapsed between the mushroom clouds at Hiroshima and Nagasaki and the signing of the first nuclear arms control treaty in 1963. And, in many ways, the problems of cyberarms control are more complex than those of nuclear arms control.
Nevertheless, three relatively modest steps might help lay a foundation for engaging in serious discussions about arms control in cyberspace.
First, the challenges in reaching useful cyberarms control agreements deserve serious research to ascertain the extent to which they are fundamental or specific to the forms of cyberarms control that have been proposed to date. That is, the jury is still out on whether some kind of arms control agreement in cyberspace could serve useful purposes.
Second, all nations concerned with the problems of international cyberconflict benefit from finding a common ground of understanding and a common language with which to describe the issues. This common ground might include resolving questions such as: What activities constitute a significant cyberattack and what activities are evidence of hostile intent? How should cyber exploitation and intelligence gathering be differentiated from cyberattacks? How, if at all, should military computer systems and networks be separated from those of the civilian population? And how, if at all, should exploitations for economic purposes be differentiated from exploitations for national security purposes? And the list goes on.
Third, it may be possible to reach an agreement on certain topics related to cybersecurity outside the domain of national security (see, for example, Sofaer et al., 2010). Nations have sometimes agreed on the need to protect some area of international activity, whether it be airline transport, telecommunications, or maritime activities. In cyberspace, many nations recognize fraud and child pornography as significant problems and have, to some extent, agreed on measures to deal with them. Protection of national infrastructures from third-party attacks may be another common interest. If areas of common interest can be identified and agreements reached for those areas, the resulting regime may go a long way toward improving the overall cybersecurity postures of the nations involved.
A research bonanza
A safer and more secure cyberspace will not be achieved with one magic technological advance, one new strategy, or one comprehensive arms control agreement. Rather, progress will be incremental and, likely, slow. For now, responsible nations need to use what they know, and they also need to develop new options for protecting themselves against cyberconflict—options whose implications will need to be researched. Cyberdefense, cyberdeterrence, preemption and damage limitation, and cyberarms control are potential building blocks with which a more secure cyberspace will have to be built—but how best to improve each of these blocks and how to arrange them in a mutually supportive structure remains to be seen. It is time to get to work.
Footnotes
Editor’s note
The intellectual content of this report is drawn primarily from the National Research Council’s Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities (Owens et al., 2009). To the extent that any of the judgments and conclusions differ from those found in this report, they are those of the author alone.
Funding
The production of the National Research Council report, on which this article is based, was partially funded by the John D. and Catherine T. MacArthur Foundation and the Microsoft Corporation.
