Advent of cyberwarfare raises a whole host of new dangers
There was always something unusual about Stuxnet, the computer worm which infected and temporarily disabled centrifuges at Iran’s nuclear plants last year: it was far more sophisticated than the malware produced by amateur hackers.
Now, the mystery is over. In interviews with The New York Times, United States officials revealed that Stuxnet was the result of an American-Israeli government partnership, part of a secret operation codenamed “Olympic Games.”
More remarkably, they also admitted that U.S. President Barack Obama ordered the worm’s insertion into Iranian computer systems. This is the first public acknowledgment by any government that it has used cyberweapons in an offensive capacity.
It represents the clearest proof yet that cyberwarfare has made the transition from science fiction to full military deployment.
The term “cyberwar” was coined during the 1990s and initially covered any phenomenon involving a disruptive use of computers, whether engineered by a state or by individuals, and regardless of its purpose or intensity.
Sniffing a good business opportunity, security companies piled in to offer “protection.” And quite a few retired military officers bagged lucrative consultancy contracts.
But cyberwarfare became so enmeshed in myths that Howard Schmidt, the U.S. cybersecurity czar, once dismissed it as a “terrible metaphor.” Although the media continued to churn out reams of articles of the “be afraid, be very afraid” variety, most military strategists were inclined to treat cyberwarfare as just hype.
And for good reasons. The world has witnessed a number of cyber offensives which were almost certainly perpetrated or ordered by governments. These include the massive 2001 attacks on U.S. government websites in response to a midair collision between an American surveillance plane and a Chinese jet fighter ― an episode dubbed, rather implausibly, “the first cyber world war.” There was also the deliberate silencing of Estonia’s national Internet infrastructure in 2007 ― allegedly at the behest of Russia, which was then locked in a dispute with its small northern European neighbor.
But as cyberwarfare skeptics pointed out, nobody has ever died from a cyber incident.
Cyberwarfare may also fail in the most basic function of any military offensive, which is to coerce or subdue an enemy. Since no territory is physically occupied, an opponent has every incentive to keep fighting. And since neither the attacker nor the attacked can know in advance what damage an offensive cyber operation will inflict, it is difficult to see how cyberwarfare can be integrated into long-term military planning.
For years, the doubters of cyberwarfare held sway not only inside the academic community, but also among military commanders and politicians. Yet, the mood has now radically changed.
At a February gathering of Western strategists in Germany, Britain’s top military commander, General David Richards, called on his colleagues “to learn to defend, delay, attack and maneuver in cyberspace, just as we might on the land, sea or air, and all together at the same time.”
This was echoed by U.S. Defense Secretary Leon Panetta, who warned during his congressional confirmation hearings that the next “Pearl Harbor”-style military surprise confronting the U.S. “could very well be a cyberattack.” It was his way of saying that America’s armed forces must develop comparable capabilities.
Nor are these just words, for countries are now devoting increasing resources to cyberwarfare operations.
The U.S. established a Cyber Command two years ago. So have Japan, which began investing in cyberweapons in 2008, and Britain, which now admits to working on what it gingerly calls a “toolbox of cyberweapons to complement existing defence capabilities.”
The shift is partly driven by classic arms race considerations: nations wish to acquire offensive capabilities in order to deter other countries from attacking. As Robert Gates predicted in his retirement speech as U.S. defense secretary, “future American administrations will have to consider new declaratory policies about what level of cyberattack might be considered an act of war, and what type of military response is appropriate.”
The fact that, until now, cyberwarfare has not killed people or inflicted much damage is also largely immaterial.
For the purpose of a cyber operation is usually not to destroy or occupy a country, but to deprive an opponent of a capability, even temporarily. Stuxnet may have slowed Iran’s quest to acquire a nuclear weapon by between 12 and 18 months ― an achievement in itself, even if it is not militarily decisive.
Future cyberwarfare operations will be much more lethal, particularly if they target computers controlling large infrastructure installations.
Throughout April this year, the computers running U.S. gas pipelines were subjected to sustained probing attacks. The U.S. government’s Industrial Control Systems Cyber Emergency Response Team refused to say who was behind the attacks, but the potentially catastrophic damage which could be inflicted is not in doubt.
The days when cyberwarfare merely paralyzed automated teller machines ― as happened in Estonia in 2007 ― are a distant memory.
A cyberwar could start deliberately ― when one country believes it can deliver a strike debilitating its enemy’s critical systems ― or as a result of escalation in a war which has already erupted. But cyberwarfare will never replace other forms of war; most planners regard it as a “force multiplier,” part of broader strategies.
So the inherent limitations of cyberwarfare are not as important as they previously seemed.
And far from representing a huge financial outlay, cyberweapons are cheap. Indeed, they are so cheap compared to other military hardware that the Pentagon recently established a Cyber Investment Management Board to administer cyberwarfare spending which would otherwise be simply too small to trigger normal accounting oversight procedures.
The advent of cyberwarfare raises a whole host of new dangers. The potential for miscalculation is large: nations may be tempted into launching an attack on the basis of a mistaken belief in their cyber superiority.
The assumption that a cyberattack is relatively risk-free may also encourage many more nations to try them, thereby lowering the threshold for the use of force.
And nobody can quantify the potential casualties from such attacks.
On the eve of the Iraq war in 2003, then U.S. President George W. Bush vetoed a plan for a cyber onslaught on Iraq’s computer networks, largely because he feared its impact on the country’s public services and its potentially deadly consequences.
Since then, the life of every country has become even more networked, and the scope for disaster is bigger than before.
More depressingly, the old theories of deterrence that once prevented countries from resorting to force may not apply to cyberwarfare, largely because an attacker can disguise its identity and thereby escape retribution. It is also difficult for a country to credibly threaten its opponents with retaliation without revealing its capabilities.
Some governments are advocating the adoption of new treaties to govern the use of cyberspace, similar to the Geneva Conventions. But if the past is anything to go by, this new frontier in warfare will, like all its predecessors, only be regulated after the technologies mature and are well-known.
And only after some bloody confrontations have already occurred.
By Jonathan Eyal
(The Straits Times)