Some thoughts on the nature of cyber-weapons
With inputs from Anuj Srivas.
There’s a hole in the bucket.
When someone asks for my phone number, I’m on alert, even if it’s only so my local supermarket can tell me about new products on its shelves. The same goes for my email ID, so the taxi company I regularly use can send me ride receipts, or for permission to peek into my phone, if only to see what music I have installed – all vaults of information I haven’t been too protective about, but which have of late acquired a notorious potential to reveal things about me I never thought I could reveal so passively.
It’s not everywhere, but those aware of the risks of holding an account with Google or Facebook have been making polar choices: either wilfully surrender information or wilfully withhold it – the neutral middle ground is becoming mythical. Wariness of telecommunications is on the rise. In an effort to protect our intangible assets, we’re constantly creating redundant, disposable ones – extra email IDs, anonymous Twitter accounts, deliberately misidentified Facebook profiles. We know the Machines can’t be shut down, so we make ourselves unavailable to them. And we succeed to different extents, but none of us completely – there’s a bit of our digital DNA in government files, much like the kompromat maintained by East Germany and the Soviet Union during the Cold War.
In fact, is there an equivalence between the conglomerates surrounding nuclear weapons and cyber-weapons? Solly Zuckerman (1904-1993), once Chief Scientific Adviser to the British government, famously said:
When it comes to nuclear weapons … it is the man in the laboratory who at the start proposes that for this or that arcane reason it would be useful to improve an old or to devise a new nuclear warhead. It is he, the technician, not the commander in the field, who is at the heart of the arms race.
These words are still relevant but could they have accrued another context? To paraphrase Zuckerman – “It is he, the programmer, not the politician in the government, who is at the heart of the surveillance state.”
An engrossing argument presented in the Bulletin of the Atomic Scientists on November 6 seemed an uncanny parallel to one of whistleblower Edward Snowden’s indirect revelations about the National Security Agency’s activities. In the BAS article, nuclear security specialist James Doyle wrote:
The psychology of nuclear deterrence is a mental illness. We must develop a new psychology of nuclear survival, one that refuses to tolerate such catastrophic weapons or the self-destructive thinking that has kept them around. We must adopt a more forceful, single-minded opposition to nuclear arms and disempower the small number of people who we now permit to assert their intention to commit morally reprehensible acts in the name of our defense.
This is akin to the argument of the multiple articles that appeared following Snowden’s exposé in 2013 – that the paranoia-fuelled NSA was gathering more data than it could meaningfully process, much more data than might be necessary to better equip the US’s counterterrorism measures. For example, four experts argued in a policy paper published by the nonpartisan think-tank New America in January 2014:
Surveillance of American phone metadata has had no discernible impact on preventing acts of terrorism and only the most marginal of impacts on preventing terrorist-related activity, such as fundraising for a terrorist group. Furthermore, our examination of the role of the database of U.S. citizens’ telephone metadata in the single plot the government uses to justify the importance of the program – that of Basaaly Moalin, a San Diego cabdriver who in 2007 and 2008 provided $8,500 to al-Shabaab, al-Qaeda’s affiliate in Somalia – calls into question the necessity of the Section 215 bulk collection program. According to the government, the database of American phone metadata allows intelligence authorities to quickly circumvent the traditional burden of proof associated with criminal warrants, thus allowing them to “connect the dots” faster and prevent future 9/11-scale attacks.
Yet in the Moalin case, after using the NSA’s phone database to link a number in Somalia to Moalin, the FBI waited two months to begin an investigation and wiretap his phone. Although it’s unclear why there was a delay between the NSA tip and the FBI wiretapping, court documents show there was a two-month period in which the FBI was not monitoring Moalin’s calls, despite official statements that the bureau had Moalin’s phone number and had identified him. This undercuts the government’s theory that the database of Americans’ telephone metadata is necessary to expedite the investigative process, since it clearly didn’t expedite the process in the single case the government uses to extol its virtues.
So, just as nuclear weapons are presented as plausible but improbable threats, fashioned to fuel the construction of ever more nuclear warheads, terrorists are presented as threats who can be neutralised by surveilling everything and by calling on companies to weaken encryption so governments can tap civilian communications more easily. This state of affairs also points to there being a cyber-congressional complex paralleling the nuclear-congressional complex that, on the one hand, exalts the benefits of being a nuclear power while, on the other, demands absolute secrecy and faith in its machinations.
However, there could be reason to believe cyber-weapons present a more insidious threat than their nuclear counterparts, a sentiment fuelled by challenges on three fronts:
- Cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss
- Lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons
- Computer scientists have been slow to recognise the moral character and political implications of their creations
That cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss
In 1995, Joseph Rotblat won the Nobel Peace Prize for his part in founding, in 1957, the Pugwash Conferences against nuclear weapons. In his lecture, he lamented the role scientists had wittingly or unwittingly played in developing nuclear weapons, invoking the words of Zuckerman quoted above and going on to add:
If all scientists heeded [Hans Bethe’s] call there would be no more new nuclear warheads; no French scientists at Mururoa; no new chemical and biological poisons. The arms race would be truly over. But there are other areas of scientific research that may directly or indirectly lead to harm to society. This calls for constant vigilance. The purpose of some government or industrial research is sometimes concealed, and misleading information is presented to the public. It should be the duty of scientists to expose such malfeasance. “Whistle-blowing” should become part of the scientist’s ethos. This may bring reprisals; a price to be paid for one’s convictions. The price may be very heavy…
The perspectives of both Zuckerman and Rotblat were situated in the aftermath of the nuclear bombings that closed the Second World War. The ensuing devastation beggared comprehension in its scale and scope – yet its effects were there for all to see, all too immediately. The flattened cities of Hiroshima and Nagasaki became quick (but unwilling) memorials for the hundreds of thousands who were killed. What devastation is there to see for the thousands of Facebook and Twitter profiles being monitored, email IDs being hacked and phone numbers being trawled? What about it at all could appeal to the conscience of future lawmakers?
As John Arquilla writes on the CACM blog…
Nuclear deterrence is a “one-off” situation; strategic cyber attack is much more like the air power situation that was developing a century ago, with costly damage looming, but hardly societal destruction. … Yes, nuclear deterrence still looks quite robust, but when it comes to cyber attack, the world of deterrence after [the age of cyber-wars has begun] looks remarkably like the world of deterrence before Hiroshima: bleak. (Emphasis added.)
… the absence of “societal destruction” in cyber-warfare imposes less of a real burden upon its perpetrators and endorsers.
And records of such intangible devastations are preserved only in writing and in our memories, and can be quickly manipulated or supplanted by newer information and newer problems. Events that erupt as a result of illegally obtained information continue to be measured against their physical consequences – there’s a standing arrest warrant out for the whistleblower while the National Security Agency labours on, flitting between the shadows of FISA, the Patriot Act and others like them. The violations creep: easily withdrawn, easily restored, easily justified as counterterrorism measures, easily depicted as something they aren’t.
That lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons
What makes matters frustrating is a multilateral instrument called the Wassenaar Arrangement (WA), originally drafted in 1995 to restrict the export of potentially malignant technologies left over from the Cold War, but which lawmakers turned to in 2013 to prevent entities with questionable human-rights records from accessing “intrusion software” as well. In effect, the WA defines limits for its 41 signatories on what kinds of technology can be transferred among themselves – and not at all to non-signatories – based on the technology’s susceptibility to misuse. After 2013, the WA became one of the unhappiest pacts out there, persisting largely because of the confusion that surrounds it. There are three kinds of problems:
1. In its language – Unreasonable absolutes
Sergey Bratus, a research associate professor in the computer science department at Dartmouth College, New Hampshire, published an article on December 2 highlighting the WA’s failure to “describe a technical capability in an intent-neutral way” – with reference to the increasingly thin line (and not just of code) separating a correct output from a flawed one, a line hackers have become adept at exploiting. Think of it like this:
Say there’s a computer, called C, which Alice uses for a particular purpose (to withdraw cash, say, if C were an ATM). C accepts an input called I and produces an output called O. Because C is used for a fixed purpose, its programmers know that the range of values I can assume is limited (such as the four-digit PINs used at ATMs). So they end up designing the machine to operate safely for all four-digit numbers while neglecting what would happen should I be a five-digit number. With some technical insight, a hacker could exploit this oversight and make C spit out all the cash it contains using a five-digit I.
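Here’s a minimal, hypothetical sketch of that situation in Python – the account data, the PIN check and the truncation are all invented for illustration, and have nothing to do with how real ATMs work. The point is only that the program behaves “correctly” for every input its designers imagined and does something unspecified, yet perfectly well-defined, for everything else:

```python
# A hypothetical sketch of the ATM example above -- not code from Bratus's
# article. Every name and value here is invented for illustration.

ACCOUNTS = {"4321": 150.00}  # PIN -> balance; every key is four digits

def dispense(pin: str, amount: float) -> float:
    """Return the cash dispensed for a withdrawal request."""
    # This check is "correct" for every input the designers imagined: it
    # rejects wrong four-digit PINs. But it silently truncates longer input,
    # so a five-digit string that merely *starts* with a valid PIN sails
    # through -- behaviour nobody specified, yet not a crash or a logic
    # error from the program's own point of view.
    key = pin[:4]
    if key in ACCOUNTS and ACCOUNTS[key] >= amount:
        ACCOUNTS[key] -= amount
        return amount
    return 0.0

print(dispense("4321", 100.0))   # 100.0 -- the intended use
print(dispense("43219", 50.0))   # 50.0  -- an unanticipated input, processed "correctly"
```

Nothing in the code marks the second call as an attack; the program has simply computed an output for an input nobody thought about.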
In this case, a correct output from C is defined only for a fixed range of inputs, with any output corresponding to an I outside that range considered a flawed one. Programmatically, however, C has still only provided the correct O for a five-digit I. Bratus’s point is just this: we have no way to perfectly define the intentions of the programs we build, at least not beyond the remits of what we expect them to achieve. How then can the WA aspire to categorise them as safe or unsafe?
2. In its purpose – Sneaky enemies
Speaking at Kiwicon 2015, New Zealand’s computer security conference, cyber-policy expert Katie Moussouris said the WA was underprepared to confront superbugs that target computers connected to the Internet irrespective of their geographical location, but whose fixes could well emerge out of a WA signatory. The case in point Moussouris used was Heartbleed, a vulnerability that achieved peak nuisance in April 2014. Its M.O. was to target OpenSSL, a library servers use to encrypt personal information transmitted over the web, and trick it into leaking chunks of the server’s memory – chunks that could include the very keys used for that encryption. To protect against it, users had to upgrade OpenSSL with a software patch containing the fix. However, such patches targeted at the bugs of the future could fall under what the WA has defined simply as “intrusion software”, for which officials administering the agreement would end up having to grant exemptions dozens of times a day. As Darren Pauli wrote in The Register,
[Moussouris] said the Arrangement requires an overhaul, adding that so-called emergency exemptions that allow controlled goods to be quickly deployed – such as radar units to the 2010 Haiti earthquake – will not apply to globally-coordinated security vulnerability research that occurs daily.
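To see how small the gap between a “correct” program and an exploitable one can be, here is a conceptual sketch of the class of flaw behind Heartbleed – written in Python rather than OpenSSL’s actual C, with every name and byte of data invented for illustration. The handler trusts the length the client claims for its payload and echoes back that many bytes, reading past the payload into whatever sits next to it in memory:

```python
# Conceptual sketch only, not OpenSSL code. SERVER_MEMORY stands in for the
# process's heap; in the real bug, the over-read happened in C via memcpy.

SERVER_MEMORY = bytearray(b"...session tokens...private key bits...user passwords...")

def heartbeat_response(payload: bytes, claimed_length: int) -> bytes:
    buffer = bytearray(SERVER_MEMORY)      # memory adjacent to the request
    buffer[:len(payload)] = payload        # the client's payload is copied in
    # The eventual fix was essentially one missing bounds check, roughly:
    # reject the request if claimed_length > len(payload).
    return bytes(buffer[:claimed_length])  # echoes back whatever lies beyond

print(heartbeat_response(b"bird", 4))      # b'bird' -- the honest case
print(heartbeat_response(b"bird", 40))     # b'bird' plus 36 bytes of adjacent memory
```

The real vulnerability let an attacker repeat such requests at will, harvesting up to 64 kB of server memory each time – which is how private keys, session cookies and passwords could leak.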
3. In presenting an illusion of sufficiency
Beyond the limitations it places on the export of software, the signatories’ continued reliance on the WA as an instrument of defence has also been questioned. Earlier this year, India received some shade after hackers revealed that its – our – government was considering purchasing surveillance equipment from an Italian company that was selling the tools illegitimately. India wasn’t invited to be part of the WA; had it been, it would have been able to purchase the surveillance equipment legitimately. Sure, it doesn’t bode well that India was eyeing the equipment at all, but when it does so illegitimately, international human rights organisations have fewer opportunities to track violations in India or to haul authorities up for infractions. Legitimacy confers accountability – or at least the need to be accountable.
Nonetheless, despite an assurance (insufficient in hindsight) that countries like India and China would be invited to participate in conversations over the WA in future, nothing has happened. At the same time, extant signatories have continued to express support for the arrangement. “Offending” software came to be included in the WA following amendments in December 2013. States of the European Union enforced the rules from January 2015, while the US Department of Commerce’s Bureau of Industry and Security published a set of controls pursuant to the arrangement’s rules in May 2015 – controls that have been widely panned by security experts for being too broadly defined. Over December, however, those experts have begun to hope that National Security Adviser Susan Rice can persuade the State Department to push for making the language of the WA more specific at the plenary session in December 2016. The Departments of Commerce and Homeland Security are already on board.
That computer scientists have been slow to recognise the moral character and political implications of their creations
Phillip Rogaway, a computer scientist at the University of California, Davis, published an essay on December 12 titled The Moral Character of Cryptographic Work. Rogaway’s thesis is centred on the growing social responsibility of the cryptographer – the kind of responsibility Zuckerman invoked – as he writes,
… we don’t need the specter of mushroom clouds to be dealing with politically relevant technology: scientific and technical work routinely implicates politics. This is an overarching insight from decades of work at the crossroads of science, technology, and society. Technological ideas and technological things are not politically neutral: routinely, they have strong, built-in tendencies. Technological advances are usefully considered not only from the lens of how they work, but also why they came to be as they did, whom they help, and whom they harm. Emphasizing the breadth of man’s agency and technological options, and borrowing a beautiful phrase of Borges, it has been said that innovation is a garden of forking paths. Still, cryptographic ideas can be quite mathematical; mightn’t this make them relatively apolitical? Absolutely not. That cryptographic work is deeply tied to politics is a claim so obvious that only a cryptographer could fail to see it.
And maybe cryptographers have missed the wood for the trees until now, but times are a-changing.
On December 22, Apple publicly declared it was opposing a new surveillance bill the British government is attempting to fast-track. The bill, should it become law, would require messages transmitted via the company’s iMessage platform to be encrypted in such a way that government authorities could access them when they needed to but no one else could – a presumption Apple has called out as impossible to engineer. “A key left under the doormat would not just be there for the good guys. The bad guys would find it too,” it wrote in a statement.
Similarly, in November this year, Microsoft resisted American warrants for its users’ data held in Europe by entrusting its servers there to a German telecom company. As a result, any request for data about German users who use Microsoft to make calls or send emails, if it originates from outside Germany, will now have to go through German lawmakers. At the same time, anxiety over requests from within the country is minimal, as Germany boasts some of the world’s strictest data-access policies.
Apple’s and Microsoft’s are welcome and important changes of tack. Both companies featured in the Snowden/Greenwald stories as having folded under pressure from the NSA to open their data-transfer pipelines to snooping. That the companies also had little alternative at the time was glossed over by the scale of the NSA’s violations. In 2015, however, a clear moral as well as economic high ground has emerged in the form of defiance: Snowden’s revelations were in effect a renewed vilification of Big Brother, and occupying that high ground has become a practical option. After Snowden, not taking that option when there’s a chance to has come to mean passive complicity.
But apropos of Rogaway’s contention: at what level can, or should, the cryptographer’s commitment be expected? Can smaller companies or individual computer scientists afford to occupy the same ground as the larger companies? After all, without the business model of data monetisation, privacy would be automatically secured – but that business model is also what provides for those individuals.
Take the case of Stuxnet, the worm unleashed in 2009-2010 by what are believed to be agents of the US and Israel to destroy Iranian centrifuges suspected of being used to enrich uranium to weapons-grade levels. How many computer scientists spoke up against it? To date, no institutional condemnation has emerged*. It could be that, with neither the US nor Israel publicly acknowledging its role in developing Stuxnet, it was hard to judge who had crossed a line – but that a deceptive bundle of code had been used as a weapon in an unjust war was obvious.
Then again, can all cryptographers be expected to take such stands? One of the threats the 2013 amendments to the WA attempt to tackle is dual-use technology (of which Stuxnet is an example, because the worm took advantage of its ability to mimic harmless code). Evidently such tech also straddles what Aaron Adams (PDF) calls “the boundary between bug and behaviour”. That engineers have had only tenuous control over these boundaries owes itself, as Bratus also asserts, to imperfect yet blameless programming languages, not to the engineers themselves. It is in the nature of a nuclear weapon, when deployed, to overshadow the simple intent of its deployers, rapidly overwhelming the already-weakened doctrine of proportionality – and in turn retroactively making that intent seem far, far more important. But in cyber-warfare, the agents are trapped in ambiguities – over what a cyber-weapon even is, with what intent and for what purpose it was crafted – allowing its repercussions to seem anything from rapid to evanescent.
Or, as it happens, the agents are liberated.
*That I could find. I’m happy to be proved wrong.
Featured image credit: ikrichter/Flickr, CC BY 2.0.