
This cyberwar just got real

May 24, 2018

Is death and destruction the only way to wage war? Or can it be less graphic, like a cyber attack on air traffic control, or an artificial intelligence bombarding emergency services with fake calls?


Cyberwar may not feel like "real" war — the kind we've known and loathed for eons and the very same we perversely reenact in video games.

But some military and legal experts say cyberwar is as real as it gets.  

David Petraeus, a retired US general and (some say disgraced) former intelligence chief, says the internet has created an entirely distinct domain of warfare, one which he calls "netwar." And that's the kind being waged by terrorists.

Then there's another kind, and technically any hacker with enough computer skills can do it — whatever the motivation. This kind takes out electricity grids, or jams emergency phone lines with fake calls powered by artificial intelligence and machine learning technology.

As far as UK Attorney General Jeremy Wright is concerned, international law should treat those attacks no differently from physical attacks.

"If it would be a breach of international law to bomb an air traffic control tower with the effect of downing civilian aircraft, then it will be a breach of international law to use a hostile cyber operation to disable air traffic control systems which results in the same, ultimately lethal, effects," said Wright in a speech at London-based think tank Chatham House on Wednesday.

UK Attorney General Jeremy Wright is calling for international law to keep pace with cyber threats (Image: UK Attorney General's Office)

The same goes for cyber attacks on nuclear reactors and medical facilities, said Wright, and these are "no less unlawful and no less deserving of a robust and legitimate response when they are undertaken by cyber means than when they are done by any other means." 


Multi-pronged problem

The problem, however, is at least two-fold.

First, "there are few areas in which the world has moved faster than in the development of cyber technology," said Wright. And one of the biggest challenges for international law — as well as government and society as a whole — is keeping pace. We are too focused on the idea of war with guns, tanks and bombs.

"When people talk of a cyberattack they often refer to espionage activities and other breaches of confidentiality, while an attack in the context of international law means a use of force or an armed attack, which effectively means death and destruction," said Alexander Klimburg, author of 'The Darkening Web — The War for Cyberspace' in an interview with DW.

An initial cyber attack needn't cause death or destruction. But it may lead to that, for instance through a critical mass of misinformation — whether it's produced automatically by online bots or takes the form of vague truths and vague lies spread by state actors.

Julia Skripal, allegedly poisoned along with her father, Sergei. Some accuse Russia of misinformation over the affair (Image: Reuters/D. Martinez)

It's what Klimburg calls the "weaponization of information." And that brings us to the second problem: cyberspace is an entirely new theater of war.

"Cyberspace is effectively a completely artificial fifth domain that has been added to air, land, sea and space," says Klimburg.

It's often compared with nuclear energy or chemicals.

Both can be used for good and bad, and yet we seem to have landed on a kind of ethical and legal footing for their use.

Chemicals for medicine = good. Chemicals for warfare = bad.

But the internet is different.

It is everywhere. It is used everywhere. And so its promise and threat are everywhere too. As Wright puts it: "Even those not online themselves are using public or private sector services whose operations depend on interconnectivity via cyberspace."

Now for number three

The third prong will hurt the worst, possibly because we've already lost control over it. And that's artificial intelligence (AI) and/or machine learning (ML).

Artificial intelligence software can paste your face onto another person's head. It can also make your voice sound like theirs (Image: MPI Saarbrücken)

AI or ML will have a deeper impact on our lives than cyberspace itself, certainly in the area of misinformation through fake content like photos, videos, and audio. These automated, self-training technologies may hold a lot of promise for medicine and other facets of research. But they also threaten to demolish all human trust — perhaps in the most unsuspected areas of life.

Take, for example, the social media trend of deepfake porn, a particularly heinous abuse of any technology. And while our digital lives are overwhelmingly visual, fake audio is an issue as well.

"I think [audio manipulation] is as big a threat as that of video," said Shahar Avin, a research associate at the Center for Existential Risk at Cambridge University in the UK.

In February, Avin co-published "The Malicious Use of Artificial Intelligence," a report written by 26 authors from 14 institutions in academia, civil society and industry, including OpenAI and the Electronic Frontier Foundation.


AI-powered facial recognition may help track terrorists, but it also dumps all of our data into a potential "black box" (Image: Imago/J. Tack)

Avin says the same kind of deep neural network technology that scans for patterns in images can also be used on audio. As with nuclear and chemicals, AI can be used for good and bad — you can scan for patterns to find and remove child abuse images online, just as much as you can paste the face of a famous actor onto the body of a porn star.
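
To make that concrete, here is a minimal sketch in Python (assuming NumPy and SciPy; the kernel and arrays are toy stand-ins, not a production detector). The same small pattern-detecting kernel is slid over a photo's pixel grid and over a spectrogram, which renders audio as a time-frequency image:

```python
# Minimal sketch (assumed, not Avin's code): the same 2-D convolution
# that scans images for patterns can scan audio, once the audio is
# rendered as a spectrogram -- a time-frequency "image" of the sound.
import numpy as np
from scipy.signal import convolve2d, spectrogram

# One small pattern-detecting kernel, reused unchanged on both inputs.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])

# "Image": a random 64x64 grayscale array stands in for a photo.
image = np.random.rand(64, 64)
image_features = convolve2d(image, kernel, mode="same")

# "Audio": one second of a 440 Hz tone, converted to a spectrogram.
sample_rate = 8000
t = np.linspace(0, 1, sample_rate, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)
_, _, spec = spectrogram(audio, fs=sample_rate)
audio_features = convolve2d(spec, kernel, mode="same")

print(image_features.shape, audio_features.shape)
```

The point of the sketch: the mechanics don't care whether the grid of numbers began life as a photo or a sound, which is exactly what makes the same tooling dual-use.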

You choose. Or, indeed, we all do, as every human interaction with AI feeds an ever-expanding universe of automation.

But what if you don't get a chance to choose? What if you don't realize you have a choice? What, for instance, if you can't tell whether the person on the other end of your phone line is your bank manager, or in fact some automated audio forgery? The choice of "hang up or keep talking" wouldn't even occur to you.

"AI can be used to falsify audio evidence and get people sent to jail, or have disagreements about contracts," Avin told DW. "And another thing we don't think about much is our ability to have a real-world denial of service."

Sometimes it's not state actors behind a cyber attack, but just a teenage boy, like Kane Gamble, who tried to hack the CIA (Image: picture-alliance/Zuma Press/London News Pictures/T. Nicholson)

What's a Distributed Denial of Service, or DDoS, attack?

"I can hack into a bunch of computers and have them all send packets [of information] to some website, and if the website does not have sufficient defenses, it will go down," Avin explained.

In one of the most recent cases, a DDoS attack took down GitHub — a platform for coders — albeit for less than 10 minutes. That was taken as a sign of the increasing sophistication of both DDoS attacks and website defenses. GitHub is an open repository for computer code, so the damage was somewhat limited. But just think what a DDoS attack could do to a commercial competitor of yours (not that we advise that). Or if the target is a government website, a DDoS could be considered a "low-intensity attack" — and that's getting awfully close to an act of war.
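
The "sufficient defenses" Avin mentions usually start with rate limiting. Here is a minimal sketch in Python of a token-bucket limiter of the sort a server might apply per client; the class, rates and thresholds are illustrative, not taken from any particular product:

```python
import time

class TokenBucket:
    """Minimal per-client token-bucket limiter: `rate` requests per
    second are allowed, with short bursts up to `capacity`."""

    def __init__(self, rate=5.0, capacity=10.0):
        self.rate = rate
        self.capacity = capacity
        self.buckets = {}  # client IP -> (tokens left, last seen)

    def allow(self, client_ip):
        now = time.monotonic()
        tokens, last = self.buckets.get(client_ip, (self.capacity, now))
        # Refill tokens for the time elapsed since this client's last request.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[client_ip] = (tokens - 1.0, now)
            return True   # serve the request
        self.buckets[client_ip] = (tokens, now)
        return False      # drop it: this client is flooding

limiter = TokenBucket()
# A tight loop from one address quickly exhausts its bucket.
print([limiter.allow("203.0.113.7") for _ in range(20)])
```

The catch is that the limit is per source address, which is exactly what the "distributed" in DDoS evades: a flood from thousands of hijacked machines looks, to a per-IP limiter, like thousands of modest clients. Serious mitigation therefore happens upstream, at providers that can absorb or filter traffic in bulk.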

"Now imagine I have these computers call your phone from different numbers that I've bought, and they all sound like people. You would have no way of telling which are genuine people trying to call you and which are bots. So, effectively, I have disabled your phone line," says Avin.

It may not sound so bad. We've all left home without our phones. But let's say you rely on your phone line — say you're an emergency service — and, more to the point, you rely on true audio information. That would be bad.

Someone could create a 100 percent fake radio interview and have a state leader "say" something to annoy or insult another leader — who wouldn't know it was all a set-up. If that happened, it would be a mere hop to a Franz Ferdinand-style outbreak of war.

"Sentiments can be seen as a cause of war: Things like 'you hurt my feelings' or 'you insult my president and I attack you,'" Klimburg said. "That is extremely worrying because at the end of the day, if that happens, there is no such thing as free speech, and there's no such thing as democracy."

Where we're headed

So if it's a question of democracy, how do we defend it? Especially as … sorry … there's a fourth prong. And that's the thorny issue of knowing exactly who just attacked you. Even if you can recognize a cyber attack as an act of misinformation — and you've spotted the fake — you may not be able to attribute blame, because the lines between the actions of governments and those of individuals are starting to blur.

In his Chatham House speech, Wright said this was one of the biggest challenges for international law: "Without clearly identifying who is responsible for hostile cyber activity, it is impossible to take responsible action in response."

As it stands, the law "requires a state to bear responsibility […] These principles must be adapted," said Wright, "and applied to a densely technical world of electronic signatures, hard to trace networks and the dark web."

When it comes to artificial intelligence and fakery on the web, some swear by using AI to root out bad content generated by AI. So, we fight fire with fire. But Avin says that's "only a transitory solution."

"You can get to the point where — pixel to pixel or waveform to waveform — there is just no difference, there is no statistical difference between the real world and the forgery," says Avin, "even when you take context into account, and if that were the case, then an AI won't help you because there's no pattern for it to detect."

At that point, says Avin, you need to move to cryptographic measures for guaranteeing the authenticity and the source of photos, for example, "and that means we need different hardware out there and different international standards."
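
What those cryptographic measures could look like, sketched in Python with the widely used `cryptography` package and Ed25519 signatures (the camera framing is an assumption for illustration, not an existing standard): the device's secure hardware signs a photo's bytes at capture, and anyone with its public key can verify them later.

```python
# Sketch: sign a photo's bytes at capture, verify them later.
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real scheme this key would live in the camera's secure hardware;
# generating it here is purely for the sketch.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

photo_bytes = b"...raw sensor data..."        # stand-in for the actual photo
signature = camera_key.sign(photo_bytes)      # done at capture time

# Anyone holding the camera's public key can check the photo later.
try:
    public_key.verify(signature, photo_bytes)
    print("Verified: these bytes are exactly what the camera signed.")
except InvalidSignature:
    print("Rejected: the photo was altered after capture.")
```

A signature like this proves provenance, not truth: a staged scene signed by a real camera still verifies. That is why Avin ties the idea to different hardware and international standards rather than to software alone.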

Every human interaction with self-training technologies, including our phones and robots, feeds automation (Image: picture-alliance/Photoshot)

When the age of quantum computing finally alights on Earth, the dynamics may change yet again — for better and worse. Quantum computers are expected to be vastly faster and more powerful than today's machines.

"So while it will be possible to conduct ever more realistic fakes," says Klimburg, "through the processing power and the mathematical computing, the new AI algorithms will have a higher ability to identify these [fakes] as well."

It raises the question of whether scaling up our technologies is always the right way. If faster computing means you can improve your chances of fighting "cyber wrongs," but those wrongs can also be unleashed at a greater rate, what's the point?

"In some cases, the right thing is just not to develop, where there is a clear consensus that there is going to be more bad than good," says Avin.

But then, he says, that can give rise to technology races between nations and other actors.

"One community might decide they're not going to use a particular technology, but if that puts them at a disadvantage to everyone else, that puts them under quite a lot of pressure," says Avin. "So it's not always a compelling argument to say, 'If we don'T do it, it'll definitely be developed elsewhere,' but sometimes it's the dynamic that emerges."

And that's something we as a global, digital society "need to deal with."

Zulfikar Abbany is a senior editor fascinated by space, AI and the mind, and how science touches people.