
Robot wars are really scary

Hyde Park

by Ben Peck

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

– Isaac Asimov, I, Robot

Humans have been perfecting the art of killing each other for centuries. One can morbidly identify eras of our species by the most popular material of death at the time: stone, bronze, iron, saltpeter, jellied gasoline, plutonium, uranium, nicotine. And now, ladies and gentlemen, without further ado, I give you silicon.

Militaries around the world have deployed Unmanned Aerial Vehicles (UAVs) for quite some time now. They are, as you might say, the new hotness. Everyone who is anyone, or anyone who wants to control everyone, is actively developing UAV technology. From the Austrians to the Thais, they all want a piece of the unmanned action.

Controlled remotely via joystick and a huge panel of sensor readouts, UAVs can fly up to 15 kilometres high with a flight range of up to 5,000 kilometres, and these numbers are always rising. At US$24 million to US$60 million apiece, they're also pretty cheap.

UAVs initially acted as highly effective reconnaissance and all-purpose sensing vehicles, serving both military and civilian uses such as firefighting and geological research. But recognizing a good opportunity when they saw one, military researchers quickly started adding small payloads such as medical supplies and ammo to UAV designs, until finally someone figured out how to strap a big fucking bomb to them. These new and improved UAVs are referred to as unmanned combat air vehicles (UCAVs), because they have the added advantage of being able to kill things.

Since 2001, the U.S. military has been using UCAVs for “precision strikes,” missions of targeted assassination using guided missiles.

This loosely translates to a situation where the military really, really wants to kill someone, but the person’s location is too inconvenient to send a human being to do the killing personally. The inconvenience is both the obvious danger of sending a human being to places where other humans want to kill them and the diplomatic embarrassment of having to retrieve a shot-down pilot from a country with whom we are supposedly at peace. The publicly admitted locations of U.S. UCAV activity include Pakistan, Bosnia, Yemen, and Serbia.

A recent AFP headline reads, “Up to 14 dead in suspected U.S. missile strike in Pakistan: officials.” This is the sexiest part about UCAVs from the military’s perspective: suspected U.S. strike? Of course it was the U.S. military, but that’s extremely difficult to prove unless a drone gets shot down. Deniability is king when waging a war without borders, and there is very little risk of being caught when the decision to click a button on a joystick is made from a few thousand kilometres away.

The U.S. Department of Defense’s “Unmanned Aircraft Systems Roadmap: 2005-2030” – complete with a four-page list of acronyms that helps mask the fact that the report discusses techniques for controlling and killing other human beings – is a pretty good indication of where the program is headed.

Terrifying, tinfoil-hat-inducing nuggets include “Distinguish facial features (identify individuals) from 4 nm [nautical miles]” by 2010, and “Provide human-equivalent processor speed and memory in PC size for airborne use” by 2030.

Current manned aircraft are to be replaced by unmanned counterparts within the next couple of decades: stealth fighters such as the F-117 by 2015, and traditional fighters like the F-16 by 2025. There is no question about the military’s ultimate goal: autonomous machines capable of killing humans of their choosing, like a lightning bolt from the heavens.

We already have machines capable of navigating for themselves; why should we have to wait around and pull the trigger for them? I shit you not: the concept of killer robots is real, and depending on your definition of robot, they’ve been killing for a long time.

On the surface, the argument for utilizing unmanned machines for combat is logical. Of course, it makes sense to use a machine instead of putting someone in harm’s way; no one wants to see another service member killed in action.

However, the complete elimination of human risk from combat raises important questions. What does it mean if no one has to risk their life in order to fight a war? What does it mean if one side of a conflict must risk their lives to fight while the other side risks carpal tunnel? People make terrible decisions when they believe there will be no immediate consequences to their actions, and I hate it when people in charge of controlling other human beings make terrible decisions.

Ben Peck is a U2 Honours Computer Science student and The Daily’s graphic editor. He can be reached at bvpeck@gmail.com, or at 398-ROBOTS-R-HOTT. And they said 14-digit numbers couldn’t be done.