Thinking Critically - Killer robots are inevitable

And so the debate begins.

Fully autonomous weapon systems (or killer robots as they are being called by opponents) are no longer a thing of science fiction.

At least seven countries (the United States, the United Kingdom, Germany, Israel, India, Russia and the Republic of Korea) are working on them, and you can bet they are a lot further along than they are letting on. The U.S., the U.K. and South Korea are already deploying weapons with some degree of autonomy and lethality.

The biggest problem with human beings is that there are those among us who actually think this is a good idea. The next biggest problem is that when we get a really bad idea like this, we inevitably build it and ultimately use it.

So, despite the fact that there is a preemptive movement to ban further development, and despite the fact that the ethical issues with robots that can kill without human intervention make the drone debate look like a friendly chat over coffee, we are going to have these things running around making their own life and death decisions.

Have we learned nothing from the Terminator and Matrix movies?

Okay, I don’t believe the AI used in these things is suddenly going to become self-aware, decide humans in general are the problem (although it could probably make a pretty compelling argument for that) and wipe us all out. But giving machines the power over life and death really is an unacceptable application of technology.

The same argument has always been made against new military technology and tactics. We establish international rules of warfare, such as the treatment of prisoners of war and the proportionality of counterattacks.

The rules change all the time. It used to be that armies would line up on the battlefield facing each other and wade into it “honourably.”

Inevitably somebody breaks the rules and a new equilibrium is established, but there has always been one common thread: human judgment.

I am not saying human judgment is infallible. Quite the contrary, our judgment fails us all the time. Deploying atomic bombs to end World War II and “weapons of mass destruction” as justification for invading Iraq come to mind. An argument can be made, at least, that those things were justifiable. At the very least, we know who is accountable, even if we can’t always hold them so.

Removing humans from the equation altogether, however, really crosses a fundamental moral line. Humans cross those lines as well (the torture of prisoners at Abu Ghraib, for example), but they can be held accountable.

Who do you hold accountable when an autonomous weapon guns down a schoolyard full of children because it does not get the nuance of context that a human being does?

What happens when the enemy figures out how to hack the software and turns these weapons back on us?

And if we can protect our own human troops by deploying robots instead, how much easier does that make the decision to go to war in the first place?

Unfortunately, I do not share the optimism of the Campaign to Stop Killer Robots that a ban can be achieved through international treaty.

Call me cynical, but I do not think there is any way in the world that these weapons will not be developed, deployed and end up wreaking havoc on unintended targets.

This, in fact, is not the first time humans have used autonomous killing devices. The original ones continue to claim lives and limbs every day. They are called land mines. They are not smart like their next-generation counterparts, but they cause a lot of damage, and rarely to the people they were deployed against.
