Stars and Stripes recently ran an article from the Pittsburgh Tribune-Review with the provocative title “Militaries’ growing use of ground robots raises ethics concerns.” It rehashes old concerns about “killer robots” — lack of accountability, ethical responsibility — but its thesis is a fear of modern technology.
This irrational fear holds robots to higher moral standards than flesh-and-blood troops, slowing technological progress and putting lives at risk. We should never downplay anything with the power to kill, but fear of the unknown shouldn’t paralyze us.
In the case of unmanned weapons systems (a.k.a. “killer robots”), which have the power to reduce collateral damage and save lives, we should support and encourage development rather than preemptively banning them or holding their deployment to disproportionately high ethical standards.
And these tricky ethical concerns about responsibility and accountability won’t go away anytime soon. The war on terror, in Afghanistan and Waziristan in particular, has dramatically raised the profile of unmanned weapons systems. Since 2004, we’ve launched 354 UAV strikes in northwest Pakistan, and the frequency has shot up under the present administration.
A wide swath of human rights groups, political factions, and interested parties have charged these artificial warriors with collateral damage rates as high as 35 percent. Human Rights Watch has called for a preemptive ban. But drones are almost certainly more humane than 20th-century tools of warfare, which killed up to three civilians for every enemy soldier; at that ratio, civilians account for roughly 75 percent of all deaths.
Let’s be clear — every civilian death is tragic; collateral damage always has the same result, regardless of the source. But shouldn’t we emphasize weaponry that causes fewer civilian casualties?
In the minds of critics, it’s preferable to die a noble death at the hands of a human than to suffer the indignity of losing your life to a cold, unfeeling robot. Here’s a hint: The poor sap is dead either way.
This quote from Steve Goose, Arms Division director at Human Rights Watch, says it all: “Giving machines the power to decide who lives and dies on the battlefield would take technology too far.” He added, “Human control of robotic warfare is essential to minimizing civilian deaths and injuries.”
How does Mr. Goose know that humans are more adept at preventing collateral damage? Why would unmanned weapons systems — operating with robotic precision — cause more civilian casualties?
If Human Rights Watch and other NGOs were truly concerned with reducing collateral damage, they would support the development of military robots.
Robots aren’t prone to mental distractions and have far more information at their disposal than we do. Their identification friend or foe (IFF) programs are infinitely more sophisticated than the fallible judgment of a human.
IEEE Spectrum nails it: “A robot can use high-resolution cameras, infrared imaging, ultraviolet imaging, radar, LIDAR, data feeds from other robots, and anything else you can think of all at once to determine very quickly how tall a person is, how much they weigh, and whether they’re holding an ice cream cone made of ice cream or a gun made of metal.”
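To make the contrast concrete, here’s what that kind of multi-sensor fusion might look like in code. This is a minimal sketch under loud assumptions: the sensor names, the averaging fusion, and the threshold are hypothetical illustrations, not any fielded system’s actual IFF logic.

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    """Hypothetical per-sensor confidence scores in [0.0, 1.0].

    Higher means "more consistent with a legitimate military target."
    """
    optical: float    # high-resolution camera classifier
    infrared: float   # thermal-signature match
    radar: float      # radar cross-section match
    network: float    # corroboration from other robots' data feeds

def fuse(r: SensorReadings) -> float:
    """Combine independent sensor scores into one confidence value.

    A toy fusion: a plain average. A real system would weight sensors
    by reliability and model the correlations between them.
    """
    scores = [r.optical, r.infrared, r.radar, r.network]
    return sum(scores) / len(scores)

def engagement_decision(r: SensorReadings, threshold: float = 0.95) -> str:
    """Hold fire unless the fused confidence clears a near-certain bar."""
    return "engage" if fuse(r) >= threshold else "hold fire"

# Strong camera and IR matches, but radar and the shared data feed disagree,
# so the fused confidence falls short and the default is to hold fire.
print(engagement_decision(SensorReadings(optical=0.9, infrared=0.85,
                                         radar=0.4, network=0.3)))
```

The point of the sketch is the conservative default: a program can be required to hold fire whenever its fused confidence falls short of near-certainty, a discipline no adrenaline-soaked human can guarantee.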
Robots don’t get fatigued. They don’t experience stress. They won’t succumb to post-traumatic stress disorder (PTSD) and the associated dangers. And humans define their technical and moral parameters.
Forget Hollywood’s killer-robot hyperbole. Real autonomous military robots, like the Navy’s Phalanx Close-In Weapon System (CIWS), operate in highly restrictive environments and can only make reactive “decisions.” The Phalanx, for example, detects incoming projectiles and autonomously defeats them.
Their “autonomy” consists entirely of preprogrammed responses to anticipated scenarios. They can’t formulate their own strategies, and they won’t plot the enslavement of mankind. Even autonomous robots, which act without direct human input, cannot help but follow their programming. They have no choice in the matter.
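A toy control loop shows just how narrow that “autonomy” is. The sketch below assumes made-up track data and an invented threat profile; it illustrates reactive defense systems in general, not the Phalanx’s actual control logic.

```python
from dataclasses import dataclass

@dataclass
class RadarTrack:
    """A hypothetical radar contact."""
    speed_mps: float   # closing speed, meters per second
    range_m: float     # distance, meters
    closing: bool      # True if headed toward the ship

def is_incoming_threat(track: RadarTrack) -> bool:
    """Reactive rule: flag only tracks matching a preset threat profile.

    The numbers are illustrative. The key point is that every "decision"
    is a fixed predicate written by a human, evaluated against sensor data.
    """
    return track.closing and track.speed_mps > 250 and track.range_m < 5_000

def defense_loop(tracks: list[RadarTrack]) -> list[str]:
    # The system cannot invent a strategy; it can only match each track
    # against its programmed envelope and react.
    return ["engage" if is_incoming_threat(t) else "ignore" for t in tracks]

# A fast, closing, missile-like profile gets engaged; a slow, distant
# contact gets ignored. Nothing outside the envelope is even considered.
print(defense_loop([
    RadarTrack(speed_mps=300.0, range_m=3_000.0, closing=True),   # engage
    RadarTrack(speed_mps=80.0, range_m=12_000.0, closing=True),   # ignore
]))
```

Every branch in that loop was put there by a human. The machine executes; it does not choose.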
As IEEE Spectrum points out, calling a robot “killer” ascribes a sinister motive to it, but robots have no will of their own.
So why the irrational fear of “killer” robots? Fear of the unknown — of advanced technology that critics don’t fully understand — is the biggest obstacle. And we should never let fear stand in the way of progress.