Tearful Terminator? Military Seeks Killer Robots With Moral Judgment

Is the face of future warfare that of a steel-skinned
Terminator-style killer robot—with a tear trickling down its cheek?
That’s essentially the goal of research funded by the U.S. military
that seeks to defuse a growing chorus of warnings that drones and
other increasingly autonomous weapons are morphing into
self-directed killer robots.

DefenseOne’s Patrick Tucker reports:

The Office of Naval Research will award $7.5 million in grant
money over five years to university researchers from Tufts,
Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to
explore how to build a sense of right and wrong and moral
consequence into autonomous robotic systems.

This isn’t just pie-in-the-sky research. Semi-autonomous weapons
systems are already deployed by countries including Israel, South
Korea, and the United States. The weapons are restrained from
killing on their own say-so more as a matter of policy than because
of technical limitations. (That’s the reportedly discontinued Armed
Robotic Vehicle depicted above.) The United States military
currently requires robotic weapons systems to be human-supervised
and to engage only non-human targets. Fully autonomous
Terminator-style systems aren’t allowed. Yet.

That’s because people find the idea of machines choosing and
snuffing their own targets creepy.

The United Nations Human Rights Council wants a moratorium on
lethal autonomous robotics—at least until an internationally
agreed-upon framework has been established. (That’s the U.N. all
over—concern and impotence in the same sentence.)

“Humans must not be taken out of the loop over decisions regarding
life and death for other human beings. Meaningful human
intervention over such decisions must always be present,” the
Vatican’s Archbishop Silvano Tomasi told an international gathering
on the issue just yesterday.

And a new report from Human Rights Watch and Harvard Law School’s
International Human Rights Clinic cautions:

Fully autonomous weapons’ inability to relate to humans could
interfere with their ability to ensure that all means short of
force are exhausted. … Furthermore, it is unlikely that a fully
autonomous weapon would be able to read a situation well enough to
strategize about the best alternatives to use of force.

While fully autonomous weapons would not respond to threats in
fear or anger, they would also not feel the “natural inhibition of
humans not to kill or hurt fellow human beings.” Studies of human
soldiers have demonstrated that “there is within man an intense
resistance to killing their fellow man.” Compassion contributes to
such a resistance, but it is hard to see how the capacity to feel
compassion could be reproduced in robots.

That would seem to be a daunting task. You could program a robot
with all sorts of scenarios and decision trees, but at the end of
the day, it’s a robot following programming, not a human following
values and instinct.
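To make that concrete, here is a minimal, purely illustrative
Python sketch of what such a decision tree amounts to in code.
Every name, field, and threshold below is hypothetical, invented
for this example; it does not describe any real weapons system.

# Toy decision tree for an engagement rule. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Contact:
    is_human: bool             # sensor classification: is the target human?
    supervisor_approved: bool  # has a human operator authorized engagement?
    hostile_confidence: float  # classifier confidence the contact is hostile, 0.0-1.0

def may_engage(c: Contact) -> bool:
    """Walk a fixed rule tree; the 'morality' is nothing but branch conditions."""
    if c.is_human:
        return False  # mirrors the current policy of engaging only non-human targets
    if not c.supervisor_approved:
        return False  # human-in-the-loop requirement
    return c.hostile_confidence > 0.95  # an arbitrary threshold, not judgment

# Any situation the programmers never anticipated simply falls through
# the branches above; the machine cannot "read the situation" in the
# sense the Human Rights Watch report describes.
print(may_engage(Contact(is_human=False, supervisor_approved=True, hostile_confidence=0.99)))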

Then again, maybe that will prove safer. No emotions means no rage
killing, for one thing. Could an arsenal of compassion-less robot
killers mean fewer atrocities?

Chances are that we’ll get to find out. The U.S. project may or
may not succeed in teaching morality to computers. But it’s hard to
imagine that all militaries will resist the temptation to
deploy advancing generations of automated weapons.

