How To Thwart A Robot Apocalypse: Oxford Professor Nick Bostrom on the Dangers of Superintelligent Machines

How To Thwart A Robot Apocalypse: Oxford Professor Nick Bostrom on the Dangers of Superintelligent Machines was shot by Todd Krainin and Joshua Swain and edited by Swain. It runs about 6.5 minutes.

The original release date was October 9, 2014, and the original writeup is below.

“If we one day develop machines with general intelligence that surpasses ours, they would be in a very powerful position,” says Nick Bostrom, Oxford professor and founding director of the Future of Humanity Institute.

Bostrom sat down with Reason science correspondent Ron Bailey to discuss his latest book, Superintelligence: Paths, Dangers, Strategies, in which he examines the risks humanity will face when artificial intelligence (AI) is created. Bostrom worries that, once computer intelligence exceeds our own, machines will be beyond our control and will seek to shape the future according to their own plans. If the AI’s goals aren’t properly set by its designers, a superintelligent machine will see humans as a liability to completing those goals, leading to our annihilation.

How do we avoid a robot apocalypse? Bostrom proposes two solutions: either limit the AI to answering questions within preset boundaries, or engineer the AI’s goals to include human preservation. “We have got to solve the control problem before we solve the AI problem,” Bostrom explains. “The big challenge then is to reach into this huge space of possible mind designs, motivation system designs, and try to pick out one of the very special ones that would be consistent with human survival and flourishing.”

Until then, Bostrom believes research into AI should be dramatically slowed, allowing humanity ample time to understand its own objectives.


from Hit & Run http://ift.tt/1t00sh5
via IFTTT
