Today, BBC Radio 4

Weekdays 6-9am and Saturdays 7-9am
Latest Reports

Robots ... Friends Or Foes?



Robots ... could they become our enemies?

Could 'laws' stop future generations of robots from harming us? Can mankind ever guarantee that its cybernetic creations won't turn around one day and wipe it out?

LISTEN
Hear Professor Evans and Professor Warwick, in discussion with Ed Stourton (05/08/04).
Robots are becoming more and more intelligent (such as this advanced model developed by Sony). But how clever do we want the robots of the future to be?

Will Smith, star of the 2004 film inspired by Asimov's book 'I, Robot', arrives at the UK premiere.
Robot Wars.

In the future, will robot wars involve us (humans) versus them (our creations)?

The theme of robots (or androids) turning on their human creators has been a recurring one in science fiction films for years.

Whether it is Skynet (The Terminator), Ash (Alien), or the robots who overcome their programming and develop emotions (Blade Runner), some computer experts believe it's a problem we may have to address within the next couple of decades, not centuries.

One of the earliest explorations of the issue came in Isaac Asimov's 1950 book 'I, Robot', a collection of short stories that has now inspired a 21st-century Hollywood adaptation.

Asimov outlined three core laws that would (it was argued) prevent robots from turning on their masters. The first says that a robot may not injure a human being, or through inaction allow a human being to come to harm. The second says that a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law. Finally, the third says that a robot must protect its own existence, as long as doing so doesn't conflict with the first or second laws.
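The ordering of the laws can be sketched in code as a strict priority check: each law is consulted only if no higher-priority law has already been violated. This is a hypothetical illustration, not anything from the programme; the `Action` flags and function names are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False      # would the action injure a human?
    permits_harm: bool = False     # would it let a human come to harm through inaction?
    disobeys_order: bool = False   # does it ignore a human order?
    endangers_self: bool = False   # does it risk the robot's own existence?

def violates_laws(a: Action) -> int:
    """Return the number of the highest-priority law violated, or 0 if none."""
    if a.harms_human or a.permits_harm:   # First Law: overrides everything
        return 1
    if a.disobeys_order:                  # Second Law: subordinate to the First
        return 2
    if a.endangers_self:                  # Third Law: subordinate to the first two
        return 3
    return 0
```

Even this toy version exposes the problem discussed in the article: in the interrogation dilemma, torturing the terrorist sets `harms_human` while refusing sets `permits_harm`, so every available action violates the First Law and the rules give the robot no answer.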

But would such core values prevent robots from turning on their masters?

"The laws, as Isaac Asimov himself well knew, don't work," argues Dylan Evans, senior lecturer in autonomous systems at the University of the West of England. "In fact, most of the stories that Asimov wrote were intended precisely to bring out the difficulties of constraining the behaviour of intelligent robots by means of any set of rules."

Evans told us that the problems come when one tries to define the categorical statements contained in any such 'laws'.

"If you want a robot to be able to avoid harming a human, you have to be able to teach it what a human is and how to distinguish a human from, for example, things that look similar - like mannequins and other robots," he explained. "You also have to tell it to distinguish between, say, an order and a command, or a request."

As we all know, life - and the concept of 'doing the right thing' - is never really that black-and-white. And it's when one tackles the deeper moral conundrums that have faced human beings throughout the ages that the true difficulties involved in creating a failsafe robot become apparent.

"(If) a robot policeman had a terrorist in the cell and the terrorist knows where the bomb is that's going to go off in five minutes ... should the robot torture the terrorist in order to find out where the bomb is?" asked Dylan Evans.

"If he does torture the terrorist, he's harming a human. If he doesn't, he's allowing humans to come to harm through inaction. So all the same moral dilemmas that humans face would be faced by any intelligent robot, and no simple three laws would enable them to escape the moral dilemmas."

For many experts, hypothesising about the future is pointless unless we examine the direction technology is taking us in today. By assessing developments currently being made in the field of artificial intelligence, it's argued, we can make more accurate judgments about what may be the reality in two or three decades' time.

Kevin Warwick is professor of cybernetics at the University of Reading. He also has a computer chip implanted in his arm. He's not seen any indications that fundamental laws or failsafe programming is being incorporated into the intelligent technology being developed right now.

"I don't know a single robot that has those laws in it," he told us. "Far from it, military machines (e.g. cruise missiles) are set to break those laws. If you look to even 2020, not 2035, the directive - certainly from the American military - (is to have) no body bags. Autonomous fighting machines. So the direction we're heading in is towards the 2035 scenario depicted in the film (I, Robot)."

And for Professor Warwick, it's not a case of 'if' we're going to have to confront the potential problem of robots harming humans, it's 'when'.

"If we don't have slave-like helpers by 2035, it will be a surprise."

But is this fear of super-intelligent robots outgrowing their programming and outthinking those who constructed them simply a paranoid construction of science fiction writers' fevered imaginations? Does logic dictate that such robots would be far too clever and developed to want to turn around and destroy us all?

"There are a lot of positive aspects of robotics and I think 'why should we assume that more intelligent robots will necessarily be more aggressive?'", asked Dylan Evans. "I think it's just as likely, perhaps even more plausible, that robots (as they become more intelligent) could become more friendly."

One thing's for sure ... computers are becoming an ever-greater part of our daily lives (remember - you're using one right now). The question is, just how clever do we really want them to be?

Will artificial intelligence be the end of us? Email us your thoughts.

