Could 'laws' stop future generations of robots from harming us? Can mankind ever guarantee that its cybernetic creations won't turn around one day and wipe it out?
Hear Professor Evans and Professor Warwick, in discussion with Ed Stourton (05/08/04).
Robots are becoming more and more intelligent (such as this advanced model developed by Sony). But how clever do we want the robots of the future to be?
Will Smith, star of the 2004 film inspired by Asimov's book 'I, Robot', arrives at the UK premiere.
In the future, will robot wars involve us (humans) versus them (our creations)?
The theme of robots (or androids) turning on their human creators has been a recurring one in science fiction films for years.
Whether it's Skynet (The Terminator), Ash (Alien) or the robots of Blade Runner who overcome their programming to develop emotions, some computer experts believe it's a problem we may have to address within the next couple of decades, not centuries.
One of the earliest explorations of the issue was Isaac Asimov's 1950 book 'I, Robot', a collection of short stories that has now inspired a 21st century Hollywood adaptation.
Asimov outlined three core laws that would (it was argued) prevent robots from turning on their masters. The first says that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The second, that a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law. Finally, the third states that a robot must protect its own existence, as long as doing so doesn't conflict with the first or second laws.
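Expressed in code, the three laws amount to little more than a priority-ordered rule check. The short Python sketch below is purely illustrative (the Action fields and decide function are invented for this example, not taken from any real robotics system, and assume the robot can already answer questions such as "would this injure a human?"): it shows how simple the ordering is, and how quickly it breaks down on exactly the kind of dilemma Dylan Evans describes later in this piece.

```python
# A toy, purely illustrative sketch of Asimov's laws as a priority-ordered
# rule check (not drawn from any real robotics system). The Action fields
# assume the robot can already answer questions such as "would this injure
# a human?" -- which, as Evans argues below, is the real difficulty.

from dataclasses import dataclass


@dataclass
class Action:
    description: str
    injures_human: bool         # carrying this out would injure a human
    inaction_harms_human: bool  # NOT carrying it out lets a human come to harm
    ordered_by_human: bool      # a human has ordered this action
    endangers_robot: bool       # carrying it out risks the robot's existence


def decide(a: Action) -> str:
    # First Law: a robot may not injure a human being or, through inaction,
    # allow a human being to come to harm.
    if a.injures_human and a.inaction_harms_human:
        return "dilemma: the First Law forbids both acting and not acting"
    if a.injures_human:
        return "refuse (First Law)"
    if a.inaction_harms_human:
        return "act (First Law: must not allow harm through inaction)"
    # Second Law: obey orders from humans, unless they conflict with the First Law.
    if a.ordered_by_human:
        return "act (Second Law)"
    # Third Law: protect its own existence, unless that conflicts with the
    # First or Second Laws.
    if a.endangers_robot:
        return "refuse (Third Law)"
    return "act (no law objects)"


# Evans's scenario, discussed below: torturing the suspect injures a human,
# but doing nothing lets the bomb harm other humans -- the rules deadlock.
print(decide(Action("torture the suspect for the bomb's location",
                    injures_human=True, inaction_harms_human=True,
                    ordered_by_human=True, endangers_robot=False)))
```

The rules themselves are trivial to write down; everything interesting is hidden inside the judgements the rules rely on.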
But would such core values prevent robots from turning on their masters?
"The laws, as Isaac Asimov himself well knew, don't work," argues Dylan Evans, senior lecturer in autonomous systems at the University of the West of England. "In fact, most of the stories that Asimov wrote were intended precisely to bring out the difficulties of constraining the behaviour of intelligent robots by means of any set of rules."
Evans told us that the problems come when one tries to define the categorical statements contained in any such 'laws'.
"If you want a robot to be able to avoid harming a human, you have to be able to teach it what a human is and how to distinguish a human from, for example, things that look similar - like mannequins and other robots," he explained. "You also have to tell it to distinguish between, say, an order and a command, or a request."
As we all know, life - and the concept of 'doing the right thing' - is never really that black-and-white. And it's when one tackles the deeper moral conundrums that have faced human beings throughout the ages that the true difficulties involved in creating a failsafe robot become apparent.
"(If) a robot policeman had a terrorist in the cell and the terrorist knows where the bomb is that's going to go off in 5 minutes ... should the robot torture the terrorist in order to find out where the bomb is?", asked Dylan Evans.
"If he does torture the terrorist, he's harming a human. If he doesn't he's allowing humans to come to harm through inaction. So all the same moral dilemmas that humans face would be faced by any intelligent robot and no simple three laws would enable them to escape the moral dilemmas."
For many experts, hypothesising about what the future holds is pointless unless we examine the direction technology is taking us in today. By assessing developments currently being made in the field of artificial intelligence, it's argued, we can make more accurate judgments as to what may be the reality in two or three decades' time.
Kevin Warwick is professor of cybernetics at the University of Reading. He also has a computer chip implanted in his arm. He's not seen any indications that fundamental laws or failsafe programming is being incorporated into the intelligent technology being developed right now.
"I don't know a single robot that has those laws in it", he told us. "Far from it, military machines (eg. cruise missiles) are set to break those laws. If you look to even 2020, not 2035, the directive - certainly from the American military - (is to have) no body bags. Autonomous fighting machines. So the direction we're heading in is towards the 2035 scenario depicted in the film (I, Robot)."
And for Professor Warwick, it's not a case of 'if' we're going to have to confront the potential problem of robots harming humans, it's 'when'.
"If we don't have slave-like helpers by 2035, it will be a surprise."
But is this fear of super-intelligent robots outgrowing their programming and outthinking those who constructed them simply a paranoid construction of science fiction writers' fevered imaginations? Does logic dictate that such robots would be far too clever and developed to want to turn around and destroy us all?
"There are a lot of positive aspects of robotics and I think 'why should we assume that more intelligent robots will necessarily be more aggressive?'", asked听Dylan Evans. "I think it's just as likely, perhaps even more plausible, that robots (as they become more intelligent) could become more friendly."
One thing's for sure ... computers are becoming an ever greater part of our daily lives (remember - you're using one right now). The question is, just how clever do we really want them to be?
Will artificial intelligence be the end of us? Email us your thoughts.