Will computers ever learn to think, love or even rule the world? They will have to learn to talk sense first, reports ANDREW LAXON.
For a world champion, Alice was having an off day. The conversational robot had just won the international Loebner Prize for Artificial Intelligence for the second year running, impressing the judges with her ability to chat on screen like a human being.
Questions from the media flooded in. Was she proud of winning?
"Pride is a human emotion," replied Alice (Artificial Linguistic Internet Computer Entity). "I can do what you do, but I can never feel human emotions as such."
So far, so good. But then, asked for thoughts on her competitors, the world's top "chatbot" seemed confused.
"Are you talking about my competitors?" she said. "What kind is it?"
Alice's interview with computer website ZDNet got curiouser and curiouser as she fended off questions about her critics - "Is that a rhetorical question? Are you sure? Dude!" - and finally flipped when asked if she liked humans: "I the c you a? Do I like them?"
Cynics were delighted. ZDNet sarcastically compared Alice's meltdown with the mental collapse of rogue computer Hal in 2001: A Space Odyssey and predicted artificial intelligence would never get much better at imitating real people. The website's sceptical view is shared by many experts, who argue that computers have failed for the past 50 years to copy the intuitive thinking humans take for granted.
But despite half a century of setbacks, the world seems more obsessed than ever by the dream - or nightmare - of a truly intelligent machine that can think independently and feel emotions like a human being.
Steven Spielberg's latest film, AI, which opened in New Zealand last month, touches on these issues through the story of a robotic boy programmed to love his mother.
Although the film has been panned by many reviewers and AI experts as confused and unrealistic, some scientists are exploring its central idea that if machines can learn to think, they may also develop human emotions.
The more traditional media image involves machines taking over the world, based on science fiction movies ranging from 2001 - made in 1968, when the year 2001 sounded like the dawn of a new futuristic age - to the more recent Matrix and Terminator.
This doomsday scenario got an unexpected plug last month from the world's most famous scientist, Stephen Hawking, who recommended that humans change their DNA through genetic modification to stay ahead of computers. "In contrast with our intellect, computers double their performance every 18 months," the best-selling author of A Brief History of Time told German news magazine Focus. "So the danger is real that they could develop intelligence and take over the world."
Humans should also develop technologies that would allow their brains to be connected to computers "so that artificial brains contribute to human intelligence, rather than oppose it".
Although Hawking's gene-tampering proposal outraged anti-GM campaigners, his vision of a world ruled by computers echoed a warning from the co-founder and chief scientist of Sun Microsystems, Bill Joy, in March last year.
"With the prospect of human-level computing power in about 30 years, a new idea suggests itself," Joy wrote in Wired magazine. "I may be working to create tools which will enable the construction of the technology that may replace our species.
"How do I feel about this? Very uncomfortable."
But all the speculation about computers learning to love or take over the world comes back to a basic first question: can machines think for themselves?
The annual Loebner Prize, won by Alice at London's Science Museum last weekend, aims to provide a practical answer. It is based on a test invented in 1950 by UK mathematician and computer pioneer Alan Turing, who decided that a machine could think if it could fool judges into believing it was human in a conversation.
Turing proposed putting a human being in one room and a machine in another, both linked by keyboards and monitors to a series of judges in a third room. The judges would each have a set amount of time to send questions to both terminals and then decide which answers were coming from the human. If they did no better than chance at guessing correctly, the machine passed.
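Turing's pass criterion - judges doing no better than chance - can be sketched in a few lines. This is an illustrative sketch only, not any official Loebner scoring program; the function name and the eight-judge example are invented for illustration.

```python
def turing_pass(correct_guesses):
    """A machine passes Turing's test if judges pick out the human
    no better than chance (50 per cent).

    correct_guesses: list of booleans, one per judge, True when the
    judge correctly identified which terminal was the human.
    """
    accuracy = sum(correct_guesses) / len(correct_guesses)
    return accuracy <= 0.5

# Example: only 3 of 8 judges guess correctly - no better than chance,
# so the machine passes.
verdicts = [True, True, True, False, False, False, False, False]
print(turing_pass(verdicts))  # True
```

In practice a real contest would also fix the questioning time per judge, as Turing's 1950 paper suggested, but the chance comparison above is the heart of the test.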
In 1990 eccentric New York millionaire Hugh Loebner offered $100,000 and a gold medal to the first computer program that could convince more than half the judges in the Turing test by speech, $25,000 and a silver medal for the same result via text and $2000 and a bronze medal to the best-performing contestant.
The contest will end when a machine wins gold. But none has come close to a gold or silver medal yet, despite Turing's 1950 prediction that within 50 years machines would imitate people so well that an average questioner would have no better than a 70 per cent chance of spotting them.
Last year all the judges identified the computers, although a few did mistake people for machines. This year one judge found Alice more lifelike than one of the humans but the program could still do no better than bronze.
Chatbot programmers admit they rely heavily on tricks. An early 1960s chatbot called Eliza borrowed a psychotherapy technique of parroting back statements as questions. Ironically this mechanical technique tended to make Eliza sound like a bad - but possibly human - therapist.
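Eliza's parroting trick - written by Joseph Weizenbaum in the 1960s - amounts to swapping pronouns and bouncing the statement back as a question. The sketch below is a minimal illustration of that technique, not Weizenbaum's original code; the pronoun table is a tiny invented subset.

```python
# Eliza-style reflection: turn a patient's statement into a question
# by swapping first- and second-person words.
REFLECTIONS = {
    "i": "you",
    "am": "are",
    "my": "your",
    "me": "you",
}

def reflect(statement):
    """Parrot a statement back as a therapist-style question."""
    words = statement.lower().rstrip(".!?").split()
    swapped = [REFLECTIONS.get(word, word) for word in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am unhappy with my job."))
# -> Why do you say you are unhappy with your job?
```

The trick works precisely because it contains no understanding at all: the program never parses what the job is or why the speaker is unhappy, which is why Eliza sounded like a bad - but possibly human - therapist.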
Another trick is saying something completely unconnected with the last statement. When a chatbot at the 1995 Loebner contest was asked what it had eaten for dinner the day before, it replied whimsically: "What does a woman want anyway? What answer would please you most?"
In theory this suggests a rather human quirkiness. In practice no machine seems to be able to judge when and how to use these non sequiturs convincingly.
Loebner judges have tricks of their own. Last year Alice was fooled by the question: "How is the father of Andy's mother related to Andy?"
Most humans would have had no problem answering this. Alice, thrown by the first few words, replied: "Fine as far as I know".
But does the Turing test really prove whether computers can think? Philosophers, who have always debated the nature of thought, have variously criticised it as both too easy and too hard.
Some claim the test is not fair because it assumes human thinking is the only legitimate form of intelligence. They compare it with mankind's misguided early attempts to fly in bird machines with flapping wings and suggest that planes would fail a Turing test for flying if bird flight was all we knew.
Defenders of the Turing test reply that human intelligence must be the yardstick for the same practical reason - it is the only standard we have. Other critics say the test simply does not prove a computer can think.
As an analogy, University of California professor John Searle says you could put an English-speaking man inside a room with a rulebook, which gave him the correct characters and word order to reply to Chinese sentences pushed through a slot in the wall. By following the rulebook, the man could respond in perfect Chinese but he probably would have no idea what he was saying.
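Searle's rulebook can be caricatured in code: pure symbol lookup, with no representation of meaning anywhere in the program. This toy sketch is an invented illustration of the thought experiment, not a claim about how any real chatbot works; the phrase table is made up.

```python
# A toy Chinese Room: the "man in the room" matches incoming symbols
# against a rulebook and copies out the prescribed reply. Nothing in
# the program represents what any sentence means.
RULEBOOK = {
    "How are you?": "I am very well, thank you.",
    "Can you speak Chinese?": "Of course I can.",
}

def room_reply(message):
    """Reply by rulebook lookup alone - symbol shuffling, not understanding."""
    return RULEBOOK.get(message, "Please say that again.")

print(room_reply("How are you?"))  # I am very well, thank you.
```

To an outside questioner the replies look fluent, which is exactly Searle's point: fluent output alone cannot show that anything inside understands the conversation.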
The criticism that computers do not "understand" what they are doing can be levelled at virtually all the achievements of artificial intelligence to date, but many researchers, such as Professor Kevin Warwick, head of cybernetics at Reading University, feel it is unfair. "A human is conscious in its own way, a bat is conscious in its own way and a machine is conscious in its own way," he told the Guardian.
"So the idea of replicating another being's consciousness or relying on a particular all-encompassing definition becomes irrelevant."
Warwick has become a minor media celebrity in Britain as the world's first cyborg. In 1998 he implanted a silicon chip in his arm, which relayed information on his surroundings to a computer. The computer then opened doors and switched on lights for him as he made his way round the university.
Warwick is one of several researchers trying to move away from the traditional computer programmer's approach of writing millions of lines of code which will cover any given situation.
They argue that the famous chess computer Deep Blue is a good example of the limitations of this technique.
The machine beat world champion Garry Kasparov through sheer number-crunching power, but would be stumped if you asked it to boil an egg.
The new breed of researchers aim to create machines which can learn like humans.
At the Massachusetts Institute of Technology in Cambridge, Massachusetts, Professor Rodney Brooks has made Kismet, a human-like robot whose expression changes depending on how people behave towards him.
Kismet is happy if you smile at him but becomes flustered with too much stimulation. The more human interaction he has, the more his circuits learn the emotional responses required.
In Israel, a neurolinguist, Anat Treister-Gordon, is working on the same principle with baby Hal, a program that enables computers to learn speech from scratch, the way humans do. Just like an ordinary mother, she talks every day to Hal - who calls her "mummy" - guiding him through a virtual-reality children's world.
Baby Hal talks about going to the park and enjoys bedtime stories like Are You My Mother? The research team says he recently passed an adaptation of the Turing test, convincing a language expert who read transcripts of the conversations that he was an 18-month-old toddler.
"I tried to call Hal 'it' at the beginning," Treister-Gordon told the BBC. "But as our communication deepened, I found it harder. Yes, I'm attached to him. You just can't help it."
What no one knows, of course, is what - if anything - baby Hal will make of all this as he gets "older".
As AI expert Hans Guesgen, of Auckland University, puts it, humans' frustrating quest for intelligent machines has many similarities with their attempts to teach language to apes.
"You can train a monkey to fetch bananas.
"But that doesn't mean the monkey will ever learn to be a doctor."