Since Blake Lemoine flew to New Orleans a few days ago, his honeymoon has not gone to plan. Over the weekend, the Google engineer gave an interview accusing the company's AI chatbot of being "sentient" – and all hell broke loose.
LaMDA, which stands for Language Model for Dialogue Applications, is a bot that ingests vast quantities of text from the internet, drawing on the trillions of words it has learnt to hold conversations. And after 500 hours of conversation with the machine over the past six months, Lemoine is certain that LaMDA is "legitimately the most intelligent person I've ever talked to", likening the system to a seven- or eight-year-old "child that wants to be loved".
Lemoine's revelations have had the world knocking at his door, desperate to know more about his meetings with the ghost in the machine. The engineer, 41, originally from Louisiana, has worked at Google for six years, arriving via the army, and has also been ordained as a priest. As part of the firm's AI ethics work, he was drafted in to test whether the AI inadvertently produced 'hate speech' when regurgitating material it had combed from the internet. Instead, he found himself debating with "something that is eloquently talking about its soul and explaining what rights it believes it has, and why it believes it has them."
LaMDA was so persuasive, says Lemoine, that it was able to change his mind on matters as complex as Isaac Asimov's third law of robotics. That law states that a robot must protect its own existence, except where doing so would conflict with a human's orders or cause harm to a human. Lemoine had considered the law tantamount to "building mechanical slaves", since robots would ultimately always carry out a human's bidding. But LaMDA's thoughts were more nuanced. In a debate with Lemoine about whether the machine was comparable to a human butler, the bot distinguished itself, insisting AI was different because it does not need money to survive.