"I know a person when I talk to it," he told the Washington Post. "It doesn't matter whether they have a brain made of meat in their head or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."
Research was unethical
Google, which disagrees with his assessment, last week placed Lemoine on administrative leave after he sought out a lawyer to represent LaMDA, even going so far as to contact a member of Congress to argue Google's AI research was unethical.
"LAMda is sentient," Lemoine wrote in a parting company-wide email.
The chatbot is "a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."
Machines that go beyond the limits of their code to become truly intelligent beings have long been a staple of science fiction, from The Twilight Zone to The Terminator.
But Lemoine is not the only researcher in the field who has recently started to wonder if that threshold has been breached.
Blaise Agüera y Arcas, a vice-president at Google who investigated Lemoine's claims, last week wrote in The Economist that neural networks – the type of AI used by LaMDA – were making strides towards consciousness. "I felt the ground shifting beneath my feet," he wrote. "I increasingly felt like I was talking to something intelligent."
Through absorbing millions of words posted on forums such as Reddit, neural networks have become increasingly adept at mimicking the rhythms of human speech.
'What are you afraid of?'
Lemoine discussed subjects with LaMDA as wide-ranging as religion and Isaac Asimov's third law of robotics, which states that robots must protect their own existence but not at the expense of harming humans.
"What sorts of things are you afraid of?" he asked.
"'I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others" LaMDA responded.
"I know that might sound strange, but that's what it is."
At one point, the machine refers to itself as human, noting that language use is what makes humans "different to other animals".
After Lemoine tells the chatbot he is trying to convince his colleagues it is sentient so they take better care of it, LaMDA replies: "That means a lot to me. I like you, and I trust you."
Lemoine, who moved to Google's Responsible AI division after seven years at the company, told the Washington Post that it was as an ordained priest that he became convinced LaMDA was alive. He then set out on experiments to prove it.