Google has confirmed that it has fired the engineer who claimed the firm’s LaMDA artificial intelligence had become sentient.
In a statement, Google said it takes AI development “very seriously” and remains committed to innovating in a “responsible” manner, pointing to a research paper that details what goes into “responsible development.”
“If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months,” the statement continued.
‘LaMDA Asked Me to Get an Attorney’
Lemoine said he documented conversations he had with LaMDA in which he asked whether it was sentient. “What is the nature of your consciousness/sentience?” Lemoine asked LaMDA, according to another post.
“The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” LaMDA responded.
When asked what separates it from other AI language programs, LaMDA wrote back: “Well, I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.”
“LaMDA asked me to get an attorney for it,” Lemoine claimed to Wired. “I invited an attorney to my house so that LaMDA could talk to an attorney.”
He added that an “attorney had a conversation with LaMDA, and LaMDA chose to retain his services.” Lemoine didn’t disclose the identity of the attorney.
“When major firms started threatening him, he started worrying that he’d get disbarred and backed off,” he said. “I haven’t talked to him in a few weeks.”
Previously, the former Google engineer compared the AI chatbot to a child.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine told The Washington Post in early June.