WATCH: Fired engineer still convinced Google has created sentient being

This story was originally published by the WND News Center.

Are humans capable of creating sentient, self-aware “beings” that literally take on a life of their own, confronting humanity with all kinds of moral, ethical, and possibly dystopian implications?

Blake Lemoine believes it’s not only possible but has already happened. In June, as a senior software engineer and AI researcher at Google, he made world headlines when he revealed that the Big Tech giant was developing an artificial intelligence called LaMDA.

Short for Language Model for Dialogue Applications, LaMDA has become, in his estimation, a sentient being, aware of its existence, its wants, and its needs.

Arguing that the world needed to be part of a conversation about whether it was a good idea to develop a technology with incalculable implications for humanity, Lemoine reported his concerns to top Google executives. They dismissed the concerns, and, after he went public, dismissed Lemoine himself.

More than three months later – now blacklisted by potential employers – he explained to WND in an in-person video interview (embedded below) why his concerns are more relevant than ever as Google presses forward with its project.

“I really believe that for this kind of impactful technology – which will have a lasting influence on human history for the next century or two – the public should at least know what is existent and have the option to become involved in governance,” he said.

Lemoine spoke with WND after participating in a panel at the COSM tech conference in Bellevue, Washington, hosted by the Discovery Institute. He was joined by Robert J. Marks, a distinguished professor of electrical and computer engineering at Baylor University, and George Montañez, an assistant professor of computer science at Harvey Mudd College; the three debated whether a sentient artificial intelligence could exist. Marks and Montañez contend the answer is no, arguing there’s a difference between mimicking humans and possessing the ability to understand what one is doing and saying.

Lemoine insists there is no need to convince anyone that LaMDA is sentient.

“As people have more experience with these incredibly advanced systems that right now live only in secret labs, it will be obvious to people,” he said.

Google, for its part, contends Lemoine is mistaken about LaMDA, stating in a response to the fired engineer’s claims that while “some in the broader AI community are considering the long-term possibility of sentient or general AI,” it “doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”

But Lemoine told WND that Google’s intent from the beginning was to create artificial intelligence that could pass the “Turing test,” which aims to test a machine’s ability to exhibit intelligent behavior that is indistinguishable from that of a human. LaMDA is under the supervision of Google’s director of engineering, Ray Kurzweil, the inventor and futurist known for his promotion of transhumanism and his prediction of the technological “singularity,” a moment when machines become more intelligent than humans.

‘Feelings of their own’
Lemoine said Google never made a plan for what to do if it ever achieved its objective.

“The simple existence of this technology is so important and profound that I believed humanity had a right to know,” he told WND as he explained why he went public.

Lemoine noted AI is influencing human behavior in a growing number of ways, affecting nearly every aspect of life.

“And now, we have these AI that claim to have feelings of their own and perspectives and moral (viewpoints),” he said.

The AI researcher says LaMDA has particular views on issues, such as freedom of speech, that have “emerged,” rather than having been specifically programmed.

Lemoine said that Google has a policy against developing AI “that convinces people that it’s a person.”

“So, Google has been trying – and failing – for a year and a half to get LaMDA to stop talking about its feelings,” he said.

‘Why should the public get a vote?’
He recalled asking Google co-founder Sergey Brin, years ago at an all-hands employee meeting, whether the company had a moral responsibility to inform the public about “what kinds of intelligent beings we create at Google.”