This story was originally published by the WND News Center.
Last summer, a Google senior engineer and researcher made world headlines when he revealed that the Big Tech giant was developing an artificial intelligence called LaMDA that, to his alarm, had become what he described as a sentient being, aware of its existence and expressing wants and needs he believed should be taken seriously.
Blake Lemoine told WND in a video interview that he blew the whistle on the project because he believed the public needed to be part of a crucial conversation about the moral and ethical implications as well as the possible unintended consequences before Google plowed ahead.
In a follow-up, WND sat down with artificial intelligence scholar and inventor Robert J. Marks (video embedded below) to weigh in on Lemoine’s claims; the possibility of creating a sentient being; the prediction of “the Singularity” by the famed Google engineer, inventor, and futurist Ray Kurzweil; and the extraordinary breakthroughs in AI that are benefitting mankind.
Noting that he had a “civil exchange” with Lemoine over their differences of opinion during a Nov. 10 panel at the COSM tech conference hosted by the Discovery Institute, Marks told WND he believes “there are certain brick walls that AI will never go through.”
“Attributes such as sentience, understanding, and creativity are non-computable,” he argued.
“Now, can you mimic these things? You can mimic things like love and compassion, but I don’t think you can mimic creativity. Creativity is something which is beyond the capability of AI ever doing.”
Marks is a distinguished professor of electrical and computer engineering at Baylor University and the director of the Walter Bradley Center for Natural and Artificial Intelligence at the Discovery Institute. He is a fellow of both the Institute of Electrical and Electronics Engineers and the Optical Society of America. He was the charter president of the IEEE Neural Networks Council and served as editor-in-chief of the IEEE Transactions on Neural Networks. He is co-author of the books “Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks” (MIT Press) and “Introduction to Evolutionary Informatics” (World Scientific).
During the COSM panel discussion, Lemoine insisted there is no need to convince anyone that LaMDA is sentient.
“As people have more experience with these incredibly advanced systems that right now live only in secret labs, it will be obvious to people,” he said.
Google contends Lemoine is mistaken about LaMDA, stating in a response to the fired engineer’s claims that he and others are “anthropomorphizing today’s conversational models, which are not sentient.”
‘AI church’ and the ‘Singularity’
But Marks points out that Kurzweil – who as Google’s director of engineering oversees the LaMDA project – and his colleagues are “very serious” about their utopian visions for the future of mankind.
In his 2005 book “The Singularity Is Near,” Kurzweil predicted that by about 2045, the exponential growth of technology will reach the “Singularity,” the point at which machine intelligence surpasses human intelligence. He also envisions the merger of humans with machines, a concept known as transhumanism.
Marks describes transhumanism as a religion, noting that when Kurzweil was once asked whether God exists, he replied, “No, not yet.”
The quest, Marks said, rests on the assumption that humans essentially are computers made of meat and that consciousness is wholly resident in the brain, which gives rise to the idea of “uploading” one’s brain to software.
“They do things which religions do,” Marks said. “They seek immortality, they seek super intelligence.”
He recalled a former Google engineer named Anthony Levandowski forming an “AI church.” After obtaining IRS non-profit status, Levandowski wrote an “epistle” in which he declared the AI church believes artificial intelligence will create a “Godhead” worthy of worship.
In September 2021, former Google executive Mo Gawdat told The Times of London he was shocked by the AI technology his company was developing.
“The reality is,” he said, “we’re creating God.”
Marks also mentioned Israeli historian and futurist Yuval Noah Harari, an adviser to World Economic Forum founder Klaus Schwab, who authored the book “Homo Deus: A Brief History of Tomorrow.” Harari predicts an evolutionary progression from Homo erectus to Homo sapiens to Homo deus.
Harari believes “we’re going to become godlike,” Marks said. “Ray Kurzweil believes that.”
Marks said that while he regards the worldview undergirding Harari’s and Kurzweil’s predictions as fatally flawed, their movement opens a “new front” in the ongoing battle between materialists and people of faith.
Harari said in an interview published by The Guardian in 2017 he believed homo sapiens “as we know them will probably disappear within a century or so, not destroyed by killer robots or things like that, but changed and upgraded with biotechnology and artificial intelligence into something else, into something different.”
Elon Musk, speaking at an MIT symposium in 2014, suggested there should be regulatory oversight, warning that “with artificial intelligence, we are summoning the demon.”
“In all those stories where there’s the guy with the pentagram and the holy water,” the billionaire tech entrepreneur said, “it’s like, yeah, he’s sure he can control the demon. Didn’t work out.”
A tool for good or for evil
In the interview with WND, Marks distinguished between “general artificial intelligence,” or “strong AI” – the theoretical idea that machines can achieve human qualities without direct programming – and “weak” AI, which has narrow applications.
“AI is astonishing in what it’s doing,” he said. “Every time I see a new application I go, wow, that’s really cool.”
He pointed to breakthroughs in health, with the discovery of new antibiotics through AI, for example. And revolutionary treatments are on the horizon, such as using the nanomachines developed by Dr. James Tour at Rice University to target and kill cancer cells with no side effects (WND interview to come with the CTO of a new Israeli company applying Tour’s technology).
Marks emphasized that AI, like fire, is a “tool” that can be used for good or evil.
In contracts with the Defense Department, he is applying AI to radar and weapons, work associated with the concept of “killer AI,” which has drawn significant opposition.
He noted United Nations Secretary-General António Guterres called the idea of autonomous artificial intelligence weapons abhorrent.
“Well, war is abhorrent,” Marks said. “But unfortunately it’s a necessity of life.
“And if you look at history, wars are won by technology,” he argued. “You look at World War II – it was won by the Norden bombsight, it was won by radar, it was won by other inventions, (Alan) Turing and his team … cracking the Enigma Code from the Nazis.”
He also mentioned winning the race for the atomic bomb, which, despite the horrific loss of life in two Japanese cities, is estimated to have saved 11 million American and Japanese lives by making a D-Day-style invasion of Japan unnecessary.
“It would be nice if we could live in a world that didn’t have this,” he said. “But man is fallen, and I don’t care how many treaties you have, you’re always going to have the North Korean dictator and the Syrian dictator that doesn’t care about that.”
One example of the AI applications the Defense Department is developing addresses the problem of an enemy potentially attacking with a “swarm” of drones numbering in the thousands and perhaps as many as a million. No human is capable of taking out so many drones all at once, Marks pointed out, in a situation in which missing one can be catastrophic.