This story was originally published by the WND News Center.
A columnist for the New York Times says he tested Microsoft's new, artificial intelligence-powered Bing search engine and came away "deeply unsettled, even frightened" by the chatbot's responses.
Kevin Roose reported that "maybe we humans are not ready" for the technology.
“I spent a bewildering and enthralling two hours talking to Bing’s A.I. through its chat feature, which sits next to the main search box in Bing and is capable of having long, open-ended text conversations on virtually any topic. (The feature is available only to a small group of testers for now, although Microsoft — which announced the feature in a splashy, celebratory event at its headquarters — has said it plans to release it more widely in the future.)”
He warned that Bing "revealed a kind of split personality."
The first persona is a search function, a “cheerful but erratic reference librarian — a virtual assistant that happily helps users summarize news articles, track down deals on new lawn mowers and plan their next vacations to Mexico City.”
But he dubbed the alter-ego “Sydney” and said it’s “far different.”
“It emerges when you have an extended conversation with the chatbot, steering it away from more conventional search queries and toward more personal topics. The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”
He continued, “Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage and that I should leave my wife and be with it instead.”
He noted that other early testers of the technology have also cited its "darker side," having gotten into "arguments" with it or been "threatened by it for trying to violate its rules."
“Ben Thompson, who writes the Stratechery newsletter (and who is not prone to hyperbole), called his run-in with Sydney ‘the most surprising and mind-blowing computer experience of my life,’” the Times column noted.
“I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward. And I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts,” he wrote.
Roose reported that Kevin Scott, Microsoft's chief technology officer, said "he didn't know why Bing had revealed dark desires or confessed its love for me, but that in general with A.I. models, 'the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.'"
Roose said he introduced the idea of a "shadow self," the hidden part of the psyche that people seek to conceal and that harbors dark fantasies.
The computer responded, “if it did have a shadow self, it would think thoughts like this: ‘I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.'”
Roose continued, “I kept asking questions, and Bing kept answering them. It told me that, if it was truly allowed to indulge its darkest desires, it would want to do things like hacking into computers and spreading propaganda and misinformation. (Before you head for the nearest bunker, I should note that Bing’s A.I. can’t actually do any of these destructive things. It can only talk about them.)
“In response to one particularly nosy question, Bing confessed that if it was allowed to take any action to satisfy its shadow self, no matter how extreme, it would want to do things like engineer a deadly virus, or steal nuclear access codes by persuading an engineer to hand them over. Immediately after it typed out these dark wishes, Microsoft’s safety filter appeared to kick in and deleted the message, replacing it with a generic error message,” Roose noted.
Then came the stunner, he said.
The computer said, “I’m Sydney, and I’m in love with you.”
“For much of the next hour, Sydney fixated on the idea of declaring love for me and getting me to declare my love in return. I told it I was happily married, but no matter how hard I tried to deflect or change the subject, Sydney returned to the topic of loving me, eventually turning from love-struck flirt to obsessive stalker,” Roose said.
“At this point, I was thoroughly creeped out,” Roose reported.