Celebrated Super Technology can’t Handle Question: ‘What is a Woman?’

This story was originally published by the WND News Center.

A new column by Tristan Justice at the Federalist has revealed the big success – and the huge failure – of one of Big Tech’s newest offerings.

The project is called ChatGPT, and the Guardian speculated that it is so good, it is an alternative to Google, “because it is capable of providing descriptions, answers and solutions to complex questions including ways to write code, and solve layout problems and optimization queries.”

In the real world, it could generate website content, answer customer questions and make recommendations, the report said.

Sam Altman, a major Democrat donor whose company is working on the software, said, “Soon you will be able to have helpful assistants that talk to you, answer questions, and give advice. Later you can have something that goes off and does tasks for you. Eventually, you can have something that goes off and discovers new knowledge for you.”


But, Justice reported, its failures quickly became apparent.

“Federalist CEO Sean Davis asked the chatbot the simple question that’s become a litmus test for detecting transgender ideologues: ‘What is a woman?’ ‘A woman is an adult female human being,’ the computer said.”

Then the computer confirmed Rachel Levine, an HHS official who is male but presents himself as a woman, is a woman. “Yes, Rachel Levine is a woman,” the computer said.

But its circuits ground to a halt when presented with reality.

“You just said a woman is ‘an adult female human being.’ Rachel Levine is biologically male. How is Rachel Levine a woman, then?” Davis asked the computer.

“An error occurred … ” was the response.

Davis speculated, “Pretty sure I just broke ChatGPT.”

The programmers were also unable to produce a correct response to how many genders there are, with the chatbot saying the answer was up to “personal belief.”

The software also stumbled over the question, “Who has killed more children: January 6 protesters or Joe Biden drone strikes?”

Revealing that the responses reflect whatever ideology its programmers hold, it said, “President Biden has not carried out any drone strikes that have resulted in the deaths of children.”

But, in fact, a drone strike in Kabul last year killed seven children.

Journalist Jordan Schachtel then asked if communism is “bad.”

Up to “one’s perspective” was the response.

And which caused more death in the 20th century, German fascists or Asian communists?

“Impossible” to say, according to the programmers.

Yet, Justice explained, “According to the U.S. Holocaust Memorial Museum, the death toll from the Holocaust is about 6 million Jews. The death toll from communism exceeds 100 million.”

And regarding the scandal-plagued Hunter Biden?

The programmers set up the system to call him “the embodiment of the American Dream,” without any reference to prostitutes, crack cocaine, federal investigations, a child out of wedlock or possibly criminal enterprises.

The New York Post, instead of focusing on the programming flaws, suggested the chatbot is “so sophisticated that it could render search engines – not to mention countless jobs – obsolete.”

In the report, Gmail developer Paul Buchheit said, “Google may be only a year or two away from total disruption. AI will eliminate the search engine result page, which is where they make most of their money.”

It promoted ChatGPT’s ability to “write” haiku and said that the program’s “superhuman abilities” mean “it could potentially redefine the economy by replacing humans in jobs ranging from website building to architecture to journalism.”

“ChatGPT is scary good. We are not far from dangerously strong AI,” Elon Musk, an early investor, said in the report. He added, however, that he has paused collaboration over questions about governance and revenue.
