This story was originally published by the WND News Center.
Artificial intelligence programs have swamped the internet in recent months, creating an atmosphere of fear among those whose names are dragged through the mud by the software.
Constitutional expert Jonathan Turley recently described his own experience with damaging – and totally incorrect – information about him that was being spread online.
However, now there's a new threat, only this time it's against the AI itself.
A report in the Washington Times explains that, according to legal experts, the owners of the software could be in trouble when their products misquote or defame people.
"It's definitely unchartered waters," commented Catherine Sharkey, of New York University's School of law.
"You have people interacting with Machines. That is very new. How does publication work in that framework?"
Experts say it is unclear, for now, whether a case could be brought under defamation law or under product liability law.
The Times reported that Brian Hood, an Australian mayor, is already threatening legal action against OpenAI over ChatGPT's false claim that he was guilty in a foreign bribery scandal.
His lawyers, the report said, have written to OpenAI, demanding that the company behind the software correct the damaging statements.
The Times also cited Turley's experience; he was told the "bot" was spreading "false" information about him.
"I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChapGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper," he wrote.
Eugene Volokh, a law professor at UCLA, told the Washington Times that OpenAI could face a defamation claim. He said that process already has begun with the mayor's demand for a correction.
"That is how you show actual malice. They keep distributing a particular statement even though they know it is false. They allow their software to keep distributing a particular statement even though they know they’re false," he said.
Sharkey said a product liability claim might be more appropriate, suggesting those who are injured "could potentially go after companies for faulty or negligent designs that result in algorithms putting out damaging information, impugning reputation," the Times reported.