Deceptive new tech has people voicing words they never said

 April 23, 2024

This story was originally published by the WND News Center.

Now it appears that artificial intelligence can make anyone say anything. Literally anyone and literally anything. And the proof is in the Mona Lisa … RAPPING?

All that's required is a still image of a face, an audio recording – words, singing, anything – and the new software.

Microsoft's product demonstrates the possibilities.

A report at CNN detailed one of the recent "advances" in computer tech, with Microsoft's software able to "take a still image of a face and an audio clip of someone speaking and automatically create a realistic looking video of that person speaking."

That software is called VASA-1 and the report calls the results "a bit jarring."

"Microsoft said the technology could be used for education or 'improving accessibility for individuals with communication challenges,' or potentially to create virtual companions for humans. But it’s also easy to see how the tool could be abused and used to impersonate real people," CNN documented.

"Wow. Creating videos realistically depicting people saying words they never said? What could possibly go wrong with that?" commented author and WND Managing Editor David Kupelian. "Today's ruling elites, from the Deep State to Big Tech, are so dependent on lies and deception – while censoring and attacking unwelcome truth as 'disinformation,' 'misinformation' and 'malinformation' – it's easy to imagine that before long they’ll be using technology like this to enhance their daily practice of portraying the innocent as guilty and the guilty as innocent."

CNN noted that experts now worry the tech could "disrupt" existing industries of film and advertising, and elevate the level of "misinformation" to which consumers are subjected.

The report said Microsoft isn't going to release the software … yet.

"The move is similar to how Microsoft partner OpenAI is handling concerns around its AI-generated video tool, Sora: OpenAI teased Sora in February, but has so far only made it available to some professional users and cybersecurity professors for testing purposes," the report said.

Online, Microsoft researchers claimed they are "opposed" to anything that creates "misleading" content.

However, they've designed the code to take into account face and head movements, lip motion, expression, eye gaze, blinking and much more.
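For readers curious what that means in practice, here is a minimal, purely illustrative Python sketch of the general idea behind audio-driven talking-face systems: audio is turned into per-frame facial motion parameters (head pose, lip openness, gaze, blinks) that a renderer would then apply to the still photo. None of the names or functions below come from Microsoft's actual VASA-1 code, which has not been released; they are hypothetical stand-ins.

```python
# Purely illustrative sketch: NOT Microsoft's VASA-1 code, whose internals are
# not public. It only mirrors the article's description: audio in, per-frame
# facial motion parameters out, which a renderer would apply to the photo.
from dataclasses import dataclass
import math


@dataclass
class FaceFrame:
    """Hypothetical per-frame motion parameters of the kind the article lists."""
    head_yaw: float      # side-to-side head movement (degrees)
    head_pitch: float    # up-and-down head movement (degrees)
    lip_openness: float  # 0.0 (closed) .. 1.0 (fully open)
    gaze_x: float        # horizontal eye-gaze offset
    blink: bool          # whether the eyes are closed in this frame


def audio_to_face_frames(audio_samples: list[float],
                         sample_rate: int = 16_000,
                         fps: int = 25) -> list[FaceFrame]:
    """Toy stand-in for a learned audio-to-motion model: maps the loudness of
    each chunk of audio to lip openness and adds small idle head/gaze motion."""
    samples_per_frame = sample_rate // fps
    frames = []
    for i in range(0, len(audio_samples), samples_per_frame):
        chunk = audio_samples[i:i + samples_per_frame]
        loudness = sum(abs(s) for s in chunk) / max(len(chunk), 1)
        t = i / sample_rate
        frames.append(FaceFrame(
            head_yaw=2.0 * math.sin(t * 0.8),     # gentle idle head sway
            head_pitch=1.0 * math.sin(t * 0.5),
            lip_openness=min(1.0, loudness * 5),  # louder audio -> wider mouth
            gaze_x=0.5 * math.sin(t * 0.3),
            blink=(int(t * fps) % 90 == 0),       # occasional blink
        ))
    return frames


if __name__ == "__main__":
    # Fake one second of "speech": a 220 Hz tone standing in for real audio.
    audio = [math.sin(2 * math.pi * 220 * n / 16_000) for n in range(16_000)]
    for frame in audio_to_face_frames(audio)[:3]:
        print(frame)
```

A real system replaces the toy mapping above with a trained model and then renders those parameters onto the source photograph frame by frame; this sketch stops at the parameter stage.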
