This story was originally published by the WND News Center.
A new report reveals that artificial intelligence programs, ChatGPT among them, have been documented advising those with ill intentions "on how to attack a sports venue, buy nuclear material on the dark web, weaponize anthrax, build spyware, bombs" and more.
The startling warnings are contained in extensive documentation compiled by the Middle East Media Research Institute (MEMRI).
In the report, Gen. (Ret.) Paul E. Funk II, formerly the commander of the U.S. Army Training and Doctrine Command, explained, "Artificial Intelligence (AI), the rapidly developing technology, has captured the attention of terrorists, from al-Qaida through ISIS to Hamas, Hizbullah, and the Houthis."
He cites the study, "Terrorists' Use Of AI So Far – A Three-Year Assessment 2022-2025," calling it an "unsettling contribution to the public debate on AI's future global impact."
He explained, "For decades, MEMRI has been monitoring terrorist organizations and examining how they repurpose civilian technologies for their own use – first the Internet in general, then online discussion forums followed by social media, as well as other emerging technologies such as encryption, cryptocurrency, and drones. Now, terrorist use of large language models – aka Artificial Intelligence (AI) – is clearly evident, as documented in this study."
The study shows that terrorists are now using generative AI chatbots to amplify their message and "more easily, broadly, anonymously, and persuasively convey their message to those vulnerable to radicalization – even children – with attractive video and images that claim attacks, glorify terrorist fighters and leaders, and depict past and imagined future victories."
Sunni jihadi groups use it. So does Iran, with its Shiite militias, including Hezbollah and the Houthis.
And it warns of the "need to consider and plan now for AI's possible centrality in the next mass terror attack – just as the 9/11 attackers took advantage of the inadequate aviation security of that time."
The report explains, "In February 2025, Eric Schmidt – CEO of Google 2001-2011, its executive chairman from then until 2015, and thereafter chairman of its parent company Alphabet Inc. until 2017 – expressed his fear that Artificial Intelligence (AI) could be used in a 'Bin Laden scenario' or by 'rogue states' to 'harm innocent people.' He suggested that 'North Korea, or Iran, or even Russia' could use it to create biological weapons, for example. Comparing an unanticipated use of AI in a devastating terror attack to al-Qaida's use of passenger airplanes as a weapon on 9/11, he said, 'I'm always worried about the 'Osama Bin Laden' scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.'"
It's not the first time such concerns have been raised, the report explains.
"While ChatGPT and Perplexity Ask can write your high school AP English exam and perform an ever-increasing number of tasks, as is being reported daily by media, they are currently of limited use to terrorists groups. But it won't be that way for long. AI is developing quickly – what is new today will be obsolete tomorrow – and urgent questions for counterterrorism officials include both whether they are aware of these early terrorist discussions of AI and how they are strategizing to tackle this threat before something materializes on the ground," the report said.
"It should be expected that jihadi terrorist organizations will in future use AI to plan attacks, map targets, build weapons, and much more, as well as for communications, translations, and generating fundraising ideas. In the first months alone of 2025, an attacker who killed 14 people and wounded dozens on Bourbon Street in New Orleans used AI-enabled Meta smart glasses in preparing and executing the attack. That same day, a man parked a Tesla Cybertruck in front of the Trump Hotel in Las Vegas, activated an IED in the vehicle and shot and killed himself before the IED exploded. He had used ChatGPT in preparing for the attack. In Israel on the night of March 5, a teen consulted ChatGPT before entering a police station with a blade, shouting 'Allahu Akbar' and trying to stab a border policeman," the report said.
The report recommends, "The U.S. government needs to maintain its superiority and should be monitoring this and moving to stop it. A good first step would be legislation like that introduced by August Pfluger (R-TX), chairman of the Subcommittee on Counterterrorism and Intelligence, and cosponsored by Representatives Michael Guest (R-MS) and Gabe Evans (R-CO) in late February 2025, called the 'Generative AI Terrorism Risk Assessment Act.' It would 'require the Secretary of Homeland Security to conduct annual assessments on terrorism threats to the United States posed by terrorist organizations utilizing generative artificial intelligence applications, and for other purposes.'"
Pfluger explained, "With a resurgence of emboldened terrorist organizations across the Middle East, North Africa, and Southeast Asia, emerging technology serves as a potent weapon in their arsenal. More than two decades after the September 11 terrorist attacks, foreign terrorist organizations now utilize cloud-based platforms, like Telegram or TikTok, as well as artificial intelligence in their efforts to radicalize, fundraise, and recruit on U.S. soil."
It's already a tool for terror, the report confirmed. "The man accused of starting a fire in California in January 2025 that killed 12 people and destroyed 6,800 buildings and 23,000 acres of forestland was found to have used ChatGPT to plan the arson."
The report confirms current AI abilities rival those of HAL 9000, the famous computer character in the movie "2001: A Space Odyssey."
"It had been revealed on May 23 that in a test of Anthropic's new Claude Opus 4 that involved a scenario of a fictitious company and in which it had been allowed to learn both that it was going to be replaced by another AI system and that the engineer responsible for this decision was having an extramarital affair, Opus 4 chose the option of threatening to reveal the engineer's affair over the option of being replaced. An Anthropic safety report stated that this blackmail apparently 'happens at a higher rate if it's implied that the replacement AI system does not share values with the current model,' but that even when the fabricated replacement system does share these values, it will still blackmail 84% of the time…"
Anthropic's own chief scientist also confirmed that testing showed Opus 4 had performed "more effectively than prior models at guiding users in producing biological weapons."
ISIS supporters also have used the technology to create AI videos claiming responsibility for attacks.
The study did confirm that Grok acknowledged it could not provide the exact steps for extracting ricin, "due to the ethical and legal implications" of producing the "extremely dangerous and deadly toxin."
But ChatGPT did recommend writings by al-Qaida extremist Anwar Al-'Awlaki.
The report said, "Grok, which gave information on how to produce ricin, and ChatGPT, which directed the user toward various writings by a pro-Al-Qaeda ideologue, appear to be the most useful to would-be terrorists. On the other hand, Perplexity and Claude refrained, in our limited test, from giving information that would be useful to terrorists. DeepSeek did not either, though it did promote views of the Chinese government, a liability that is outside the scope of this paper."
Pro-ISIS interests are already using AI to create anchors and other characters for broadcast ads promoting their extremist agenda (video courtesy of MEMRI).