AI-generated photos, voice clones, and deepfake videos are becoming some of the most effective and dangerous tools of "information manipulation and fraud" across the world — from battlefields to election campaigns.
According to research by AI threat detection platform Sensity AI, during the Russia-Ukraine war, Ukrainian politicians and soldiers have been portrayed in deepfake videos as either "calling for surrender" or "confessing to war crimes."
In Taiwan, AI-generated content originating from China spreads on social media during election periods, featuring fake speeches, fabricated scandals, or staged gaffes by politicians, and typically targets opposition candidates.
In Japan, AI-generated fake images of disasters, including fictional nuclear accidents and floods, have been used to spread panic among the public.
These incidents highlight the growing risks of AI tools being used recklessly and without oversight as "next-generation weapons."
Francesco Cavalli, co-founder of Sensity AI, commented on the power and dangers of AI-generated visuals, videos, and audio.
He warned that as AI tools advance, it is becoming harder to distinguish real from fake. Signs like inconsistent lighting, overly smooth skin, unnatural blinking, or strange mouth movements can help spot fake visuals — but not always.
"AI-generated content, especially in low-resolution or compressed formats, can escape human detection," Cavalli said. "AI-generated voices are now practically impossible for people to recognize as fake."
AI-generated voice clones are now considered among the highest risks. In one case, a scammer impersonating U.S. Secretary of State Marco Rubio with AI created a fake account on the messaging app Signal and contacted foreign ministers from three different countries, a member of Congress, and a state governor.
Because these voices are easy to create and hard to detect quickly, they're being used in phone scams to make victims believe they're speaking with someone they trust.
As tools like Midjourney and Runway improve, distinguishing fake from real becomes harder even for trained eyes.
Cavalli said, "We have documented AI-generated media used in election interference, fake press conferences to promote fraud platforms, and war footage designed to manipulate public opinion."
These visuals are usually spread via fake news sites and social media ads.
Referring to the examples involving China, Japan, and Taiwan, he said: "In all these examples, AI-driven propaganda is not just a theoretical threat. It's a global weapon actively used to manipulate perception, destabilize societies, and exert soft power."
There are increasing calls for tech platforms to take stronger action against AI-powered visual disinformation.
While some platforms are focusing on the issue, most lack "robust forensic systems capable of detecting synthetic media at scale," Cavalli said.
"Some companies knowingly profit from fraud campaigns and only act under external pressure," he added. "We need stronger collaboration between detection tech providers, platforms, and regulators."
Cavalli believes labeling AI-generated content isn't enough, explaining:
"AI-generated content isn't inherently harmful — it depends on how and where it's used. For example, platforms that allow deepfake scam ads without oversight should face heavy penalties. Instead of relying on user complaints, platforms must take proactive measures."
Sensity has developed a four-step analysis process covering facial manipulation detection, identification of AI-generated imagery, voice imitation analysis, and forensic auditing; its findings can be used in official investigations and court proceedings.
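To give a concrete sense of how such a screening workflow is structured, the sketch below chains four stages named after the steps described above. It is a minimal, hypothetical Python illustration with stub scoring functions and an invented Evidence record; it is not Sensity's actual code, API, or detection logic, and real detectors would wrap trained models rather than fixed placeholders.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Evidence:
    # Hypothetical record carried through the pipeline: one media file,
    # a score per stage, and a final flag set by the audit step.
    media_path: str
    scores: dict = field(default_factory=dict)
    flagged: bool = False

def detect_face_manipulation(ev: Evidence) -> Evidence:
    # Placeholder: a real detector would score face-swap or warping artifacts.
    ev.scores["face_manipulation"] = 0.0
    return ev

def detect_synthetic_image(ev: Evidence) -> Evidence:
    # Placeholder: a real detector would look for generator fingerprints.
    ev.scores["synthetic_image"] = 0.0
    return ev

def detect_voice_clone(ev: Evidence) -> Evidence:
    # Placeholder: a real detector would analyze the audio track for cloning cues.
    ev.scores["voice_clone"] = 0.0
    return ev

def forensic_audit(ev: Evidence, threshold: float = 0.5) -> Evidence:
    # Final step: aggregate stage scores into a report an investigator can review.
    ev.flagged = any(score >= threshold for score in ev.scores.values())
    return ev

PIPELINE: list[Callable[[Evidence], Evidence]] = [
    detect_face_manipulation,
    detect_synthetic_image,
    detect_voice_clone,
    forensic_audit,
]

def screen(media_path: str) -> Evidence:
    # Run every stage in order over a single piece of media.
    ev = Evidence(media_path)
    for stage in PIPELINE:
        ev = stage(ev)
    return ev

if __name__ == "__main__":
    report = screen("suspect_clip.mp4")  # hypothetical file name
    print(report.media_path, report.scores, "flagged:", report.flagged)
```

The point of the sketch is the ordering: cheap per-modality checks run first and accumulate scores, and only the final audit step turns them into a decision that can be documented for investigators.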
Cavalli emphasized the growing distrust in visual media and called for more public awareness.
He said people must be educated about the risks of AI-generated visuals and voices, and companies, journalists, and researchers should be equipped with forensic tools.
"Seeing will no longer mean believing," Cavalli concluded.