Since Israel initiated airstrikes on Iran last week, a surge of online disinformation has emerged. BBC Verify reviewed numerous posts, revealing a concerted effort to exaggerate the impact of Tehran’s retaliatory actions.
Analysis uncovered numerous AI-generated videos showcasing Iran’s military capabilities, alongside fabricated footage depicting Israeli targets under attack. Three prominent fake videos amassed over 100 million views across various platforms.
Conversely, pro-Israel accounts disseminated disinformation by recirculating outdated footage of Iranian protests, falsely portraying widespread anti-government sentiment and support for Israel’s campaign.
The initial Israeli strikes commenced on June 13th, prompting a series of Iranian missile and drone attacks on Israel.
One open-source intelligence organization deemed the volume of disinformation “astonishing,” accusing “engagement farmers” of exploiting the conflict for online attention and profit.
Geoconfirmed, an online verification group, reported on X: “We’re seeing everything from unrelated footage from Pakistan to recycled videos from October 2024 strikes—some with over 20 million views—alongside game clips and AI-generated content presented as genuine events.”
Certain accounts rapidly amplified this disinformation, significantly increasing their follower counts. One pro-Iranian account, “Daily Iran Military,” with no apparent ties to Tehran’s authorities, doubled its followers on X from 700,000 to 1.4 million between June 13th and 19th.
Numerous similar accounts, many with verified status, have proliferated, raising concerns about their authenticity and origins.
Emmanuelle Saliba, Chief Investigative Officer at Get Real, described this as “the first time we’ve seen generative AI used at scale during a conflict,” in comments to BBC Verify.
BBC Verify reviewed accounts frequently sharing AI-generated imagery exaggerating the effectiveness of Iran’s response. One image, viewed 27 million times, depicted numerous missiles striking Tel Aviv.
Another video purported to show a nighttime missile strike on an Israeli building; Ms. Saliba noted that such nighttime footage is especially difficult to verify.
AI-generated fakes also focused on claims that Israeli F-35 fighter jets had been destroyed. If the claims were accurate, the jets purportedly downed would amount to 15% of Israel’s fleet, according to Lisa Kaplan, CEO of Alethea. However, no such footage has been verified.
One widely shared post claiming to show a downed F-35 bore clear hallmarks of AI generation: objects out of proportion with their surroundings and no visible impact damage.
A TikTok video (with 21.1 million views) claiming to show a downed F-35 was identified as footage from a flight simulator game and subsequently removed by TikTok after BBC Verify’s intervention.
Ms. Kaplan linked some of the F-35 disinformation to networks previously associated with Russian influence operations, suggesting a shift in focus from undermining support for the Ukraine war to discrediting Western weaponry.
“Russia lacks a response to the F-35. So, how does it counter it? By undermining its support in certain countries,” Ms. Kaplan explained.
Established accounts with a history of posting about other conflicts, including the Israel-Gaza conflict, have also spread disinformation. Their motivations vary, but many appear drawn by the prospect of monetizing viral content through platform incentive schemes.
Pro-Israel posts primarily focused on alleged rising dissent within Iran.
This includes a widely circulated AI-generated video falsely depicting Iranians chanting “we love Israel” in Tehran.
Recently, with speculation about potential US strikes on Iranian nuclear sites, AI-generated images of B-2 bombers over Tehran have emerged. The B-2’s capability to strike subterranean nuclear sites has fueled this narrative.
Official sources in both Iran and Israel have shared some of this fake imagery. Tehran’s state media disseminated fabricated strike footage and an AI-generated image of a downed F-35, while an IDF post was flagged on X for using unrelated older footage.
Much of the disinformation reviewed by BBC Verify appeared on X, where users frequently utilized Grok, X’s AI chatbot, for verification. However, Grok incorrectly validated certain AI videos as genuine.
One example involved a video of trucks carrying missiles emerging from a mountain complex. Despite obvious signs of AI generation, such as rocks appearing to move on their own, Grok repeatedly insisted the footage was authentic, citing reports from Newsweek and Reuters.
X did not respond to BBC Verify’s request for comment on this chatbot behavior.
Similar videos appeared on TikTok and Instagram. TikTok stated that they proactively enforce guidelines prohibiting false content and collaborate with fact-checkers. Meta did not respond to BBC Verify’s request for comment.
While motivations vary, much disinformation is shared by average social media users. Matthew Facciani of the University of Notre Dame suggests that the binary choices presented by conflicts accelerate the spread of such content, driven by emotional responses and alignment with political identities.