The U.S.-Israeli war on Iran is just over two weeks old, and already the world is awash in deepfakes. The New York Times reports that a “torrent of fake videos and images generated by artificial intelligence have overrun social networks during the first weeks of the war in Iran.” Deepfakes on X, Facebook, and other platforms, especially TikTok, have garnered millions of views. The fake videos include massive explosions in Tel Aviv, successful missile attacks on U.S. warships, Israelis bemoaning their losses, and other images purporting to show how Iran is delivering pain to its enemies. Many of the videos have a Hollywood feel to them, with massive explosions and sonic booms. Other videos are more muted, such as one showing girls playing just before the U.S. attack that accidentally struck the Shajarah Tayyebeh elementary school, killing at least 175 people, most of them children. The attack was real, but the video was fake.
According to a recent report by Cyabra, a company that tracks influence campaigns, Iran is behind the deepfake effort. Iran’s efforts are designed to sway audiences at home and abroad, convincing those populations that Iran is striking back while undermining the legitimacy of the U.S. and Israeli operations. The best response involves a coordinated effort between governments and private companies, working together to detect, debunk, and remove deepfakes. Even then, however, deepfakes are likely to spread widely and shape broader perceptions of the war.
Deepfakes should no longer be a surprise when war breaks out. In March 2022, just after Russia began its full-scale invasion, we saw deepfakes depicting Ukrainian President Volodymyr Zelensky asking his soldiers to lay down their weapons. In late 2023, during the Israel-Hamas war, deepfake videos emerged of babies crying in a debris field, along with imagery of supposed Israeli military operations. In a report to Congress in November 2025, the U.S.-China Economic and Security Review Commission asserted that China had used the India-Pakistan war earlier in May to circulate fake imagery of downed French-made Rafale jets to promote their own J-35 fighters. The ongoing conflict with Iran is just the latest instance of how war and deepfakes go together.
Indeed, Iran is no stranger to cyber-influence operations. The Handala hacking group, sometimes referred to as Void Manticore, is reportedly part of Iran’s Ministry of Intelligence and Security. It and its sister groups within the ministry are reported to have quietly compromised various Israeli, U.S., and allied corporate and government networks, installing or discovering backdoors along the way. In addition to carrying out traditional cyberattacks, Handala is allegedly responsible for releasing deepfakes of Israeli Prime Minister Benjamin Netanyahu and former Prime Minister Naftali Bennett.
Tehran’s current disinformation campaign probably has several audiences. Iran seeks to create the perception that it is winning—or at least enduring and inflicting pain on its enemies—in the hope that this perception will enable it to outlast the United States and Israel.
First, Iran seeks to bolster morale at home. Before the war began, the regime faced a legitimacy crisis, with the Iranian economy cratering and the regime gunning down thousands of peaceful protesters in the streets. Military humiliation at the hands of Iran’s two greatest enemies would only worsen this crisis, but images of Tehran hitting back while its enemies shake in fear help counter the impression of failure. Playing up tragedies such as the U.S. strike on the elementary school also highlights the perceived barbarity of the U.S. assault and the need for Iran to resist.
Second, Tehran seeks to win over audiences around the world as part of its broader strategy to widen the war to increase pressure on the United States and Israel. If audiences in Europe, Asia, and the greater Middle East see Iran as striking back effectively and believe the U.S. war is unjust and destabilizing (a perception that is already widespread), their governments will be less likely to support the war and may put pressure on Washington to end it.
Third, Iran seeks to undermine U.S. and Israeli morale. Although the United States, along with Israel, is inflicting massive damage on Iran, most Americans already oppose the war. Images showing the death and destruction of U.S. forces and the human cost of U.S. mistakes can make the war even less popular, increasing pressure on President Donald Trump to end it.
The U.S. and Israeli governments need to respond to deepfakes in near real time, debunking them and otherwise trying to limit their impact. Successfully detecting and combating deepfakes, however, may require partnerships that go beyond governments, especially as the number of deepfake-generation tools grows daily. Today, just one major software-sharing platform, Hugging Face, hosts more than 93,000 text-to-image generative models, 1,000 text-to-video models, and 4,000 text-to-speech generators; GitHub, another major software-sharing platform, hosts many similar tools. Such models take a textual prompt as input and produce an image, video, or audio file that reflects the prompt (for example, a 30-second video showing three drones taking off from the deck of a ship and heading toward the coastline of a city with many tall buildings). It is easy to use such models to produce realistic deepfakes that show a country under attack.
While U.S. government agencies such as the Cybersecurity and Infrastructure Security Agency, the National Security Agency, and the Defense Department have invested in deepfake detection, they have been unable to stem the current wave of deepfakes, and the threat is growing daily. Resource investment is limited at best. Moreover, governments often lack the cutting-edge technical expertise found in the private sector. Perhaps most importantly, social media companies, not governments, own the infrastructure that hosts deepfakes. Government efforts to engage social media companies have also declined, with critics expressing concerns about “jawboning”—government pressure on private actors without formal legal authority that can have a coercive effect and chill free speech.
Technology firms do not want disinformation such as Iranian deepfakes on their platforms, but they are often accused of doing little in response. At times, the designs of their platforms and algorithmic recommendations may make the problem worse. Some technology firms and social platforms have made large investments in detecting deepfakes, while others have done little to combat them. Many trust-and-safety teams focus more on regulatory compliance than on keeping their platforms free of disinformation and deepfakes.
The latest war against Iran shows the need for all of this to change. Deepfakes mislead and confuse the public and possibly even government officials, poisoning democratic discourse. Democratic governments should increase the personnel focused on this issue and push technology companies to do so as well. Sharing information is vital: technology companies may learn how their platforms are being manipulated, information that governments need, while intelligence and security agencies may learn of planned deepfakes and other manipulations and tip off technology companies. Academia can also play an important role in deepfake detection; a number of universities around the world are developing new deepfake detectors and helping journalists and others detect and debunk deepfakes for free. Iran, of course, is only a minor information power. A conflict with China, with its resources and technical skill, would require a far greater effort.
Even if measures to combat deepfakes improve, decision-makers must also accept that deepfakes will shape current and future conflicts. One difficulty will be responding to the news before knowing what, precisely, is true and false. A deepfake can travel halfway around the world while detectors are still putting on their shoes, to paraphrase the old saying about lies. At times, leaders will need to make decisions even before images can be properly evaluated to determine if they are deepfakes. In addition, there will be a higher level of misperception among their own and adversary publics, hindering messaging and otherwise making it difficult to show that the United States and Israel are winning or to convey the complexities of a situation to a skeptical public.
The deepfake war unfolding alongside the U.S.-Israeli military campaign illustrates how cheap and widely available generative tools allow states such as Iran to shape perceptions of the battlefield almost as quickly as events occur, targeting domestic audiences, international opinion, and enemy morale simultaneously. In future conflicts, the struggle to control narratives—and to distinguish truth from convincing fabrication—will be nearly as consequential as the fighting itself.

