AI-Generated Satellite Images as a Tool for Deception in Iran's War
SadaNews - A photo of a destroyed American base in Qatar, published by an Iranian news platform, appeared to be captured via satellite, but it was actually a fake image generated by artificial intelligence, highlighting the growing threat posed by technology-driven misinformation during the war.
The rise of generative artificial intelligence has enhanced the ability of states and propaganda groups to fabricate convincing images that look as if they were taken by satellites during conflicts, a trend that researchers warn has security implications in the real world.
In the wake of the U.S.-Israeli war on Iran, the "Tehran Times" published an image allegedly showing a comparison between American radar equipment at a base in Qatar, before and after its "complete destruction."
However, the image was actually an AI-manipulated version of a "Google Earth" image from last year showing an American base in Bahrain, according to researchers.
The telltale evidence, though not immediately obvious, included a row of cars parked in identical positions in both the authentic satellite image and the altered one.
Yet the modified image garnered millions of views and spread across social media in multiple languages, underscoring how increasingly difficult users find it to distinguish truth from falsehood on platforms flooded with AI-generated visual content.
Brady Afrik, a researcher in open-source intelligence, notes a "rise in the occurrence of modified satellite images" on social media following major events, including the Middle East war.
He explains that "many of these modified images bear clear indications of imperfect AI generation: strange angles, unclear details, and fabricated features that do not conform to reality."
He adds, "Some others appear as manually modified images, often by adding damage marks or other changes to an original satellite image that did not contain such details in the first place."
"Circumventing Censorship"
Information warfare analyst Tal Hagin points to another AI-generated satellite image that purported to show Israeli and American aircraft striking a decoy, an image of a plane painted on the ground in Iran, while Tehran appeared to have moved its real aircraft elsewhere.
Telltale signs included nonsensical coordinates embedded in the fake image, which circulated on platforms such as "Instagram," "Threads," and "X."
"France Press" detected the presence of SynthID, a hidden watermark used to distinguish images created with Google's AI tool.
The fake satellite images emerged following the appearance of fake open-source intelligence (OSINT) accounts across social media, seemingly aimed at undermining the work of reliable digital investigators.
Hagin states, "Due to the ambiguity during war, it can be exceedingly difficult to ascertain how successful the enemy's strikes have been. Open-source intelligence provided the solution by utilizing publicly posted and satellite-captured images to circumvent censorship" in countries like Iran.
He continues, "But it has now become a target for those engaged in disinformation."
Reports of fake satellite images generated or modified using AI also surfaced during the Russia-Ukraine conflict and last year's four-day war between India and Pakistan.
"Caution"
Afrik emphasizes that "modified satellite images, like other forms of disinformation, can have real-world consequences when people act on information received without verifying its credibility."
He adds, "This can lead to impacts ranging from influencing public opinion on a critical issue, such as whether a country should engage in conflict, to affecting financial markets."
In the age of artificial intelligence, real-time, high-resolution satellite images can provide decision-makers with vital evidence that helps assess security threats and debunk lies from unreliable sources.
During a recent attack by militants on Niamey airport in Niger, the satellite intelligence company "Vantor" reported that it observed images circulating online claiming to show the airport's main civil building on fire.
The company’s satellite images helped demonstrate that the images were fake and generated by artificial intelligence, according to Tommy Maxstead from "Vantor."
Bo Zhao from the University of Washington explains that "when a satellite image is presented as visual evidence in the context of war, it can easily influence how people interpret events."
He adds that as AI-generated images become more convincing, "it is important for the public to engage with this kind of visual content cautiously and with critical awareness."