Don't be deceived by visuals: how AI-generated images are shaping modern conflicts
In the digital age, the use of artificial intelligence (AI) has revolutionised various sectors, including media and warfare. However, this technological advancement has also given rise to a concerning trend: the generation and spread of misinformation.
Recent conflicts, such as the Israel-Iran confrontation and the war in Ukraine, have seen AI used extensively to manipulate public perception and wage psychological warfare — through AI-generated videos, deepfakes, recycled footage, and automated social media activity.
A prime example is a video that surfaced online purporting to show an explosion at the entrance of Evin Prison. Although grainy and black-and-white, the clip depicted a real location, and its authentic appearance led even media outlets with specialised verification teams to initially accept it as genuine. Doubts were first raised by Hany Farid, an expert in image analysis and digital forensics.
In the Israel-Iran conflict, an influx of AI-generated images and videos flooded social media platforms. This content often featured distorted buildings, unnatural human features, and video game footage falsely presented as real combat operations; clips from games such as Arma 3 and DCS World, for instance, were repurposed as supposed airstrike footage.
Russia's use of AI-powered deepfakes is also noteworthy. A fabricated video of Ukrainian President Zelenskyy urging troops to surrender was broadcast on a hacked TV channel, aiming to undermine morale and destabilise Ukraine from within.
Platforms like X rely on crowdsourced community notes and AI assistants such as Grok to detect misinformation, but these tools often miss AI-generated fakes or flag them inconsistently. Bad actors combine AI-generated text, images, video, and audio with bot networks to overwhelm verification processes and mask the origins of disinformation.
Stefan Feuerriegel, professor and AI researcher at Ludwig-Maximilians-Universität Munich, says that larger datasets, better algorithms, and more precisely trained models have significantly improved AI technologies. Emotionally charged and polarising campaigns, whether intended as misinformation or not, thrive especially in conflicts and wars, he argues.
According to Feuerriegel, AI tools are used to deliberately degrade high-resolution AI videos after the fact, making them look more authentic and harder to verify. He predicts that AI images and videos will soon look fully authentic and realistic, driven by new model architectures similar to those behind large text models.
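To see why deliberate degradation works, here is a minimal sketch in plain Python (the function name and the toy grayscale frame are illustrative assumptions, not taken from any real tool): downscaling a frame and blowing it back up destroys exactly the fine detail where generation artifacts tend to live, while making the clip resemble low-quality camera footage.

```python
def downscale_upscale(img, factor=4):
    """Crude lossy degradation: block-average the frame down by `factor`,
    then nearest-neighbour upscale back to the original size. Fine detail
    is smeared away, mimicking low-resolution phone or CCTV footage."""
    h, w = len(img), len(img[0])
    small = [
        [
            sum(img[y * factor + dy][x * factor + dx]
                for dy in range(factor) for dx in range(factor)) // (factor * factor)
            for x in range(w // factor)
        ]
        for y in range(h // factor)
    ]
    # upscale back: every pixel in a block gets the block's average
    return [[small[y // factor][x // factor] for x in range(w)] for y in range(h)]

# toy 8x8 grayscale "frame" with a single bright pixel standing in for a telltale artifact
frame = [[0] * 8 for _ in range(8)]
frame[3][3] = 255
degraded = downscale_upscale(frame, factor=4)
print(degraded[3][3])  # the bright detail is smeared into its block: 255 // 16 == 15
```

The same dimensions come back, but the telltale detail is gone — which is precisely what makes such clips harder for forensic tools to analyse.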
Responsibility for the proliferation of misinformation in conflicts and wars does not lie with politicians and platforms alone, according to Feuerriegel. Given how easily misinformation can be spread with AI support, he argues, the state should ask whether it is right that anyone can register anonymously on social networks.
Two weeks ago, the Israeli Defense Ministry announced an attack on Evin Prison in Tehran, Iran. Around the same time, a deepfake of an Israeli F-35 fighter jet allegedly shot down by Iran spread online and, despite its unrealistic size and appearance, received thousands of likes and shares.
In the case of the Evin Prison video, Farid found a 2023 photo of the prison that closely resembles the video's first frame; several branches and bushes look identical in both. He considers such an exact match implausible for newly shot footage and suggests the old photo was fed into an AI image-to-video generator.
The vegetation also looks wrong for the season: for spring in Tehran, the branches and bushes in the video appear unusually bare. Combined with the clip's pixelated resolution, this discrepancy raises further questions about its authenticity.
The Iranian state-affiliated news agency Fars, Western media, and Israel's Foreign Minister Gideon Saar all shared the video, lending it credibility. As AI-driven misinformation continues to evolve, it is crucial for fact-checkers, AI detection tools, and the public to stay vigilant and informed.