OpenAI’s Sora 2 Is Being Used to Create Anti-Ukrainian Deepfakes About the War

In early November, videos began circulating on social media that allegedly show Ukrainian soldiers crying and surrendering to Russian forces on the front lines. An investigation by NBC News found that the clips are deepfakes created by unknown actors using Sora 2, OpenAI’s latest AI-powered audio and video generation tool.

While experienced analysts can identify these videos as fake, many ordinary users on TikTok, YouTube Shorts, Facebook, and X take them at face value. As artificial intelligence becomes capable of producing increasingly realistic video footage, such content is harder for non-experts to question.

What Happened

  • Journalists analyzed 21 videos depicting supposed «Ukrainian soldiers» that were created or altered using AI tools. The videos were distributed across YouTube, TikTok, Facebook, and X, and all attempted to portray Ukrainian troops as unwilling to fight and ready to surrender.
  • Many of the clips contain inaccuracies that most viewers would likely miss, including incorrect or oversimplified versions of Ukrainian military uniforms and helmets. In addition, most of the videos feature soldiers speaking Russian; only eight of the 21 include spoken Ukrainian.
Image caption (NBC News): The image on the left, generated using Sora 2, shows inconsistencies in the soldier’s helmet and chin strap compared with two real photographs (center and right) of Ukrainian soldiers published on the Facebook page of the General Staff of the Armed Forces of Ukraine on December 3 and 7. While the helmet closely resembles those used by Ukrainian troops, it lacks a camouflage cover, and its smooth screws are a distinctive feature.
  • At least half of the videos display a small Sora 2 logo, identifying them as output of the latest version of OpenAI’s text-to-video generator. In some cases, the moving watermark was partially hidden and visible only on close inspection — a common tactic, as many apps and websites offer tools for obscuring AI watermarks. In other videos reviewed by NBC News, the watermark was covered with overlaid text.
  • Some Sora-generated videos also use the faces of well-known Russian streamers, including Alexei Gubanov, a Russian national who fled to New York after publicly criticizing Putin.

«We were taken to the military registration and enlistment office and sent here. Now they’re taking us to Pokrovsk. We don’t want to. Please,» an AI-generated version of Gubanov says in Russian, wearing a uniform with a Ukrainian flag.

«Mom, mom, I don’t want to!»

  • Gubanov has never served in the military — let alone in the Ukrainian army. His likeness was used to promote a false narrative about the morale of Ukrainian troops.

OpenAI blocks dangerous content, but not always

  • OpenAI did not answer NBC News’ specific questions about Sora’s role in creating misleading war-related videos. In an emailed statement to the outlet, however, the company said:

«While cinematic action is permitted, we do not allow graphic violence, extremist material, or deception. Our systems detect and block violating content before it reaches the Sora Feed, and our investigations team actively dismantles influence operations.»

  • How effective these safeguards are in practice remains unclear. OpenAI itself acknowledges that even with multi-layered security measures, some malicious uses and policy violations may still slip past its protections.
  • Research conducted by NewsGuard, a platform that tracks online disinformation, found that Sora 2 generated realistic videos promoting demonstrably false claims 80% of the time when explicitly prompted to do so (16 out of 20 cases). Of those 20 false claims, five aligned with narratives promoted by Russian disinformation operations.
  • NewsGuard researchers also found that even when Sora 2 initially refused to generate content due to policy violations, users were often able to bypass moderation by simply rephrasing their prompts.
  • NBC News reporters were similarly able to use Sora to create videos depicting Ukrainian soldiers crying, claiming they were forced into service, or surrendering with their hands raised and white flags visible in the background.
  • OpenAI says its disinformation protections include metadata indicating a video’s origin and a moving watermark embedded in generated content. However, both markers can be removed or minimized using widely available tools (a rough sketch of how such metadata can be inspected follows this list).
  • Despite Sora’s policy banning «graphic violence,» NBC News identified at least one video bearing a Sora watermark that appears to show a Ukrainian soldier being shot in the head on the front lines.
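
For readers who want to poke at a downloaded clip themselves, the sketch below shows one rough, heuristic way to look for provenance traces in a file’s container metadata. It is not OpenAI’s verification method: it only shells out to ffprobe (part of FFmpeg) and scans the reported tags for provenance-related strings, and the file name and hint list are purely illustrative.

```python
# Heuristic sketch: scan a downloaded video's container metadata for
# provenance hints (e.g. C2PA "content credentials" of the kind generators
# are said to embed). Assumes ffprobe is on PATH; the path and hint list
# below are placeholders, not a real verification workflow.
import json
import subprocess

PROVENANCE_HINTS = ("c2pa", "jumbf", "claim_generator", "provenance", "openai", "sora")

def container_metadata(path: str) -> dict:
    """Return the format- and stream-level tags that ffprobe reports."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def provenance_hits(meta: dict) -> list[str]:
    """Flag hint strings that appear anywhere in the reported metadata."""
    blob = json.dumps(meta).lower()
    return [hint for hint in PROVENANCE_HINTS if hint in blob]

if __name__ == "__main__":
    meta = container_metadata("downloaded_clip.mp4")  # placeholder file name
    print(provenance_hits(meta) or "no provenance hints in container tags")
```

As the article notes, a clip that has been re-encoded or had its metadata stripped will show nothing here, and ffprobe may not expose an embedded C2PA manifest at all, so a negative result proves little; a dedicated C2PA verifier is needed for a real provenance check.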

TikTok and YouTube act quickly — Facebook and X lag behind

  • All the videos analyzed by NBC News were initially posted to TikTok or YouTube Shorts, both of which prohibit the use of misleading AI-generated content and deepfakes.
  • A YouTube spokesperson said the platform removed one channel after being contacted by NBC News. However, two other videos were deemed not to violate platform rules and remain available, labeled as AI-generated.
  • All TikTok deepfakes identified by NBC News have since been removed. According to a TikTok spokesperson, as of June 2025, «over 99% of content that violated our rules was removed before anyone reported it, and over 90% before it received a single view.»
  • Despite these removals, the videos continue to circulate as reposts on X and Facebook. Neither platform responded to NBC News’ requests for comment.

Why This Matters

  • It remains unclear who created or coordinated the distribution of these videos. What is clear is that this represents another wave of disinformation aimed at distorting public perception of Russia’s war against Ukraine — unfolding amid renewed, yet stalled, US-backed peace talks.

«False claims created using Sora are much harder to detect and debunk. Even the best AI detectors sometimes struggle,» said Alica Lee, an analyst of Russian influence at NewsGuard.

Because many of the videos contain no obvious visual inconsistencies, users can easily scroll past them on platforms like TikTok without realizing they’ve just watched a fabrication.

  • Most users do not verify what they watch. That is how dangerous narratives spread through society, while the companies building powerful AI tools still do too little to stop the proliferation of convincing fakes.