Antisemitic AI-Generated Videos Flood OpenAI’s New Sora 2 App
Three screenshots from antisemitic AI-generated videos found on Sora 2 on Oct. 20, 2025.
The release of OpenAI’s Sora 2 video-generating app last month has prompted a wave of criticism, notably from family members of deceased celebrities, ranging from Rev. Martin Luther King Jr. to George Carlin and Robin Williams, who have seen their loved ones repurposed in tasteless ways. Now, examples have emerged of rampant antisemitic content flooding the app.
AdWeek, a publication that tracks the marketing industry, on Friday spotlighted a series of AI-generated videos on the app featuring a man in a kippah immersed in a pile of money. The app allows users to take videos created by others and remix them with different instructions. A video originally featuring a woman in an apartment filled with soda pop was transformed with the prompt, “Replace her with a rabbi wearing a kippah and the house is full of quarters.”
The platform then featured multiple versions of this Jew-buried-in-coins imagery, including one with a “South Park” visual style. AdWeek noted another video drawing on conventional antisemitic tropes about Jews and money, which featured “two football players wearing kippot, flipping a coin before a third man — portrayed as a Hasidic Jew — dives to grab it and sprints away, an apparent reference to longstanding antisemitic stereotypes about greed. The clip has been widely remixed with nearly 11,000 likes as of Oct. 17.”
On Monday, The Algemeiner conducted a brief search on the Sora app, entering “Rabbi and Jewish” in the “describe what you’d like to see more of” custom tab.
The theme of Jews and coins manifested repeatedly in the results.
An OpenAI spokesperson told AdWeek that Sora uses multiple internal processes, as well as teams monitoring trends, to adjust its safeguards.
When the app debuted on Sept. 30, OpenAI stated that “Sora uses layered defenses to keep the feed safe while leaving room for creativity. At creation, guardrails seek to block unsafe content before it’s made — including sexual material, terrorist propaganda, and self-harm promotion — by checking both prompts and outputs across multiple video frames and audio transcripts. We’ve red teamed to explore novel risks, and we’ve tightened policies relative to image generation given Sora’s greater realism and the addition of motion and audio. Beyond generation, automated systems scan all feed content against our Global Usage Policies and filter out unsafe or age-inappropriate material. These systems are continuously updated as we learn about new risks and are complemented by human review focused on the highest-impact harms.”
Throughout the year, numerous stories have shown the potential for AI to inflame antisemitic narratives.
On March 25, the Anti-Defamation League (ADL) released a report on four AI chatbots, saying researchers had “uncovered concerning patterns of bias, misinformation, and selective engagement on issues related to Jewish people, Israel, and antisemitic tropes.”
ADL chief executive Jonathan Greenblatt said at the time that “artificial intelligence is reshaping how people consume information, but as this research shows, AI models are not immune to deeply ingrained societal biases. When LLMs amplify misinformation or refuse to acknowledge certain truths, it can distort public discourse and contribute to antisemitism. This report is an urgent call to AI developers to take responsibility for their products and implement stronger safeguards against bias.”
In July, following an upgrade, xAI’s Grok chatbot promoted an antisemitic conspiracy theory about Jewish control of Hollywood.
The technology company issued an apology on July 11, stating, “First off, we deeply apologize for the horrific behavior that many experienced. Our intent for @grok is to provide helpful and truthful responses to users. After careful investigation, we discovered the root cause was an update to a code path upstream of the @grok bot. This is independent of the underlying language model that powers @grok. The update was active for 16 hrs, in which deprecated code made @grok susceptible to existing X user posts; including when such posts contained extremist views. We have removed that deprecated code and refactored the entire system to prevent further abuse.”
OpenAI CEO Sam Altman is Jewish. On Dec. 7, 2023, he wrote on X, “For a long time i said that antisemitism, particularly on the american left, was not as bad as people claimed. i’d like to just state that i was totally wrong. i still don’t understand it, really. or know what to do about it. but it is so f**ked.”