TF Opinion: Sora 2 Videos Contributing to Deepfakes, AI Slop

Z Patel

When Innovation Turns Ugly

OpenAI promotes Sora 2 as a revolutionary video-generation tool, one that merges text prompts, AI imagination, and cinematic realism. Yet beneath the glossy tech and viral success lies a daunting moral and creative decay.

The app’s viral clips blur reality and fiction. Celebrities appear in scenes they never filmed. Historical figures resurrect for memes. Deepfake influencers profit from likenesses of the dead. What began as entertainment now spreads what critics call “AI slop” — content that clogs feeds, exploits identities, and erodes authenticity.

What’s Happening & Why This Matters

Sora’s Rise and the Ghosts It Revives

Sora 2 launched in October 2025 with limited access in the U.S. and Canada. Within five days, it crossed one million downloads — faster than ChatGPT. The app’s ease of use made anyone a video creator: type a prompt, get a hyper-realistic short film in minutes.

But the appeal comes with controversy. Users discovered they could recreate scenes starring dead celebrities, a loophole that bypassed OpenAI’s stated ban on living public figures. The result: viral clips featuring Martin Luther King Jr., Amy Winehouse, Kobe Bryant, Stephen Hawking, and even Adolf Hitler in absurd, often disturbing scenarios.

OpenAI’s algorithm rewards attention, not decency. As AI researcher Henry Ajder notes, “A world saturated with this kind of content distorts how people are remembered.” Families of those depicted — including Robin Williams’s daughter Zelda Williams and Malcolm X’s daughter Ilyasah Shabazz — denounce the videos as offensive and dehumanising.

Sora’s promise of creativity now mirrors a digital séance, turning memory into spectacle.

Hollywood Reacts — With Fury

Hollywood’s reaction to Sora 2 is no less dramatic. Talent agencies like WME and CAA accuse OpenAI of deceit and exploitation. They allege the company downplayed how easily Sora 2 enables the use of recognisable IP and celebrity likenesses.

Executives claim that OpenAI’s pre-launch meetings suggested “strong guardrails.” Those expectations collapsed on release. Within hours, users generated parodies of SpongeBob, Pokémon, Grand Theft Auto, and even hybrid mashups like “Pikachu in Oppenheimer.”

WME, which represents Ben Affleck, Denzel Washington, and Jennifer Garner, immediately opted all of its clients out of AI-generated videos. CAA echoed the exit, calling Sora’s behaviour “exploitation.”

An unnamed WME executive told The Hollywood Reporter:

“They knew exactly what they were doing when they released this without protections and guardrails.”

Sam Altman, OpenAI’s CEO, insists that Sora 2’s purpose is “to make people smile.” Many in Hollywood disagree, and some are preparing litigation. With legal experts divided over whether Section 230 protects OpenAI from liability for user-generated videos, the tension between AI freedom and creative consent escalates.

The Law, the Dead, and the Digital Afterlife

OpenAI’s allowance for “historical figures” creates a murky ethical loophole. In California, New York, and Tennessee, a person’s likeness remains protected after death — yet enforcement depends on intent and monetisation.


Legal scholar James Grimmelmann from Cornell Tech explains:

“We couldn’t resurrect Christopher Lee to star in a movie without consent. Why can OpenAI resurrect him in thousands of shorts?”

The company recently added a process for families to request takedowns of AI depictions of “recently deceased” individuals. But it has yet to clarify what “recent” means. Critics call this a “Whac-A-Mole” solution — reactive, inconsistent, and legally fragile.

OpenAI faces similar pushback from the Motion Picture Association, which accuses the company of infringement. Viral “Nazi SpongeBob” and “King Karaoke” clips forced OpenAI to pivot toward an opt-in licensing model for rights holders.

The Slop Problem

The internet once drowned in clickbait. Now it drowns in “AI slop.” Low-quality, shock-driven content floods feeds across TikTok, YouTube, and Instagram, powered by models like Sora. The difference is speed and realism. These clips look cinematic, even when the context is absurd.

Experts like Alexios Mantzarlis of Cornell Tech warn that monetised “AI influencers” now profit from this slop. “Economic AI slop,” he says, emerges when creators build large audiences from AI videos of famous people and monetise that traffic.

That model erodes trust in media. The more consumers engage with deepfakes — even for laughs — the more the boundary between creativity and deception dissolves.

OpenAI calls Sora a “tool for entertainment.” But when that entertainment rewrites history and exploits the dead, the moral calculus changes.

The Convenience Culture

Sora 2’s viral appeal lies in how fast it creates emotional spectacle. Users no longer need skill, vision, or artistry — only curiosity. The result is a flood of uncanny videos that feel creative but lack meaning.

That’s the true definition of AI slop: content generated faster than thought, optimised for clicks rather than craft. In the process, legacies get condensed into memes, and cultural icons become props in digital puppetry.

As Zelda Williams wrote after seeing fake clips of her late father:

“To watch real people’s legacies be condensed into horrible, TikTok slop puppeteering them is maddening.”

TF Summary: What’s Next

Sora 2’s viral moment exposes AI’s growing cultural rot. The model’s power outpaces ethics, and OpenAI’s “good faith” fixes arrive too slowly. Expect rising lawsuits from estates, tighter state regulations on likeness rights, and the first federal test of liability for AI-generated content within a year.

MY FORECAST: The greater risk lies beyond legalities; it is societal. Each viral deepfake chips away at public trust, at art, and at our sense of what’s real. Until AI creators learn restraint, the internet turns from innovation into imitation.


