Meta Pursues Deepfake App Makers in Court

Meta Sues Deepfake App Maker Over Nonconsensual AI Nudifying Ads

Sophia Rodriguez

Meta has launched a legal offensive against the creators of explicit deepfake apps that dodge its advertising rules. The company is targeting apps such as CrushAI, which use AI to generate nonconsensual, sexualized images, a practice often called “nudifying.” The lawsuit reflects the ongoing battle against harmful AI content and underscores Meta’s efforts to protect users and enforce its platform policies.


What’s Happening & Why This Matters

Meta filed suit against Hong Kong-based Joy Timeline HK Limited, the maker of CrushAI, accusing the company of repeatedly evading its ad restrictions. The lawsuit alleges CrushAI ran over 87,000 ads on Meta’s platforms that violated policies banning adult nudity, sexual content, and nonconsensual intimate imagery.

Meta claims CrushAI created a network of more than 170 business accounts and over 135 Facebook pages to run these ads. The ads targeted users in the United States, Canada, Australia, Germany, and the United Kingdom. Some ads blatantly promoted the app’s AI-nudifying capabilities with captions like “upload a photo to strip for a minute” and “erase any clothes on girls.”

Despite Meta’s policy forbidding ads that promote sexual exploitation or bullying, CrushAI continued its campaigns even after warnings. Meta’s complaint includes examples of how the app uses AI to create explicit images without consent.

This lawsuit follows media reports revealing that Meta’s platforms have become a primary source of traffic for these apps. Research from outlets such as Faked Up and 404 Media showed that roughly 90% of CrushAI’s traffic came from Meta’s platforms.

Meta has invested in new technology to detect these ads before they run, even if the content doesn’t show nudity. Its specialist teams collaborate with external experts to identify terms, phrases, and emojis commonly used in such ads. Meta also shares data through the Tech Coalition’s Lantern program, a collaborative effort to combat online child sexual exploitation.

The issue gained further attention after the Take It Down Act became law in the US. The act makes sharing nonconsensual explicit deepfakes illegal and requires platforms to remove such content promptly.

Despite these efforts, Meta has faced criticism, including from its own Oversight Board, for under-enforcing policies against AI-manipulated videos featuring celebrities in scams. The case of an AI-generated video of soccer star Ronaldo Nazário endorsing a fraudulent game exemplifies ongoing challenges.


TF Summary: What’s Next

Meta’s lawsuit against the maker of CrushAI intensifies the fight against nonconsensual AI-generated content. The company is combining legal action with advanced detection tools and cross-industry collaboration to protect users.

As AI technology evolves, tech platforms face increasing pressure to prevent abuse while balancing innovation. Users, regulators, and companies must collaborate to curb the misuse of harmful deepfakes and safeguard online safety.

