Social Media Abuses: Harris Cheapfake and Sextortion Accounts

Eve Harrison

Social media platforms face growing challenges from manipulated and AI-generated content, ranging from low-tech "cheapfakes" to deepfake pornography. A recent incident involving Vice President Kamala Harris, along with the spread of non-consensual explicit imagery, has exposed flaws in these platforms' content moderation practices. This article examines the details and implications of these social media abuses.

What’s Happening & Why This Matters

The Deepfake Incidents

Meta’s Oversight Board recently revealed that the company failed to promptly remove an explicit AI-generated image of an Indian public figure. This case, along with another involving an American woman, highlighted Meta’s reliance on media reports to identify harmful content, a method that has proven reactive rather than proactive.

In the case of the Indian public figure, the explicit deepfake remained on Instagram, despite being reported twice, until the Oversight Board intervened. In contrast, the AI-generated image of an American woman was promptly removed from Facebook after being flagged, because it had already been added to Meta's database of flagged images. The Board criticized Meta for inconsistent enforcement of its rules against non-consensual sexual imagery and recommended clearer guidelines and a more robust system for identifying and removing such content.

Sextortion Accounts on Instagram

Meta's challenges extend beyond deepfakes. The company recently deleted 63,000 Instagram accounts involved in sextortion scams. These scams typically target teenagers, tricking them into sending nude images that are then used to extort money. The scale of the problem in Nigeria, where many of the accounts operated, prompted Meta's sweeping action and underscores the global reach of this form of exploitation.

Legislative and Social Reactions

Legislators in the United States have begun addressing these issues. The Senate passed the DEFIANCE Act, which allows victims of deepfake pornography to sue those who create, distribute, or knowingly receive such images. Senator Ted Cruz also introduced the TAKE IT DOWN Act, which would criminalize publishing non-consensual AI-generated explicit imagery and require platforms like Facebook and Instagram to remove it.

TF Summary: What’s Next

Meta’s Oversight Board has called for the company to improve its content moderation policies, particularly around non-consensual sexual imagery. This includes clearer rules and more proactive measures to identify and remove harmful content. As legislation continues to evolve, social media platforms will need to adapt quickly to protect users from these forms of abuse. The ongoing pressure from lawmakers and the public suggests that major changes may be on the horizon for how social media companies handle explicit and non-consensual AI-generated content.

By Eve Harrison “TF Gadget Guru”
Background:
Eve Harrison is a staff writer for TechFyle's TF Sources. With a background in consumer technology and digital marketing, Eve brings a unique perspective that balances technical expertise with user experience. She holds a degree in Information Technology and has spent several years working in digital marketing roles, focusing on tech products and services. Her experience gives her insights into consumer trends and the practical usability of tech gadgets.