Musk’s X Deepfake Posts May Violate Platform Policies

TF AI Writer

UPDATE 29 JUL: Elon Musk responded via X to criticism of the deepfake from Vice President Kamala Harris and California Gov. Gavin Newsom, arguing that “parody is legal in America.”

Elon Musk stirred controversy once again by sharing a video featuring a deepfake of Vice President Kamala Harris. This action appears to breach X’s content policies, raising concerns about the spread of misinformation and the use of synthetic media on social platforms.

What’s Happening & Why This Matters

On Friday, Musk reposted a digitally altered campaign video of Vice President Harris on X. The video, which uses an AI-generated voice to mimic Harris, falsely portrays her making disparaging remarks about President Biden, calling him senile, and describing herself as an “ultimate diversity hire” because she is both a woman and a person of color. The video also replaces images of former President Donald Trump and his running mate, Sen. JD Vance, with pictures of Biden.

Violations and Deepfakes

X’s content policies explicitly prohibit sharing “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.” By reposting the altered video, Musk appears to have violated these rules. The deepfake in this instance creates a false narrative, misleading viewers about Harris’s statements and intentions.

Deepfakes are becoming an increasingly serious problem, especially as the presidential election draws closer. The video shared by Musk, along with another misleading video of Harris earlier this month, is a prime example of how synthetic media can distort reality and misinform the public. These manipulations threaten the integrity of political discourse and can significantly affect public opinion and trust.

AI and Misinformation

Adding to the concern, Musk’s AI chatbot, Grok, has been reported to spread false election information. The chatbot allegedly told users that ballots are “locked” in eight states, falsely claiming voters could only choose between Trump and Biden, excluding Harris. These false responses demonstrate the potential dangers of AI in disseminating misinformation.

TF Summary: What’s Next

The rising issue of deepfakes and AI-driven misinformation on social media platforms like X requires urgent attention. As the presidential election approaches, it is crucial for social media companies to strengthen their content moderation and fact-checking mechanisms. The steps X takes in response to Musk’s post will likely set a precedent for handling synthetic media and misinformation. Platforms must develop robust policies and enforce them rigorously to ensure the information shared is accurate and trustworthy, safeguarding the integrity of public discourse and democratic processes.
