Eko: X, Meta Approved AI-Generated Election Ads with Hate Speech

Sophia Rodriguez

Social Media Platforms Fail to Block Hateful Political Ads

A new investigation by Eko, a nonprofit group focused on technology accountability, reveals that Meta and X approved political ads containing AI-generated images and hateful messages in the lead-up to Germany’s February 23 elections. The findings raise concerns about how social media platforms handle political disinformation and hate speech—especially when it involves paid advertising.

Eko researchers submitted to each platform 10 test ads containing antisemitic and Islamophobic content, including explicit calls for violence against minority groups. According to the report, X approved all 10 ads within hours, while Meta cleared five for publication. The ads were created with AI image generators such as OpenAI’s DALL-E and Stable Diffusion, which made them appear authentic.

Although the ads were removed before running, their approval raises concerns about the lack of adequate oversight in political advertising on major tech platforms.

What’s Happening & Why This Matters

Approval of Hate Speech Violates Content Policies

Meta and X failed to stop ads that clearly violated their content policies on hate speech and disinformation. The ads included calls for violence against Jewish and Muslim communities, with one Meta-approved ad reportedly suggesting burning synagogues in Germany. Two of the advertisements that X approved contained references to Nazi-era crimes and used imagery linked to concentration camps.

This approval process suggests that social media companies are not effectively enforcing their own rules. Even though the platforms claim strict policies against incitement and disinformation, their ad review systems either failed to detect the violations or allowed the ads through anyway.

AI-Generated Disinformation Makes Manipulation Easier

Using AI-generated imagery in political ads introduces another challenge for content moderation. Tools like DALL-E and Stable Diffusion allow users to create realistic but misleading visuals, making it more difficult for audiences to distinguish between authentic campaign materials and manipulated content.

With elections in multiple countries this year, the spread of AI-powered disinformation is becoming harder to track. If platforms continue approving ads that contain harmful or misleading content, voters could be exposed to false narratives that shape their political views.

X Under Scrutiny for Handling of Hate Speech

Since Elon Musk acquired X (formerly Twitter) in 2022, the platform has faced repeated accusations of allowing hate speech to thrive. Musk has also been actively involved in Germany’s political discourse, recently attending a rally for the far-right Alternative für Deutschland (AfD) party and interviewing one of its leaders on X Spaces. His direct engagement with political figures has raised concerns about whether X enables extremist content.

Regulators in the European Union (EU) are already investigating X under the Digital Services Act (DSA) to determine whether its algorithms and moderation policies allow hate speech and election disinformation to spread.

Prioritizing Ad Revenue Over Ethics

The Eko report suggests that social media companies may prioritize ad revenue over content moderation. With political ads generating millions of dollars in revenue, platforms might not have strong incentives to enforce strict ad review processes. By allowing harmful content to pass through, Meta and X create an environment where misleading narratives can spread quickly.

Even though platforms claim to have strict ad policies, enforcement remains weak. This creates opportunities for political groups and bad actors to manipulate public opinion using AI-generated disinformation.

Meta, X Have Not Responded

Neither Meta nor X has publicly responded to the findings. This lack of acknowledgment has frustrated watchdogs and regulators, who have repeatedly called for more transparency in social media ad approvals and better enforcement of content rules.

With elections happening across Europe and the U.S. later this year, concerns about how AI-generated political ads could influence voter perception continue to grow. Platforms are now under pressure to demonstrate that their ad moderation systems can prevent the spread of harmful content before election day.

TF Summary: What’s Next

The findings from Eko’s investigation show that social media platforms fail to enforce their ad policies, allowing AI-generated hate speech and election disinformation to spread through paid ads. With major elections happening soon, regulators and watchdogs may push for stricter ad review policies and stronger accountability. The EU’s ongoing investigation into X could lead to new rules forcing platforms to improve content moderation. Whether Meta and X take action to prevent AI-powered election disinformation remains uncertain, but the risks are growing as political campaigns increasingly use AI tools.

