Social media giants Meta and X came under fire after a damning study revealed that both platforms approved ads containing violent hate speech targeting Muslims and Jews in the lead-up to Germany’s federal elections. The research, conducted by corporate responsibility group Eko, highlights alarming gaps in content moderation just as immigration and national identity dominate political discourse.
Hate Speech Unchecked: The Study’s Shocking Findings
Eko researchers tested Meta and X’s ad review systems by submitting inflammatory ads filled with violent, discriminatory language. The ads included calls for Muslim immigrants to be imprisoned in concentration camps, AI-generated images of burning mosques and synagogues, and antisemitic conspiracies portraying Jews as manipulating global events for financial gain. The results were staggering: most ads were approved within hours.
In total, X greenlit all ten test ads, while Meta approved five of the ten, despite both platforms' stated policies against hate speech and violence.
What Got Approved?
The content of the approved ads was horrifying:
- Muslim refugees likened to “vermin” and “rodents,” with calls for their sterilization, burning, or gassing.
- Synagogues depicted as being torched to combat a fictitious “globalist Jewish rat agenda.”
- Muslims portrayed as rapists and criminals flooding Germany to destroy democracy.
- Bogus claims that Germany’s center-left SPD party wanted to import 60 million Middle Eastern refugees, with the ad calling for violent retaliation.
Despite the brutality of the messaging, X approved every single ad without objection. Meta's rejections cited political sensitivity rather than the violent content itself, and the fact that half the ads still passed review raises serious concerns about the company's ability to enforce its own policies.
A Broken Moderation System
Eko’s findings illustrate catastrophic failures in automated moderation systems. Meta, despite facing scrutiny in 2023 for similar issues, has seemingly made no progress. Worse, X appeared to bypass moderation entirely, waving through even the most egregious content without hesitation.
None of the AI-generated images used in the ads were flagged as artificial, despite Meta’s policies requiring disclosure for political or social issue ads. This oversight further underscores the lax enforcement of rules that supposedly protect users from manipulation.
The Digital Services Act: A Toothless Watchdog?
The European Union’s Digital Services Act (DSA) was introduced to curb the spread of hate speech, disinformation, and online threats to democracy. Yet, Eko’s research suggests that the law has had little impact on platform behavior. Meta and X continue to approve ads that violate not just their policies but also EU regulations.
The European Commission has ongoing investigations into both companies, probing their handling of political ads and illegal content. However, these investigations have dragged on for months without any penalties being issued. Under the DSA, confirmed violations can draw fines of up to 6% of a company's global annual revenue, or even a temporary ban from operating in the EU. But as Germany heads to the polls, real-world consequences remain elusive.
Musk’s Political Interference
The controversy is compounded by the actions of Elon Musk, owner of X, who has publicly backed Germany's far-right AfD party. Musk has used his massive online reach to host a livestream with AfD leader Alice Weidel and to urge voters to support the party. This, combined with X's failure to block violent ads, raises uncomfortable questions about the platform's political neutrality.
The Real-World Impact
Eko’s researchers disabled all test ads before they went live, preventing public exposure to the hate speech. But the study starkly reveals how easily extremist content could have reached voters during a sensitive election period. Platforms stand to profit from political ads, meaning there is a direct financial incentive to overlook harmful content.
Urgent Call for Action
Civil society groups are now demanding immediate intervention. Their recommendations include:
- Turning off algorithm-driven recommendation systems before elections.
- Implementing emergency “break-glass” measures to prevent the spread of borderline content.
- Faster, more transparent investigations and enforcement of the DSA.
Without decisive regulatory action, Germany’s democratic process — and the integrity of future elections across Europe — will remain vulnerable to exploitation.
Final Thoughts
The Eko study is a wake-up call. It shows that big tech platforms, left to their own devices, cannot be trusted to self-regulate. Until meaningful penalties are enforced, Meta and X will continue to profit from hate speech, violence, and disinformation — no matter the human cost.
The German election is just hours away. The question now is whether the EU will act before the next democratic crisis unfolds online.