A study indicates that Meta and X approved advertisements containing hate speech and incitements to violence ahead of Germany's federal elections.
A recent study by a German corporate responsibility organization has found that the social media platforms Meta (Facebook) and X (formerly Twitter) approved ads with anti-Semitic and anti-Muslim messages ahead of Germany's federal elections.
As part of the research, investigators submitted 20 advertisements, 10 to each platform, featuring violent language and hate speech directed at minority groups.
The findings revealed that X approved all 10 ads submitted to it, while Meta approved 5 of its 10. The ads contained calls for violence against Jews and Muslims, likened Muslim refugees to 'viruses' and 'rodents,' and urged their extermination or sterilization.
One advertisement even called for setting synagogues ablaze to 'halt the Jewish globalist agenda.' The researchers noted that the ads were taken down before publication and never ran, but the results raise serious questions about the platforms' content moderation policies.
The organization has shared its findings with the European Commission, which is expected to open an inquiry into potential breaches of the EU Digital Services Act by Meta and X. The timing is critical: with Germany's federal elections approaching, concerns are growing about the effect of hate speech on the democratic process.
Facebook previously drew criticism during the Cambridge Analytica scandal, in which a data analytics firm was found to have influenced elections worldwide using similar tactics, ultimately resulting in a $5 billion fine for the company.
Furthermore, Elon Musk, the owner of X, has been accused of interfering in the German elections, including by encouraging support for the far-right AfD party.
It remains uncertain whether the approval of these advertisements reflects Musk's political leanings or his broader commitment to 'free speech' on X. He dismantled X's established content moderation system and replaced it with a 'community notes' approach, in which users add context to posts to offer alternative perspectives.
Mark Zuckerberg, CEO of Meta, has introduced a similar mechanism for Facebook, although he has indicated that AI-based detection systems for hate speech and unlawful content will remain in place.
This shift has raised alarm, particularly amid reports that extremist right-wing content is increasingly being promoted on platforms such as X and TikTok, shaping public sentiment.
The recent economic downturn and a series of violent attacks involving Muslim migrants have further heightened tensions.
It remains unclear whether the increase in extremist content reflects real-world problems or whether social media algorithms are amplifying such messages to drive user engagement.
Either way, both Musk and Zuckerberg have shown a readiness to scale back content moderation despite pressure from the European Union and German authorities.
It is unclear whether this investigation will prompt the EU to impose stricter regulations on X, Facebook, and TikTok, but it underscores the ongoing challenge of balancing free expression against the spread of extremist content.
The study also highlights a broader issue: hate speech frequently serves political interests, which complicates the platforms' role in moderating content.
As discussions about regulatory measures continue, the question of who should oversee digital expression—private corporations or government bodies—remains unresolved.
Like traditional media outlets, social platforms may face increasing scrutiny over how they manage user-generated content.