X Pledges Quicker Action on Hate and Terror Content in UK
The Slow Pace of Social Media Regulation
The recent pledge by social media platform X to review reports of suspected hate and terrorist content in the UK within 24 hours on average may seem like a significant step forward, but it’s only one part of a broader pattern. For years, social media companies have been under pressure from regulators to take more responsibility for the content hosted on their platforms. Despite numerous commitments and promises, progress has been slow.
The term “online harm” is often used to describe unwanted or illicit behavior on social media, but its meaning is unclear. Does it encompass hate speech, terrorist propaganda, or child exploitation? The ambiguity allows companies like X to claim they’re taking action while allowing problematic content to persist.
Ofcom’s online safety director Oliver Griffiths described the commitments as a “step forward,” but what does that mean in practice? Will these changes actually make a difference for users affected by hate and terrorist content? For instance, how will X’s new system of submitting performance data to Ofcom every three months ensure accountability?
Social media companies have long been aware of the problem. They’ve had years to develop systems and processes for dealing with reports of illegal hate and terror material. Yet, despite this knowledge, they continue to drag their feet. Griffiths’ acknowledgement that such material remains “persistent on some of the largest social media sites” is a stark admission of just how entrenched the issue is.
The complex relationship between social media companies and regulators like Ofcom contributes to the sluggish response. While these organizations have the power to demand change, they also rely on cooperation from the very same companies they’re regulating. This creates an uneasy balance between enforcement and conciliation.
However, there are signs that this dynamic is starting to shift. Organizations like Tell Mama, which records anti-Muslim incidents in the UK, are pushing for more concrete action. Iman Atta’s statement, “This sends an important message that no platform or body operating in this country is above scrutiny,” signals a growing recognition of the need for greater accountability.
Critics remain skeptical, however. Danny Stone, chief executive of the Antisemitism Policy Trust, acknowledges the importance of X’s commitments but emphasizes that more needs to be done to tackle open racism on social media platforms. His concerns are well-founded: as long as these platforms prioritize growth and engagement over user safety, they’ll continue to enable harm.
The UK has seen a disturbing series of attacks targeting Jewish communities in recent months. These incidents highlight the urgent need for effective regulation and enforcement. As Ofcom continues its investigation into X’s AI tool Grok, it’s clear that the future of social media in the UK hangs in the balance.
Only time will tell whether X’s promises translate into meaningful action.
Reader Views
- Cole P. · science writer
While X's pledge to review reports of hate and terror content within 24 hours is a necessary step, we need to scrutinize what this actually means in terms of effectiveness. What constitutes "hate and terrorist content" in practice? The ambiguity surrounding online harm metrics allows social media companies to game the system with self-reporting and arbitrary definitions. Until these issues are addressed, commitments like X's will remain empty promises without clear benchmarks for success or meaningful consequences for non-compliance.
- The Lab Desk · editorial
The proposed 24-hour review period for hate and terror content on social media platform X is a Band-Aid on a festering wound. What’s missing from this conversation is the elephant in the room: the lack of effective collaboration between regulators and tech companies to develop concrete, industry-wide standards for online safety. Without those shared goals, progress will remain incremental at best, allowing hate speech and terrorist propaganda to persist on platforms like X until someone gets hurt – again.
- Dr. Elena M. · research scientist
The pace of social media regulation is indeed glacial. However, I'd argue that X's promise to review reports within 24 hours on average is more of a PR stunt than a genuine commitment to change. What about the content that slips through the cracks? A focus solely on speeding up the moderation process overlooks the underlying issue: the lack of transparency and accountability in social media companies' algorithms and reporting systems. We need to see more robust data sharing between platforms, regulators, and researchers to truly combat online harm.