As disinformation spreads during UK riots, regulators are currently powerless to take action


LONDON — Ofcom, the U.K.’s media regulator, was chosen last year by the government as the regulator in charge of policing harmful and illegal content on the internet under strict new online safety regulations.

But even as online disinformation related to the stabbings has led to real-world violence, the regulator finds itself unable to take effective enforcement action.

Last week, a 17-year-old knifeman attacked several children attending a Taylor Swift-themed dance class in the English town of Southport in Merseyside.

Three girls were killed in the attack. Police subsequently identified the suspect as Axel Rudakubana.

Shortly after the attack, social media users were quick to falsely identify the perpetrator as an asylum seeker who arrived in the U.K. by boat in 2023.

On X, posts sharing the fake name of the perpetrator were widely shared and viewed millions of times.

That in turn helped spark far-right, anti-immigration protests, which have since descended into violence, with shops and mosques being attacked and bricks and petrol bombs being hurled.

Why can’t Ofcom take action?

U.K. officials subsequently issued warnings to social media firms urging them to get tough on false information online.

Peter Kyle, the U.K.’s technology minister, held conversations with social media firms such as TikTok, Facebook parent company Meta, Google and X over their handling of misinformation being spread during the riots.

But Ofcom, the regulator tasked with acting on failures to tackle misinformation and other harmful material online, cannot at this stage penalize the tech giants for allowing harmful posts inciting the ongoing riots, because not all of the act’s powers have come into force.

New duties on social media platforms under the Online Safety Act requiring firms to actively identify, mitigate and manage the risks of harm from illegal and harmful content on their platforms have not yet taken effect.

Once the rules fully take effect, Ofcom would have the power to levy fines of as much as 10% of companies’ global annual revenues for breaches, or even jail time for individual senior managers in cases where repeat breaches occur.

But until that happens, the watchdog is unable to penalize firms for online safety breaches.


Under the Online Safety Act, the sending of false information intended to cause non-trivial harm is considered a punishable criminal offense. That would likely include misinformation aiming to incite violence.

How has Ofcom responded?

An Ofcom spokesperson told CNBC Wednesday that the regulator is moving quickly to implement the act so that it can be enforced as soon as possible, but the new duties requiring tech firms by law to actively police their platforms for harmful content won’t fully come into force until 2025.

Ofcom is still consulting on risk assessment guidance and codes of practice on illegal harms, which it says it needs to establish before it can effectively implement the measures of the Online Safety Act.

“We are speaking to relevant social media, gaming and messaging companies about their responsibilities as a matter of urgency,” the Ofcom spokesperson said.

“Although platforms’ new duties under the Online Safety Act do not come into force until the new year, they can act now — there is no need to wait for new laws to make their sites and apps safer for users.”

Gill Whitehead, Ofcom’s group director for online safety, echoed that statement in an open letter to social media companies Wednesday, which warned of the heightened risk of platforms being used to stir up hatred and violence amid recent acts of violence in the U.K.


“In a few months, new safety duties under the Online Safety Act will be in place, but you can act now – there is no need to wait to make your sites and apps safer for users,” Whitehead said.

She added that, even though the regulator is working to ensure firms rid their platforms of illegal content, it still recognizes the “importance of protecting freedom of speech.”

Ofcom says it plans to publish its final codes of practice and guidance on online harms in December 2024, after which platforms will have three months to conduct risk assessments for illegal content.

The codes will be subject to scrutiny from U.K. Parliament, and unless lawmakers object to the draft codes, the online safety duties on platforms will become enforceable shortly after that process concludes.

Provisions for protecting children from harmful content will come into force from spring 2025, while duties on the largest services will become enforceable from 2026.


