As misinformation spreads, the UK may seek stronger powers to regulate tech platforms.

The British government has signalled it may seek stronger powers to regulate technology platforms after days of violent unrest in England and Northern Ireland sparked by the spread of online misinformation.

On Friday, Prime Minister Keir Starmer confirmed he would review the Online Safety Act (OSA).

Passed by Parliament in September 2023 after years of political debate, the law obliges platforms that facilitate user-to-user communication — including social media platforms and messaging apps — to remove illegal content and protect users from other harms, such as hate speech, and imposes fines of up to 10% of global annual revenue for failure to comply.

“The first thing I want to say about online and social media is that this is not a lawless place. That is clear from the prosecutions and the sentences,” Starmer said, stressing that those who incite hate online are already facing consequences. The Crown Prosecution Service reported that the first sentence for hate speech related to violent disorder has been handed down.

But Starmer added: “I agree that we need to look more broadly at social media following this disruption, but for now we need to focus on addressing the disruption and making sure our communities are safe and secure.”

The Guardian reported that the review was confirmed after London Mayor Sadiq Khan criticised the OSA, calling it “not fit for purpose”.

Violent riots have erupted in towns and cities across England and Northern Ireland since a knife attack in Southport on July 30 left three young girls dead.

False information about the perpetrator of the attack incorrectly identified him as a Muslim asylum seeker who had arrived in the country on a small boat. The claims spread quickly online, including through social media posts amplified by far-right activists, and are widely linked to the civil unrest that has rocked the country in recent days.

Also on Friday, it was reported that a British woman had been arrested under the Public Order Act 1986 for allegedly inciting racial hatred by making a false social media post about the identity of the Southport attacker.

Such arrests remain the government's immediate priority for quelling the unrest, but the broader question of what to do about the technology platforms and other digital tools used to spread misinformation at scale is unlikely to go away.

As previously reported, the OSA is not yet fully in force, as the regulator, Ofcom, is still consulting on the codes of practice platforms will have to follow. Some might therefore argue it is premature to consider new legislation before at least the middle of next year, to give the law a chance to work.

At the same time, the legislation has been criticised as poorly drafted and as failing to address the platforms' fundamental business model of profiting from outrage-driven engagement.

The previous Conservative government also made major changes to the bill in the autumn of 2022, notably removing provisions aimed at tackling "legal but harmful" speech (a category that typically includes misinformation).

At the time, digital minister Michelle Donelan said the government was responding to concerns about the bill's impact on free speech. But another former minister, Damian Collins, challenged the government's framing, arguing that the removed clause was simply intended to apply transparency measures to ensure platforms enforce their own terms of service – for example, in situations where content risks inciting violence or hatred.

Mainstream social media platforms, including Facebook and X (formerly Twitter), generally have terms and conditions that prohibit such content, but it is not always clear how strictly they enforce these standards. (To take one recent example: on August 6, a British man was arrested on suspicion of inciting racial hatred by posting a message on Facebook calling for attacks on hotels housing refugees.)

Platforms have long employed a plausible deniability playbook: saying they remove content when it’s reported. But laws regulating the resources and processes they have in place could force them to be more proactive in stopping the free flow of toxic misinformation.

A test case involving X is already underway in the European Union, where authorities enforcing the bloc's Digital Services Act have been investigating how the platform moderates misinformation since December.

On Thursday, the EU told Reuters that X’s handling of harmful content related to civil unrest in the UK could be taken into account in its own investigation into the platform. “What is happening in the UK can be seen here,” a commission spokesperson said, adding that “if there are instances of hate speech or incitement to violence, that could be taken into account as part of the proceedings against X.”

According to the Department for Science, Innovation and Technology, once the OSA is fully operational in the UK by next spring, the law could put similar pressure on big platforms to act on misinformation. A department spokesperson said that under the law, the biggest platforms will be required to consistently enforce their own terms of service, including where those terms prohibit the spread of misinformation.
