
On Monday, Britain’s internet regulator Ofcom published its first set of final guidelines for online service providers covered by the Online Safety Act. This starts the clock on the country’s first compliance deadline under the wide-ranging online harms law, which the regulator expects to kick in in three months’ time.
Ofcom has been under pressure to implement the online safety regime more quickly since the summer riots, which were widely seen as having been fueled by social media activity. It is following a process set out by lawmakers, however, which requires the final compliance measures to be consulted on and approved by parliament.
“The illegal harms codes and guidance are an important milestone in ensuring that online providers are legally required to protect their users from illegal harm,” Ofcom wrote in a press release.
“Providers now have a duty to assess the risk of illegal harms on their services, with a deadline of March 16, 2025. Subject to the codes completing the parliamentary process, from March 17, 2025, providers will need to take the safety measures set out in the codes or use other effective measures to protect their users from illegal content and activity.”
“We are ready to take enforcement action if providers do not act promptly to address the risks on their services,” it added.
According to Ofcom, more than 100,000 technology companies could fall within the law’s obligation to protect users from illegal content. The duties relate to more than 130 “serious crimes” set out in the law, covering areas such as terrorism, hate speech, child sexual abuse and exploitation, and fraud and financial crimes.
Failure to comply risks fines of up to 10% of global annual turnover (or up to £18 million, whichever is greater).
Companies in scope range from technology giants to “very small” service providers, with sectors as diverse as social media, dating, gaming, search and pornography affected.
“The obligations of this Act apply to UK-connected service providers, regardless of where they are based in the world. The number of regulated online services may total more than 100,000 and range from the world’s largest technology companies to the smallest of services,” Ofcom said.
The codes and guidance follow a consultation in which Ofcom reviewed research and took responses from stakeholders to help shape the rules, after the legislation passed parliament in autumn 2023 and became law that October.
The regulator outlined measures for user-to-user and search services to reduce risks associated with illegal content. Guidance on risk assessment, record keeping and review is outlined in official documents.
Ofcom also today published a summary covering each chapter of the policy statement.
The approach taken by the UK law is the opposite of one-size-fits-all: in general, more obligations apply to larger services and platforms where multiple risks may arise than to smaller services with fewer risks.
However, even smaller, lower-risk services are not exempt from the obligations. Indeed, many requirements apply to all services in scope: having a content moderation system that allows illegal content to be taken down quickly; a mechanism for users to submit content complaints; clear and accessible terms of service; the removal of accounts run by proscribed organizations; and more. That said, many of these blanket measures are features that mainstream services are likely to offer already.
Still, it is fair to say that every tech company offering user-to-user or search services in the UK will need to undertake at least an assessment of how the law applies to its business, in order to identify its specific areas of regulatory risk.
For large platforms with engagement-driven business models, where the ability to monetize user-generated content is tied to capturing people’s attention, significant operational changes may be needed to avoid falling foul of the law’s obligations to protect users from myriad harms.
A key lever for driving change is the law’s introduction of criminal liability for senior executives in certain circumstances, meaning tech CEOs can be held personally liable for some types of noncompliance.
Speaking on BBC Radio 4’s Today program on Monday morning, Ofcom CEO Melanie Dawes suggested that 2025 will finally bring significant changes to how major technology platforms operate.
“What we’re announcing today is actually a big moment for online safety, because in three months’ time, tech companies will have to start taking appropriate action,” she said. “What will they need to change? They need to change the way their algorithms work. They need to test them to ensure that illegal content, like terrorism, hate, and intimate image abuse, doesn’t actually show up in our feeds.”
“And if something does slip through the net, they’re going to have to take it down. And for children, we want their accounts to be set to private, so strangers can’t contact them,” she added.
That said, Ofcom’s policy statement is only the start of implementing the legal requirements: the regulator is still working on further measures and duties relating to other aspects of the law. These will be introduced in the new year, including what Dawes said would be “wider protections for children.”
So the more substantive child safety changes to platforms that parents have been demanding may not filter through until later in the year.
“In January, we will come forward with our requirements on age verification, so that we know where children are,” Dawes said. “And then in April, we will finalize the rules on the wider protections for children. This will be about pornography, suicide and self-harm material, violent content and so on — material that is all too commonplace online today, and genuinely harmful.”
Ofcom’s summary document also notes that further measures may be needed to keep pace with technological developments, such as the rise of generative AI, indicating that it will keep risks under review and may evolve its requirements on service providers further.
The regulator is also planning additional measures, including “crisis response protocols for emergency events” such as last summer’s riots; proposals for blocking the accounts of those who share child sexual abuse material (CSAM); and guidance on using AI to tackle illegal harms.