On Monday, UK internet regulator Ofcom published the first set of final guidelines for online service providers subject to the Online Safety Act. That starts the clock ticking on the first compliance deadline of the online harms law, which the regulator expects to kick in three months from now.
Ofcom has been under pressure to move faster in implementing the online safety regime following riots in the summer that were widely perceived to have been fueled by social media activity. However, it is following a process set by lawmakers, which has required it to consult on the measures and have the final compliance codes approved by parliament.
“This decision on the Illegal Harms Codes and guidance marks a major milestone, with online providers now being legally required to protect their users from illegal harm,” Ofcom said in a press release.
“Providers now have until March 16, 2025 to assess the risk of illegal harms on their services. From March 17, 2025, subject to the Codes completing the parliamentary process, providers will need to take the safety measures set out in the Codes, or use other effective measures, to protect users from illegal content and activity.”
“We are ready to take enforcement action if providers do not act promptly to address the risks on their services,” the regulator added.
According to Ofcom, more than 100,000 tech firms may be in scope of the law’s duties to protect users from a range of illegal content; the Act covers more than 130 “priority offences” spanning areas including terrorism, hate speech, child sexual exploitation and abuse, and fraud and financial crimes.
Failure to comply risks fines of up to 10% of global annual turnover (or up to £18 million, whichever is greater).
Companies in scope range from tech giants to “very small” service providers, across a variety of sectors including social media, dating, gaming, search and pornography.
“The duties in the Act apply to service providers with links to the UK, regardless of where they are located in the world. The total number of online services in scope could exceed 100,000, ranging from the world’s largest technology companies to very small services,” Ofcom wrote.
Ofcom has been consulting on the codes and guidance, reviewing research and stakeholder responses to help shape these rules, since the legislation passed through parliament and became law in October 2023.
The regulator has set out measures for user-to-user and search services to reduce the risks associated with illegal content. Its guidance on risk assessment, record-keeping and review is summarized in an official document.
Ofcom has also published a summary covering each chapter of today’s policy statement.
The approach taken by the UK law is not one-size-fits-all: it generally places greater obligations on larger services and platforms, where more risks may arise, than on smaller, lower-risk services.
However, smaller, lower-risk services are not exempt from obligations either. And, indeed, many requirements apply to all services, such as having a content moderation system that allows for the swift removal of illegal content; having a mechanism for users to submit content complaints; having clear and accessible terms of service; removing accounts of proscribed organizations; and many others. That said, many of these blanket measures are features that well-run services may already offer.
But it’s fair to say that any tech firm offering user-to-user or search services in the UK will need to assess how the law applies to its business and identify its specific areas of regulatory risk.
For larger platforms with engagement-centric business models, where the ability to monetize user-generated content is tied to capturing people’s attention, more substantial operational changes may be required to avoid violating the law’s duties to protect users from myriad harms.
A key lever for driving change is the law’s provision for criminal liability for senior executives in certain circumstances, meaning tech executives can be held personally liable for some types of noncompliance.
Speaking to BBC Radio 4’s Today program on Monday morning, Melanie Dawes, chief executive of Ofcom, suggested that 2025 will finally see significant changes to the way major tech platforms operate.
“What we’re announcing today is actually a big moment for online safety, because in three months’ time the technology companies will have to start taking the necessary measures,” she said. “What will they have to change? They have to change the way the algorithms work. They have to test them so that illegal content such as terrorism and hate, and intimate image abuse, doesn’t actually appear on our feeds.”
“And then if things slip through the net, they will have to take them down. And for children, we want their accounts to be set to private, so strangers can’t contact them,” she added.
However, Ofcom’s policy statement is just the start of its implementation of the legal requirements, with the regulator still working on additional measures and duties covering other aspects of the law, including what Dawes called “wider protections for children,” which she said would be introduced in the new year.
So the more significant child-safety changes that parents want platforms to be forced to make may not filter through until later in the year.
“In January we will come forward with our requirements on age checks so that we know where children are,” Dawes said. “Then in April we will finalize the rules around our wider protections for children, and that will cover pornography, suicide and self-harm material, violent content and more, simply not being fed to children in the way that has become so normal but is really harmful today.”
Ofcom’s summary document also notes that further measures may be needed to keep pace with technological developments such as the rise of generative artificial intelligence, suggesting it will continue to review risks and may further expand requirements on service providers.
The regulator is also planning “crisis response protocols for emergency events” like last summer’s riots; proposals for blocking the accounts of those who have shared CSAM (child sexual abuse material); and guidance on using AI to tackle illegal harms.