Ofcom Issues Guidance for Tech Firms to Tackle Online Harms

UK communications regulator Ofcom has introduced new guidance for tech firms to tackle online harms on their platforms.

This is part of its obligations under the Online Safety Act.

The codes of practice on illegal online harms focus on offences such as terrorism, hate, fraud, child sexual abuse and assisting or encouraging suicide.

Tech platforms, including social media firms, search engines, messaging, gaming and dating apps, and pornography and file-sharing sites, have until March 16, 2025, to complete their illegal harms risk assessments.

The Online Safety Act was passed in October 2023, and places obligations on tech firms to tackle a range of online harms on their platforms, from child sexual abuse to fraud.

Ofcom has the power to fine tech companies up to £18m ($22.8m) or 10% of their qualifying worldwide revenue, whichever is greater, for failure to comply with the requirements.
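
For illustration, using hypothetical figures: a platform with £1bn in qualifying worldwide revenue would face a maximum penalty of £100m, since 10% of its revenue exceeds the fixed £18m figure; the £18m cap is the binding limit only for firms with revenue below £180m.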

The new codes of practice were introduced following a consultation period.

UK Technology Secretary Peter Kyle commented: “If platforms fail to step up the regulator has my backing to use its full powers, including issuing fines and asking the courts to block access to sites.”

How to Tackle Illegal Online Harms

The new Ofcom codes set out specific measures tech companies can take to mitigate illegal online harms. These include:

  • Naming a senior person in the organization who is accountable to its most senior governance body for compliance, reporting and complaints duties
  • Ensuring moderation teams are appropriately resourced to remove illegal material quickly once they become aware of it, and that reporting and complaints functions are easy to find and use
  • Improving the testing of algorithms to make illegal content harder to disseminate
  • Tackling pathways to online grooming, including ensuring children’s profiles and locations are not visible to other users and providing guidance to children on the risks of sharing personal information
  • Using hash-matching and URL detection to detect child sexual abuse material (CSAM) (a simplified sketch of this approach follows the list)
  • Using tools to identify illegal intimate image abuse and cyberflashing
  • Establishing a dedicated reporting channel through which organizations with fraud expertise can flag known scams to platforms in real time so that action can be taken (see the second sketch below)
  • Removing users and accounts that generate or share posts on behalf of terrorist organizations proscribed by the UK government
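
To make the hash-matching measure concrete, below is a minimal, purely illustrative sketch. Ofcom’s codes do not prescribe an implementation; real deployments use perceptual hashes (such as PhotoDNA) and hash lists curated by bodies like the Internet Watch Foundation, which survive re-encoding and resizing in a way the exact SHA-256 digests used here do not. All names and data sources in the sketch are assumptions.

```python
import hashlib

# Purely illustrative hash-matching, NOT a production CSAM scanner.
# Real systems use perceptual hashes (e.g. PhotoDNA) and lists maintained
# by bodies such as the IWF; exact digests are used here for simplicity.

KNOWN_BAD_DIGESTS: set[str] = set()  # assumed: loaded from an industry hash list
BLOCKED_URLS: set[str] = set()       # assumed: loaded from a curated URL list


def matches_known_hash(image_bytes: bytes) -> bool:
    """Return True if the image's SHA-256 digest is on the known-bad list."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_DIGESTS


def is_blocked_url(url: str) -> bool:
    """Return True if the URL appears on the blocklist (exact match only)."""
    return url.strip().lower() in BLOCKED_URLS


def screen_upload(image_bytes: bytes, source_url: str | None = None) -> str:
    """Classify an upload as 'block' or 'allow' using both checks."""
    if matches_known_hash(image_bytes):
        return "block"  # digest matches known material: remove and escalate
    if source_url is not None and is_blocked_url(source_url):
        return "block"  # content originates from a known-bad URL
    return "allow"
```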
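
Similarly, the codes describe the purpose of a dedicated fraud-reporting channel but not its mechanics. The sketch below imagines it as a simple web endpoint; the use of FastAPI, the field names and the triage queue are all assumptions for illustration.

```python
from datetime import datetime, timezone
from fastapi import FastAPI
from pydantic import BaseModel, HttpUrl

# Hypothetical sketch of a "trusted flagger" intake endpoint for fraud
# reports. Ofcom's codes describe the channel's purpose, not its design;
# everything below is an assumption made for illustration.

app = FastAPI()


class ScamReport(BaseModel):
    reporter_org: str      # vetted organization with fraud expertise
    scam_url: HttpUrl      # URL of the suspected scam content
    description: str       # free-text summary of the observed fraud
    reported_at: datetime  # when the reporter observed the scam


TRIAGE_QUEUE: list[ScamReport] = []  # stand-in for a real work queue


@app.post("/trusted-flagger/scam-reports")
def submit_report(report: ScamReport) -> dict:
    """Accept a report from a vetted organization for fast-track review."""
    TRIAGE_QUEUE.append(report)
    return {
        "status": "queued",
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
```

In this imagined flow, a vetted partner would POST a JSON report to the endpoint and the platform’s trust and safety team would work the queue on an expedited basis.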

Ofcom warned that while it will support providers in complying with these new duties, it is prepared to take early enforcement action against any platforms that ultimately fall short.

The regulator said it is working towards an additional consultation in Spring 2025 on further measures for the codes, including the use of AI to tackle illegal harms and crisis response protocols for emergency events.