Published just before Christmas, the UK Government’s much-delayed response to the Online Harms White Paper received little attention amid COVID-19 and Brexit. Julian Hayes and Michael Drury analyse the far-reaching impact of the proposals.

Obscured by rising Coronavirus statistics and tail-end Brexit negotiations, the Government’s proposals for stamping out online harms, released at the end of last year, received scant public attention.

The muted response belies their ground-breaking nature and potentially far-reaching impact, both on what we see and how we interact online. Aspects of the proposals – the duty of care, the range of service providers affected, and the means of enforcement – have now crystallised. Those seeking further clarity, though, will have looked in vain: the sound-bite-heavy proposals leave important details to the forthcoming Online Harms Bill and draft Codes of Practice.

The direction of travel is now clear, but concerns over privacy and freedom of speech remain unresolved.

Knowns & unknowns

Underpinning the proposals, which cover the gamut of harms from terrorist content and child abuse material to cyberbullying and the promotion of self-harm, is a duty of care requiring service providers to take steps to prevent user-generated content or activity on their services from causing significant physical or psychological harm to individuals.

All service providers will be obliged to take action against illegal content and activity. Where children are likely to use their products, providers must also protect minors against legal but harmful content and activity, such as pornography. Certain high-risk ‘Category 1’ providers – the big tech titans with large user bases or wide content-sharing functionality – will also be obliged to spell out and enforce how they deal with legal but harmful content and activity accessed by adults. Despite protests from consumer champions, certain activities will fall outside the legislation, including online scams.

The duty of care will apply both to service providers hosting user-generated content accessible in the UK and to those facilitating public and private online interaction between service users here. In practice – and irrespective of their physical location – this will encompass search engines, social media companies, video-sharing platforms, online forums, dating services, instant messaging services and online marketplaces serving the UK. Web-hosting companies, ISPs and VPNs will not be directly affected, though they must co-operate with the online harms regulator, Ofcom, if required.

While aggrieved users may bring individual items of illegal or harmful material to Ofcom’s attention, the regulatory regime will not give individuals the right to sue service providers. Platform providers must, however, continue to take down illegal online material of which they become aware, as existing legislation already requires.

To enforce the new regime, Ofcom will be armed with information-gathering tools, including powers of entry, document production and interview.

Where companies fail to discharge their duty, Ofcom may issue improvement or non-compliance notices, or impose significant administrative penalties of up to £10 million or 10% of the parent company’s annual global turnover (whichever is the higher). In cases of repeated or egregious non-compliance, the regulator may block UK access to the offending service. For now, there will be no personal liability for senior managers.

So far, so clear; but those managing affected companies may be scratching their heads over how, in practice, they should gear up to meet regulatory expectations.

Here matters become opaque, and we must await Ofcom’s statutory codes of practice.

The codes will apparently specify systems, processes and governance, and will include risk assessment steps, content moderation measures, and tools to help users manage harm.


Given these uncertainties, the proposals have unsurprisingly attracted criticism. The tech sector – much of which already goes to considerable lengths to remove harmful online material – has described them as a “confusing minefield”, warned that they may simply divert harmful material from large platforms to smaller, less regulated service providers, and called for clarity about how the proposals will work on the ground.

Privacy concerns

Understandable concern over child safety online features prominently in the proposals, with the Government highlighting how private messaging channels are used to disseminate child sexual exploitation and abuse (CSEA) material. However, the Government’s intention that service providers could be required to use automated technology to monitor private communications for such material should spark wide debate, both about whether the right balance with privacy is being struck and about the message sent to more repressive regimes around the world, which may point to the measures to justify similar intrusion for less laudable aims.

Age verification measures to protect children from legal but harmful online material also raise privacy issues. The Government’s previous attempt at age verification was embarrassingly scrapped in October 2019, despite considerable expenditure on the project. It now seems set to be resurrected, with one option being the use of controversial facial recognition technology as an age ‘gateway’ to adult material. The effect of the proposals on the provision of encrypted services also remains to be considered.

Freedom of speech

With the original Online Harms White Paper raising acute freedom of speech concerns, the Government’s proposals exempt journalistic content and ‘below the line’ comments from regulatory attention. Similarly, to protect freedom of speech from overzealous takedown of material, Ofcom’s codes of practice will oblige service providers to offer effective redress mechanisms for users who feel their rights have been infringed.

Where service providers fear the regulator’s punishment more than an individual complainant, however, the efficacy of such redress mechanisms is doubtful: tech companies may well err on the side of caution, preferring to delete material where doubt over its legality exists.

Reconciling the irreconcilable?

Principled objections aside, the proposals also highlight a tricky conundrum at the heart of the Government’s online harms agenda: how to make the UK both the safest place in the world to go online and the best place to start and grow a digital business. Faced with high-profile instances of online harm, the administration has raised expectations of a sea change in online regulation, promising there will be “no more empty gestures”.

Yet with the Government fervently hoping the UK’s thriving digital sector will pull the country out of its post-pandemic economic doldrums, it must also minimise regulatory burdens which risk stifling homegrown tech SMEs or deterring inward investment by overseas digital players. We must wait for further detail to see how these twin aims can be reconciled.

Conclusion

Since the mobile revolution and the rise of social media, there have been calls to regulate the so-called online ‘wild west’. With the EU publishing its own proposals late last year and the digital regulation debate now joined in the US, the battle lines between governments and tech companies are emerging.

The UK remains at the vanguard of this struggle, with what it would describe as a proportionate, risk-based regulatory model that has sparked considerable international interest. Whether the Government can refine the details to satisfy competing pressure groups and chart a consensual way forward will determine whether others now follow its lead.

Michael Drury & Julian Hayes, partners, BCL Solicitors LLP