The U.K. Online Safety Bill completed its passage through Parliament earlier this week and is now on its way to the King for Royal Assent, after which it will become law. This new legislation seeks to protect people from illegal content and activity online, with a critical focus on protecting children so that they do not encounter content and activity that could be harmful to them.
The regulated parties are the providers of user-to-user online services, which captures the big social media platforms and the like; the providers of search services, which include search engines; and, to a lesser extent, publishers of online pornography. Broadly speaking, the platforms will have to perform risk assessments to understand the extent to which their users will encounter illegal or harmful content, block access to this content and arbitrate between other interests that might be engaged by blocking, such as journalistic interests and freedom of expression.
Illegal content covers content consisting of words, images, speech or sounds whose publication or possession is already an offense, or which is made an offense by this legislation. This covers child sexual exploitation and abuse content; terrorism content; hate content; communications offenses such as malicious communications, harassment and cyberstalking; related sexual offenses such as revenge pornography, upskirting and e-flashing; and encouraging or assisting self-harm. Content that is harmful to children includes pornography; glorification of suicide, self-harm and eating disorders; abuse and hate related to race, religion, sex, sexual orientation, disability or gender reassignment; cyberbullying; depiction of serious violence and injury to people or animals; performance of dangerous stunts; and self-administration of harmful substances.
Plainly, this is a wide array of content, and it is broad enough to capture many shades of opinion and controversy. For example, no one would sensibly argue against trying to curtail the spread of terrorism or child abuse content, but where do we draw the line on stunts, which, conceptually at least, will capture shows such as Jackass and countless YouTube channels? Is there a bright-line test to distinguish between the performance of a stunt and the encouragement of it? There are real and substantial freedom of speech issues here and, no doubt, the legislation will trigger both macro and micro legal disputes, ranging from landmark court cases through to discrete complaints and claims that the regulated service providers will have to manage in a judge-and-jury sense.
However, the controversy that I want to focus on concerns section 122.
Notices to Deal With Terrorism Content or Child Sexual Exploitation and Abuse Content
Section 122 will empower Ofcom, the communications regulator, to serve notices on regulated platforms requiring them to identify terrorism and CSEA content and to prevent people from encountering it, or to develop or source technology for these purposes.
The controversy within these powers is the classic law enforcement versus privacy dichotomy that has played out so many times in the past. For example, 9/11 encouraged mass surveillance, which in turn triggered Edward Snowden’s leaks, which in turn triggered the Schrems cases against the EU-U.S. data transfer schemes. The San Bernardino terrorist attacks led to the FBI versus Apple litigation.
Reading section 122 at face value, it certainly poses a threat to communications encryption, which in turn creates two risks. Firstly, there is the obvious privacy risk: why should private sector companies become a tool of law enforcement and be empowered to surveil everyone's communications? Isn't this mass surveillance by the back door? Secondly, there is the risk to security itself, with the argument being that once you break encryption, it's broken for everyone, which in turn helps the bad guys.
The reaction to these powers has included threats or insinuations by various technology companies that provide encrypted communication channels (e.g., end-to-end encryption) that they will shut up shop in the U.K. Many privacy activists have been horrified. Here’s a view from Amnesty International.
The counter argument, if it needs stating, which provides the foundational stone for the legislation, is that terrorists, abusers and criminals take advantage of legitimate technologies to harm individuals and society, and the status quo severely undermines effective law enforcement. This is plainly right. Anonymity-preserving technologies are central to cybercrime's growth and success. End-to-end encrypted communications are a tool and accelerator of crime, as are Tor, virtual private networks, proxies, virtual machines and cryptocurrency.
Another counter argument, which has been scoffed at by some, is there are ways to scan and filter encrypted technologies without breaking encryption itself. This presupposes two ideas: that an encrypted message in motion can be scanned, or that scanning applications can be added to either side of the encrypted channel to operate on the resulting plaintext. Well, we shall see about that.
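To make the second idea concrete, here is a minimal, purely illustrative Python sketch of client-side scanning: the sender's device hashes the plaintext and checks it against a blocklist of known-content digests before the message ever enters the encrypted channel, so the channel's encryption itself is untouched. The blocklist contents and the use of an exact cryptographic hash are assumptions for illustration; deployed proposals typically rely on perceptual hashing and far more elaborate protocols that can match near-duplicate images, not just identical bytes.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known prohibited content.
# In a real scheme this would be distributed and updated by an external
# authority; here it is hardcoded purely for illustration.
BLOCKLIST = {
    hashlib.sha256(b"known-prohibited-payload").hexdigest(),
}

def may_send(plaintext: bytes) -> bool:
    """Scan plaintext on the sender's device, before encryption.

    Returns True if the message clears the blocklist check and may
    proceed into the end-to-end encrypted channel.
    """
    digest = hashlib.sha256(plaintext).hexdigest()
    return digest not in BLOCKLIST

# An innocuous message passes; a blocklisted payload is flagged.
print(may_send(b"hello"))                     # True
print(may_send(b"known-prohibited-payload"))  # False
```

The point of the sketch is architectural: the scan operates on plaintext at the endpoint, not on ciphertext in transit, which is precisely why proponents say encryption is "not broken" and why critics say the confidentiality of the channel is undermined all the same.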
Will s.122 Break Encrypted Communications?
My personal point of view, to apply a realpolitik lens to the argument (or reallaw), is that the s.122 power will be a paper tiger, at least for the foreseeable future. Firstly, I do not perceive that it will be used. Secondly, if it is used, it would probably be successfully challenged in court.
There are many arguments here—and I will save most of the details for use in future cases—but key issues include:
- The power will require the platforms to use “accredited technology” or develop or source technology that meets government standards. However, neither the accreditation scheme nor the government standards exist. Developing them to a level of technical and legal robustness will likely take years and will be subject to legal challenges along the way.
- Where the power turns on the platforms developing or sourcing technology, this is subject to a "best endeavours" clause. You can drive a legal coach and horses through this kind of language, especially where it operates in a technological environment.
- The power can only be used where it is “necessary and proportionate.” Again, there are multiple legal hurdles to overcome.
- The use of the power is discretionary. Even if all of the preceding barriers can be overcome, the regulator's discretion creates yet more.
- The use of the power needs to be preceded by “warning notices.” Again, this contains multiple, complex legal bear traps for the regulator.
Next, you need to keep in mind how law works in practice in a regulated environment. I doubt that the regulator will relish the fight, but if I am wrong on the current temperature, well, people come and go and things change during the course of a legal battle.
Add to that the political dimensions. Will there be a political thumb on the scale, and can we count on the government wanting to bring on the fight?
The scourge of online harms is real and growing, and my personal perspective is that there are lots of reasons to applaud this legislation and to support its success. Hopefully, it will deliver on its main objective: to increase the protection of children online.
However, I do not believe that it will result in the end of encryption any time soon.