The Aim of the Online Safety Act
The UK’s long-delayed Online Safety Act has finally become law after receiving royal assent on 26 October 2023.
The Act introduces a range of obligations for large technology firms as to how they should design, operate, and moderate their platforms. It aims to prohibit providers of user-to-user services, such as social media platforms, from hosting illegal or harmful content. The Act is an attempt to make the internet safer, particularly for children, by cracking down on illegal content such as child sexual abuse material. It also aims to reduce, if not eradicate, harmful and age-inappropriate content, including online harassment as well as content that glorifies suicide, self-harm, and eating disorders.
The Act applies a duty of care to providers of user-to-user services and search services that have links to the UK. It requires regulated services to perform risk assessments and to adopt mitigation measures (safety duties). The specific scope of these safety duties varies significantly depending on the nature of the service and the content.
For too long there has been a significant gap in legislation regarding online safety. Whilst there are currently several pieces of legislation that users may be able to rely on, these require victims to bring action against the individual posting the online abuse. The Act provides additional measures that can be brought against the providers of a service being used to carry out online abuse.
The Act is intended to have extraterritorial effect. However, some services will be exempt, including news websites, some retail services, some services used internally by businesses, and email services.
The Online Safety Act – what does it comprise?
It is a vast piece of legislation comprising 213 pages, plus 126 pages of Explanatory Notes.
In essence, platforms will be required to remove content relating to:
- Child sexual abuse.
- Controlling or coercive behaviour.
- Cyber bullying.
- Extreme sexual violence.
- Extreme violence against animals or people.
- Fraudulent adverts involving scams.
- Hate crime and speech.
- Inciting violence.
- Violence against women and girls.
- Illegal immigration and people smuggling.
- Promoting or facilitating suicide.
- Promoting self-harm.
- Revenge porn.
- Selling illegal drugs or weapons.
- Sexual exploitation.
- Terrorism.
Responsibilities under the Online Safety Act
In summary, all platforms are required to:
- Remove illegal content quickly or prevent it from appearing in the first place. This covers content under offences designated as priority offences in the Act, including additions made at the House of Lords stage concerning immigration and modern slavery.
- Prevent children from accessing harmful and age-inappropriate content (such as pornographic content, online abuse, cyberbullying or online harassment, or content which promotes or glorifies suicide, self-harm or eating disorders).
- Enforce age limits and implement age-checking measures.
- Ensure the risks and dangers posed to children on the largest social media platforms are more transparent, including by publishing risk assessments.
- Provide parents and children with clear and accessible ways to report problems online when they do arise.
In relation to adult protections, there will be new transparency, accountability, and freedom of expression duties.
Category 1 organisations will be required to set clear terms of service in relation to the restriction or removal of user-generated content, and the suspension or banning of users on grounds related to user-generated content. These terms of service must be detailed enough for users to understand what content is and is not permitted on the platform.
Category 1 platforms will be required to provide optional user empowerment tools to give users greater control over the content they see. Companies will need to provide these tools for a range of content categories, including content that encourages, promotes, or provides instructions for suicide, self-harm or eating disorders, and content that is abusive or incites hate on the basis of race, religion, sex, sexual orientation, disability or gender reassignment.
Enforcement of the Online Safety Act
As the appointed regulator, Ofcom will be granted considerable and far-reaching enforcement powers. It is also anticipated that Ofcom may, in certain circumstances, apply to the courts for orders restricting access to a relevant service provided by a platform.
Ofcom could take the following action:
- Require companies that are not meeting their obligations to comply.
- Impose large fines of up to £18 million or 10% of global annual turnover (whichever is higher).
- Apply to court for business disruption measures (including blocking non-compliant services).
- Bring criminal sanctions against senior managers who fail to ensure their company complies with Ofcom’s information requests, or who deliberately destroy or withhold information.
- Use a range of powers to gather the information it needs to support its oversight and enforcement activity.
- Make companies change their behaviour by taking measures to improve compliance. This includes using proactive technologies to identify illegal content and ensure children are not encountering harmful material.
- Help companies comply with the new laws by publishing codes of practice setting out the steps they should take to meet their new duties. Companies will either need to follow these steps or show that their approach is equally effective.
The full regulatory regime will come into force in phases over the two years following royal assent.
New Criminal Offences under the Online Safety Act
The Act also creates new offences, such as:
- The false communications offence, aimed at protecting individuals from communications where the sender knowingly sends false information with the intention of causing harm.
- The threatening communications offence, to capture communications that convey a threat of serious harm, such as grievous bodily harm or rape.
- The flashing images offence, aimed at stopping "epilepsy trolling" (the sending of flashing images to people with epilepsy with intent to cause harm).
- An offence criminalising the assisting or encouraging of self-harm online.
Summary
Whilst the Act was introduced with the best of intentions, one of the main concerns about it is that the protections designed to keep adults and children safe could have a considerable impact on, and conflict with, their human rights, such as their right to privacy, freedom of speech and freedom of expression.
Time will soon tell whether and to what extent the additional measures introduced by the Act make the online world a safer place for its users, particularly children, and how it is received by the organisations that are bound by it.
This article was first published on 8 November 2023.
Author: Francesca Burfield