
What is the Online Safety Act?

Written by Alicia Hempsted
Reviewed by Kara Gammell
5 min read
Updated: 23 May 2025

The Online Safety Act 2023, which gives extra protections to children and vulnerable people online, introduces new rules that will make some big changes for UK internet users. These are already coming into effect, so find out what to expect.

The Online Safety Act 2023, put together by the UK government and Ofcom, will make some big changes to the online landscape for UK users.

For parents, this may be a big relief, with the Online Safety Act adding valuable protections, especially for children using the internet – whether they're browsing YouTube videos or playing online games with their friends.

New rules mean they may be less likely to stumble upon harmful, disturbing, or age-inappropriate content online, and plans to tackle weak age verification methods mean it should be harder for them to access adult websites.

Some of these new regulations will come into force in Summer 2025, so you can expect to see some changes online.

For an in-depth look at the Online Safety Act, gov.uk has published an online explainer, but continue reading for a summary of what's in store.

What and who does the Act apply to?

These new rules apply to all companies with services available to the UK market, even if the company providing the service is based outside of the UK.

This includes any platform or service that allows users to share content or interact with other users. Here are a few examples:

  • Social media (e.g. Instagram, X, Facebook)

  • Search services (e.g. Google, Bing, Pinterest)

  • Cloud file storage and sharing (e.g. iCloud, Google Drive, Dropbox)

  • Video-sharing (e.g. YouTube, TikTok, Twitch)

  • Online forums (e.g. Reddit, Steam, 4chan)

  • Dating apps (e.g. Tinder, Hinge, Bumble)

  • Instant messaging (e.g. Snapchat, WhatsApp, Discord)

  • Online multi-player games & gaming platforms (e.g. Roblox)

What does 'harmful content' mean?

Harmful content as outlined in the Act fits into two categories: illegal content and content that may be harmful to children.

Illegal content is defined as:

  • Child sexual abuse

  • Controlling or coercive behaviour

  • Extreme sexual violence

  • Extreme pornography

  • Fraud

  • Racially or religiously aggravated public order offences

  • Inciting violence

  • Illegal immigration and people smuggling

  • Promoting or facilitating suicide

  • Selling illegal drugs or weapons

  • Sexual exploitation

  • Terrorism

The Act has also introduced a number of new criminal offences:

  • Encouraging or assisting serious self-harm

  • Cyberflashing

  • Sending false information intended to cause non-trivial harm

  • Threatening communications

  • Intimate image abuse

  • Epilepsy trolling

Companies have a duty to remove this content as soon as possible, prevent it from being shown, and give users a clear and easy way to report it.

Content that is harmful to children is defined as:

  • Pornography

  • Content that encourages, promotes, or provides instructions for either self-harm, eating disorders, or suicide

  • Bullying

  • Abusive or hateful content

  • Content which depicts or encourages serious violence or injury

  • Content which encourages dangerous stunts and challenges

  • Content which encourages the ingestion, inhalation, or exposure to harmful substances

Content that's considered 'harmful to children' may still be available to adults under the Act, but companies have a duty to prevent children from accessing this kind of content.

What's going to change?

The way companies interpret their new duties will differ depending on the service they provide and the kinds of risks their users may be exposed to.

It's likely that over the next year or so you will get some emails and notifications about 'changes to your terms of service' from websites, platforms, or apps you have an account with.

One change you can expect across many websites and platforms is stricter age verification methods.

Most companies that offer online adult content or services with user-to-user interaction already have some measures to limit children's access, the most common being asking users to provide a date of birth.

However, a UK survey conducted by Ofcom in 2024 revealed that 22% of eight to 17-year-olds lie about their age on social media apps.

The Online Safety Act requires companies to add more robust age checks. Once the Summer 2025 deadline to put these protections in place has passed, creating an account on certain websites might require you to provide a copy of your photo ID or credit card details to verify your age.

Kara Gammell
Senior Editorial Strategy Lead/Brand Spokesperson

A step in the right direction but not a cure-all

While companies will be taking on more responsibilities to protect their users, it isn't safe to let children go online unsupervised without any safety measures in place. It's not just harmful content parents need to be worried about.

For example, almost half of UK eight to 17-year-olds have been victims of online scams, and the number of young people identified as having a gambling problem doubled to 85,000 in 2024, with online gambling in particular on the rise.

To ensure children are using the internet safely and responsibly, it's important that parents take steps to protect children online and educate them about the dangers. One way they can do this is to set up broadband parental controls, which offer a number of useful tools to manage and monitor children's internet activity.

What happens to companies that break the rules?

Ofcom, the regulator of online safety, has been given new powers to hold companies that break the rules to account.

Companies can be fined as much as 10% of their worldwide revenue or £18 million, whichever is higher, and senior managers can be held criminally liable for failing to comply with duties relating to child safety and child sexual abuse and exploitation.
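To put that in perspective, using hypothetical figures: a company with worldwide revenue of £1 billion could face a fine of up to £100 million, since 10% of its revenue is higher than £18 million, while a company with revenue below £180 million would face a maximum fine of the flat £18 million.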

In extreme cases, Ofcom can prevent companies from making money in the UK or being accessed by UK users by stopping internet service providers and advertisers from working with them.

If you feel that an online service is breaking these rules or isn't doing enough to protect its users, you should complain directly to the service first. If you're still concerned, you can report the service to Ofcom.

Ofcom doesn't respond to individual complaints, but sharing your complaint can help inform its assessments.

When a regulated company receives multiple complaints, it's a sign they may not be following the rules, which would prompt Ofcom to investigate and possibly take action.