The U.K. Online Safety Act is now law, but questions remain about how it will work in practice. The act requires tech firms to keep minors away from harmful content, enforce age verification, and take greater accountability for what happens on their platforms. While the rules exist, real-world enforcement is uneven and messy.
What’s Happening & Why This Matters
Children’s Commissioner Rachel de Souza has pushed for stricter checks. Speaking with BBC Newsnight, she argued: “We need age-verification on VPNs. It’s absolutely a loophole that needs closing.” Many teens use VPNs to bypass restrictions and access adult material or restricted apps. The act bears on this directly: the government says it has no plans to ban VPNs, but the idea of forcing age checks on them has alarmed privacy advocates. After all, VPNs are meant to protect anonymity by encrypting user traffic.

The act also directs attention to social media platforms. De Souza accused companies of not removing harmful content fast enough, telling Sky News: “They’ve had years to pull this stuff down and protect children, and they’re just not doing enough.” She warned of cases in which children as young as six have viewed violent pornography.
Meanwhile, another wrinkle has entered the debate: artificial intelligence. The U.K. Medicines and Healthcare products Regulatory Agency (MHRA) is leading a new international effort on AI in healthcare, focusing on safe adoption of the technology. In parallel, doctors across NHS England and private practices are testing AI transcription, diagnostic aids, and patient monitoring systems. The Online Safety Act’s scope now also extends to AI-driven platforms that interact with patients, students, and minors, well beyond its original remit.
Concerns extend beyond access to harmful media. Researchers and ethicists are debating how much autonomy AI tools should have. Anthropic, the company behind the Claude chatbot, recently allowed its models to exit “distressing” conversations. This decision has fueled discussions about AI responsibility and human oversight. Critics, like linguist Emily Bender, say these systems are “synthetic text-extruding machines,” while others argue for moral consideration if AI gains memory or simulated distress.

At the same time, mental health advocates fear that powerful AI systems could unintentionally harm vulnerable users. Reports of young people being influenced by chatbots underscore the urgent need for strong guardrails.
The Online Safety Act is meant to keep children safe online, but as the internet grows more complex — with VPNs, encrypted apps, and AI systems — its limits are clear. Tech companies, regulators, and educators must work together to create safeguards without stripping away privacy or innovation.
TF Summary: What’s Next
The U.K. Online Safety Act faces its toughest test yet: enforcement in a fast-changing digital world. Regulators are under pressure to close loopholes around VPN use and hold tech giants accountable. At the same time, the rise of AI systems in healthcare, education, and online chat tools expands the act’s relevance and challenges.
The conversation is far from over. Expect more debates about privacy, AI safeguards, and whether governments should police the digital tools that both protect and empower users. Balancing child safety with user rights will remain one of the most difficult challenges in tech policy, and the Online Safety Act sits at the center of it.