Social media’s relationship with children is under intense scrutiny as new investigations reveal alarming gaps in safety measures. TikTok and Instagram, two of the most popular platforms among kids and teens, face growing pressure from privacy watchdogs and regulators to take more decisive action to protect young users. Despite promises of better safety tools, recent reports indicate that both companies’ efforts fall short, leaving children vulnerable to harmful content and invasive data practices.
What’s Happening & Why This Matters

Canadian privacy regulators have sharply criticised TikTok, saying its tools to keep children under 13 off the platform are “inadequate.” A joint investigation by federal and provincial privacy commissioners revealed that a troubling number of young children are using TikTok, despite the company’s stated policies. In Quebec, 40% of children aged 6 to 17 have a TikTok account. Furthermore, 17% of children aged 6 to 12 actively use the app.
Privacy Commissioner Philippe Dufresne said that TikTok must enhance its underage verification systems and provide clear, accessible information about how it collects and utilises children’s personal data. He stated,
“Our investigation found that measures TikTok uses to keep children off the popular video-sharing platform and to prevent the collection and use of their sensitive personal information were inadequate.”
TikTok has agreed to make improvements, including enhanced age verification and better privacy communications, but authorities remain cautious. Officials are concerned about TikTok’s use of facial and voice analytics, location data, and spending behaviour to build detailed profiles of young users and to deliver tailored content and targeted ads.
The added oversight follows rising geopolitical tensions surrounding TikTok’s Chinese parent company, ByteDance, and national security concerns in both Canada and the United States. Canada has ordered the dissolution of TikTok’s Canadian business, while U.S. President Donald Trump has hinted at a possible acquisition under which tech companies and prominent billionaires would take control of TikTok’s U.S. operations.
Instagram’s Failing Safety Net
On Instagram, the picture is equally troubling. A comprehensive review led by Meta whistleblower Arturo Béjar found that two-thirds of Instagram’s new teen safety tools are ineffective. The research, conducted with New York University, Northeastern University, and child-safety foundations, tested 47 tools designed to protect young users. Alarmingly, 30 of those tools were rated “red,” meaning they could be easily bypassed or had been quietly discontinued.

Béjar’s report accuses Meta of negligence and misleading the public about the true safety of its platforms. He said,
“Kids, including many under 13, are not safe on Instagram. This is not about bad content on the internet, it’s about careless product design.”
Test accounts created to mimic teens and parents revealed shocking weaknesses:
- Adults could message minors directly, despite features supposedly blocking this.
- Offensive messages, such as explicit threats, were not filtered or flagged, revealing flaws in the “hidden words” feature.
- Algorithms promoted harmful content, including self-harm, eating disorders, and illegal activities.
- Meta’s much-publicised time-management tools were either discontinued or renamed, making them harder for parents to find and use.
The findings prompted advocacy groups, including the Molly Rose Foundation and David’s Legacy Foundation, to demand stricter enforcement under the UK’s Online Safety Act. Regulators such as Ofcom now have the authority to impose severe penalties on companies that fail to protect children, including by mandating safer algorithms and restricting toxic content feeds.
The Push for Accountability

Governments worldwide are intensifying their efforts to regulate tech companies, particularly in relation to the online safety of children. In the UK, platforms are now legally obligated to prevent the spread of harmful material targeting minors. Enforcement will focus on algorithm transparency, data protection, and reducing compulsive platform use.
Meta, however, continues to push back. The company argues that its teen accounts provide “automatic safety protections” and claims its parental controls are industry-leading. Despite this defence, the evidence suggests that many of these protections are either poorly implemented or easily circumvented.
The Bigger Picture
Children’s mental health remains at the heart of this debate. Families who have suffered tragic losses due to harmful content are calling for systemic changes. Harmful algorithm recommendations, unrestricted adult contact, and addictive design choices have created an environment in which young users are both the product and the target.
As the digital world becomes increasingly integrated into daily life, the responsibility of tech companies to protect their youngest users grows heavier. Failure to act invites regulatory crackdowns and erodes public trust in these platforms.
TF Summary: What’s Next
The pressure on TikTok and Instagram to clean up their platforms is mounting. Regulators are prepared to enforce stricter penalties, and public opinion is shifting toward holding social media companies accountable for the harm caused to children. Expect more global collaboration on online safety laws, with the UK’s Online Safety Act serving as a model for other nations.
MY FORECAST: Social platforms face massive fines and possible forced restructuring if they don’t radically improve child safety tools — soon. Governments are ready to step in where companies fail.