Telegram sells speed, privacy, and freedom. A new investigation says some users are cashing in on something uglier: industrial-scale abuse.
The investigation, from research group AI Forensics, says Telegram has been hosting a large, organised cross-border abuse network spanning Spain and Italy. Researchers say they reviewed nearly 2.8 million messages across 16 Telegram groups over six weeks and found a coordinated ecosystem built around non-consensual sexual content, hacking services, surveillance offers, paid abuse channels, and AI-powered “nudifying” tools. The report says just under 25,000 people actively participated, while the content reached roughly 52,000 people across the two countries.
That is not a moderation slip. That is a system. The findings suggest Telegram is not only struggling with abuse. The platform allegedly supplies a structure and a business model that help abuse spread, regroup, and profit. When a platform lets the same networks revive within hours, the conversation stops being about one bad channel. It turns toward platform design, enforcement failures, and whether Europe is regulating the right problems fast enough.
What’s Happening & Why This Matters
The Abuse Is Organised, Large, and Cross-Border
The first brutal fact is scale.
According to the report, which multiple outlets covered on 8 April 2026, AI Forensics examined almost 2.8 million Telegram messages across 16 Italian and Spanish communities over a six-week period. Researchers say more than 24,000 members actively posted, and the groups shared 82,723 images, videos, and audio files. The reported total reach was about 52,000 people, including roughly 27,000 in Italy and 25,000 in Spain.

That set of numbers changes the tone right away. A lot of platform-abuse coverage still sounds like isolated moderation failure or one grotesque niche. AI Forensics describes something much more industrial: cross-border communities, recurring patterns, monetisation, recruitment channels, tool sharing, and content redistribution beyond Telegram itself.
The point is ugly. Once abuse is structured, scalable, and profitable, platform friction starts to matter more than platform intent. A network like that does not need universal platform approval. A network like that only needs enough gaps to keep breathing.
Victims Are Often Women, Not Public Figures
One of the more damning details in the reporting is how ordinary many of the victims appear to be.
AI Forensics researcher Silvia Semenzin told WIRED that most victims are “ordinary women” who often do not even know that their photos are being shared or manipulated. She said much of the abuse is directed at women the perpetrators know personally, including partners, former partners, acquaintances, and friends.

Public imagination often treats platform abuse as something aimed at celebrities, influencers, or already-visible targets. The report says some public figures appear in the material, but a large share of the content centres on women with little visibility beyond the local social graph around the perpetrator. Women in the videos are often named, tagged, and made locatable through profile links shared in Telegram channels.
That is a harsher form of violence than casual internet spectators may realise. The target is not some faceless public image. The target is often a person whose name, relationships, and location are already known to the abuser or to the audience consuming the material.
That makes the platform problem more severe. A channel that helps humiliate a stranger is bad enough. A channel that helps localise and monetise abuse against someone’s ex, classmate, or colleague is more intimate, more poisonous, and more socially destabilising.
AI Tools Scale Abuse Faster
Artificial intelligence is not the whole story here. Artificial intelligence is still making the story worse.
“Nudifying” bots were frequently advertised in the observed channels as tools for generating more non-consensual content. AI Forensics said that the dynamic increased the “volume, speed and ease” of the abuse.

That line should make regulators uncomfortable for a very simple reason. AI does not need to invent a new form of misogyny to intensify harm. AI only needs to lower the cost of producing abuse, multiply the images, and widen participation.
The wider context backs that up. In early 2026, reports described Telegram as a major venue for AI-generated deepfake nude channels across multiple countries, with Telegram stating that such content violates its terms and that moderators and AI tools are used to remove it.
The ugly pattern is becoming familiar. Generative tools lower the skill barrier. Messaging platforms lower the distribution barrier. Payment features, subscriptions, and anonymity lower the commercial barrier. Put the three together, and abuse stops being fringe deviance and starts being an efficient market.
That sentence should disgust people. That sentence is still where the market appears to be drifting.
Telegram’s Moderation Response Appears Weak
Telegram shut down some of the groups AI Forensics monitored, but the same communities reportedly reappeared “just a few hours later” under the same names. AI Forensics said the pattern shows Telegram’s moderation is “insufficient.” The group recommended better reporting systems and stronger enforcement, especially around the platform’s Premium model, which can create a path for monetising abuse.

That detail may be the most politically useful part of the whole report. A platform can always argue that bad actors slip through. The harder question is whether the platform’s systems make revival easy, repeat abuse cheap, and enforcement too easy to evade.
Telegram says child sexual content and non-consensual material are prohibited under its terms of service and says AI-powered moderation plus staff enforcement are in place. Telegram did not provide an immediate response to WIRED about the AI Forensics report.
The credibility gap is right there. If the platform bans the content but the same groups reappear within hours, the rulebook starts sounding ornamental.
The AI Forensics work is not only about discovering abuse. It is about exposing what weak moderation looks like once hostile communities already know how to route around it.
The Abuse Is Not Limited to Telegram
The report says the distribution chain extends beyond one app.
Content from the Telegram channels was redistributed on TikTok and Instagram. At the same time, Reddit acted as a “recruitment gateway,” used to spread links back to the original Telegram channels, where paid content could be found.

That cross-platform pattern undermines the fantasy that a single service can solve the issue on its own through isolated moderation. A lot of digital abuse now behaves like supply-chain crime. One platform hosts the premium material. Another helps recruit. Another helps amplify. Yet another helps normalise. The abuse economy is modular.
That is one reason enforcement continues to lag. Legal systems, regulatory agencies, and platform trust-and-safety teams are often still structured around platform-by-platform responsibility. The actual abuse crosses boundaries much faster than that framework was designed to handle.
AI Forensics reportedly calls the problem “European in scope” and argues that the spread of non-consensual sexual content is no longer constrained by either national borders or platform borders. That sounds right. It sounds like a warning lawmakers will hate because it points toward a bigger regulatory job than they wanted.
Europe’s Digital Rules Face a Fresh Stress Test
AI Forensics is calling for Telegram to be designated a Very Large Online Platform, or VLOP, under the Digital Services Act. That would require the company to take on stricter transparency, risk assessment, and accountability duties. Telegram said in February 2026 that it had “significantly fewer” than 45 million users, which is the threshold that would trigger VLOP designation in the EU.

That threshold fight is already politically sensitive. A February 2026 parliamentary question in the European Parliament asked why Telegram had not been designated despite persistent concerns around disinformation, harmful content, and criminal activity.
So the AI Forensics report lands in a room that was already tense.
The bigger issue is not only whether Telegram crosses a user-number threshold. The bigger issue is whether the DSA is nimble enough to confront platforms whose most dangerous effects stem from structure, anonymity, virality, and weak moderation rather than from a single clean headline metric.
Europe likes rules. The current test is whether Europe likes enforcement enough to use them.
TF Summary: What’s Next
AI Forensics says Telegram has enabled a structured abuse ecosystem across Spain and Italy, with nearly 2.8 million messages, over 24,000 active participants, more than 82,000 shared media files, and a total reach of about 52,000 people across the two countries. The reporting says the channels traded non-consensual sexual material, promoted AI “nudifying” tools, recycled victims’ identities, and reappeared quickly after shutdowns. The findings reveal one ugly truth as a policy problem: weak moderation, combined with easy monetisation, can turn messaging platforms into abuse infrastructure.
MY FORECAST: Telegram will face louder pressure in Europe, not only over harmful content but over whether the platform’s architecture actively helps abusive communities regroup and profit. More scrutiny will dog AI-generated sexual abuse tools, cross-platform recruitment chains, and the DSA’s user-threshold logic. The public debate will keep widening, too. A lot more people are starting to see digital violence as a systems problem, not a few bad actors with phones. That realisation is overdue.