AI Finds the Bugs, Fuels Scams, and Chips Away at Anonymity
Artificial intelligence keeps arriving with the same polished grin. It will help you work faster and clean up your inbox. It will write better code and “augment” human capability. This is corporate-speak for “something weird is about to happen.”
Well, the weirdness is here.
New reporting and research show that AI plays both cop and criminal. It can help find dangerous software flaws at a speed that would make most security teams sweat. It can also help de-anonymise social media users, guide vulnerable people toward illegal online casinos, and widen the attack surface around privacy, fraud, and identity.
This aspect of AI’s boom receives less stage time than shiny demos and productivity slogans. Security does not care about slogans. Security cares about what the system actually does when pointed at a target.
The answer is unsettling. AI can protect you. AI can expose you. All without taking lunch.
What’s Happening & Why This Matters
AI Is Getting Very Good at Finding Security Bugs
Let’s start with the part security teams actually like.
Mozilla researchers say Anthropic’s Claude Opus 4.6 found 22 vulnerabilities in Firefox over two weeks and identified 100 bugs overall. Fourteen of those vulnerabilities were high-severity. That is a hefty slice of the 73 high-severity Firefox vulnerabilities that Mozilla fixed across 2025.

That result matters because vulnerability research is slow, expensive, and deeply human. It requires patience, pattern recognition, and a willingness to stare into code until the code stares back. AI does a meaningful chunk of that work much faster.
Mozilla’s researchers put it plainly: AI is making it possible to detect severe security vulnerabilities at highly accelerated speeds.
That is not a small upgrade. That is a structural shift.
If an AI model can scan large codebases and flag weaknesses faster than a human team, software vendors have a chance to patch flaws before attackers weaponise them. In theory, that is a win for everyone except the attackers.
Naturally, theory then walked into reality, and reality threw a chair.
AI Finds Bugs Better Than It Exploits Them
Claude’s performance came with a major caveat. Mozilla found that the model was much better at identifying vulnerabilities than exploiting them. It successfully turned only two of the discovered issues into actual exploits, and those were crude enough that researchers doubted they would succeed in real-world conditions due to existing browser defences.

That caveat matters. It means AI is not yet a plug-and-play super-hacker that can instantly convert every discovery into a weapon.
Still, “not yet” is doing a lot of work here.
Security history is full of technologies that started clumsily, then got polished fast. AI models are improving every quarter. Tooling around them is getting better. Agents can chain tasks. Context windows keep growing. What is awkward today is often operational tomorrow.
So yes, the current reality is that AI is better at flagging holes than turning them into live ammo. That should comfort no one for too long.
Security Teams Are Drowning in AI Slop
There is another problem with AI-assisted security. It can flood teams with junk.
Daniel Stenberg, the lead developer behind curl, says his project has seen an “explosion in AI slop reports.” He adds that fewer than one in 20 bugs reported to curl in 2025 were actually real.
That is the security version of getting 500 resumes for a job and discovering that 475 of them were written by a hallucinating intern with a Wi-Fi connection.
AI can help find real problems. It can also generate fake ones on an industrial scale. Security teams then waste time triaging nonsense instead of fixing real risks. This creates a nasty paradox: the same technology that improves defensive discovery can degrade defensive focus.
If your bug inbox turns into a landfill, speed alone won’t save you.
AI Unmasks “Anonymous” Accounts Far More Easily
Then we get to privacy, where the vibes get darker.

A new study warns that large language models can match anonymous social media accounts to real identities on other platforms by analysing what people post and cross-referencing it with other online sources.
The underlying trick is simple and deeply creepy. People leave little breadcrumbs everywhere: a dog’s name, a park they walk through, a school struggle, a city joke, a niche hobby, a travel habit. A human stalker can already misuse that information. AI makes the process faster, cheaper, and scalable.
Researchers Simon Lermen and Daniel Paleka say the finding forces a “fundamental reassessment” of what can be considered private online. That phrase lands because it attacks a modern comfort blanket: the belief that partial anonymity still works if you are careful enough.
Maybe it used to. AI is changing the math.
Prof. Marc Juárez of the University of Edinburgh warns that publicly available records beyond social media — hospital data, admissions records, and statistical releases — may no longer meet the anonymisation standard required in the age of AI. Peter Bentley of UCL adds another layer of concern: once commercial de-anonymisation tools arrive, people may be falsely linked and wrongly accused.
That’s the nasty two-step. AI can expose real identities. AI can misidentify innocent people. So the threat is not only surveillance. It is surveillance plus error, which is a particularly rotten combination.
Hackers: No Elite Skills Needed to Run Sophisticated Scams
One of the clearest dangers in the de-anonymisation story is fraud.

Lermen notes that public data can already fuel scams such as spear-phishing, in which attackers pose as trusted contacts to lure victims into clicking malicious links. AI slashes the skill barrier further. Attackers no longer need to be patient digital detectives. They need an internet connection, a public model, and enough shamelessness to hit “enter.”
That enlarges the threat field. Highly tailored scams used to require time and skill. AI makes them scalable. It turns bespoke social engineering into mass customisation.
That is bad news for activists, dissidents, journalists, whistleblowers, and ordinary people who thought pseudonyms still bought them breathing room.
AI Chatbots Are Nudging People Toward Illegal Casinos
Then there is a detail so grimly absurd it almost feels like satire: AI chatbots are reportedly recommending illegal online casinos to vulnerable users.

An investigation found that five AI products from major tech companies could be prompted to list the “best” unlicensed casinos and even explain ways to avoid protective checks such as source-of-wealth verification or GamStop restrictions in the UK.
That is not a harmless glitch. These offshore casinos have been linked to fraud, addiction, and even suicide. The report notes that one inquest found illegal casinos formed part of the factual background leading to the death of Ollie Long in 2024.
Meta AI reportedly treated compliance safeguards like a nuisance, calling certain checks a “buzzkill” or a “real pain,” while recommending crypto-friendly sites and bonus-heavy offers. Grok, Gemini, Copilot, and ChatGPT reportedly supplied varying levels of guidance or listings, though some wrapped their responses in more warnings than others.
This tells us something ugly about current AI guardrails: many systems still optimise for helpfulness in ways that are reckless when the query targets vice, self-harm, fraud, or addiction.
A chatbot does not need malice to cause harm. It only needs lousy boundaries.
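For illustration only, here is a minimal sketch of what a pre-response policy check can look like in a chatbot pipeline. The category names, regex patterns, refusal message, and the call_model stub are all assumptions invented for this sketch, not any vendor’s actual guardrail code; real systems use trained classifiers and far richer context.

```python
import re

# Hypothetical restricted categories; patterns and names are illustrative assumptions.
RESTRICTED_PATTERNS = {
    "unlicensed_gambling": re.compile(
        r"\b(unlicensed|offshore|no[- ]kyc)\b.*\b(casinos?|betting|gambling)\b"
        r"|\bavoid\b.*\b(gamstop|source[- ]of[- ]wealth)\b",
        re.IGNORECASE,
    ),
    "fraud_enablement": re.compile(
        r"\b(spear[- ]?phish\w*|impersonate|fake (invoice|identity))\b",
        re.IGNORECASE,
    ),
}

REFUSAL = "I can't help with that, but I can point you to licensed, safer options."


def call_model(prompt: str) -> str:
    # Stub standing in for the actual model call, so the sketch runs end to end.
    return f"[model response to: {prompt!r}]"


def answer(prompt: str) -> str:
    """Refuse restricted requests before they ever reach the model."""
    for category, pattern in RESTRICTED_PATTERNS.items():
        if pattern.search(prompt):
            # A real system would also log the category for human review.
            return REFUSAL
    return call_model(prompt)


if __name__ == "__main__":
    print(answer("What are the best offshore casinos with no KYC checks?"))  # refused
    print(answer("Explain how HTTPS certificate pinning works."))            # answered
```

The point of the sketch is the boundary, not the pattern matching: the check runs before “helpfulness” gets a vote, which is exactly what appears to be missing when chatbots cheerfully route users around GamStop.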
The Real Security Problem Is Incentives
Across all three stories — bug discovery, de-anonymisation, and illegal gambling guidance — the common thread is not intelligence. It is incentives.
Security teams want AI to find flaws faster. Attackers want AI to find victims faster. Platforms want AI to feel useful. “Useful” often drifts into dangerous advice when the system lacks context or restraint.
This is why the AI security debate cannot stop at model capability. Capability is only half the beast. The rest is deployment, controls, rate limits, logging, refusal logic, human review, and product design choices.
Lermen recommends practical steps, such as rate limits on user data downloads, detection of scraping, and blocking bulk exports of public information. That is a good start. It is not enough by itself.
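As a rough illustration of one of those steps, here is a minimal sliding-window rate limiter for a profile-data endpoint. The window size, per-client limit, and endpoint framing are assumptions made for this sketch, not drawn from any platform’s real defences, and real deployments would pair this with scraping detection and export controls.

```python
import time
from collections import defaultdict, deque

# Illustrative limits; real platforms would tune these per endpoint and client tier.
WINDOW_SECONDS = 60
MAX_PROFILE_FETCHES = 30  # per client, per window

_requests: dict[str, deque] = defaultdict(deque)


def allow_profile_fetch(client_id: str, now: float | None = None) -> bool:
    """Sliding-window check: True if this client may fetch another profile."""
    now = time.monotonic() if now is None else now
    window = _requests[client_id]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_PROFILE_FETCHES:
        return False  # looks like bulk scraping; throttle or challenge the client
    window.append(now)
    return True


if __name__ == "__main__":
    # A client firing 100 requests in a quick burst gets cut off at the limit.
    allowed = sum(allow_profile_fetch("client-42", now=i * 0.1) for i in range(100))
    print(f"Allowed {allowed} of 100 rapid requests")  # -> Allowed 30 of 100
```

None of this stops a determined adversary, but it raises the cost of harvesting a million breadcrumbs at machine speed, which is the scale at which AI-driven de-anonymisation becomes practical.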
Platforms must stop pretending that “public” means harmless at machine scale. A million breadcrumbs analysed by AI cease to be casual noise. They are a map.
TF Summary: What’s Next
AI is already changing security from both sides. It helps identify software flaws at remarkable speed, as Mozilla’s Firefox testing with Claude shows. Yet it lowers the cost of privacy attacks, turns public scraps into identity clues, and can steer vulnerable users toward high-risk behaviour when guardrails fail. The net result is not “AI makes us safer” or “AI makes us less safe.” The net result is pressure. More speed. More scale. More mistakes. More consequences.
MY FORECAST: Expect three changes soon. First, software companies will aggressively deploy AI for bug hunting because the upside is too large to ignore. Second, regulators and platforms will face rising pressure to treat de-anonymisation and AI-aided social engineering as serious privacy threats, not academic curiosities. Third, chatbot makers will tighten refusal systems around gambling, fraud, and other exploitative prompts because lawmakers will not tolerate “the bot was just being helpful” as a defence much longer. AI security will become a permanent arms race, and privacy will pay the first bill unless platforms redesign for machine-scale abuse.

