AI Turns the Tables in Cybercrime
Google reports a disturbing new reality: malware and ransomware now connect to large language models (LLMs) such as Gemini, Claude, and Qwen to refine, rewrite, and adapt their malicious code. Google's Threat Analysis Group (TAG) names three malware strains — Quietvault, Promptflux, and Promptsteal — as early examples of AI-assisted attack tooling.
These threats no longer rely on static code or manual programming. They query LLMs for suggestions, analyze antivirus defenses, and regenerate versions of themselves faster than security systems can track. The race between AI developers and cybercriminals grows more volatile with every interaction.
TAG’s researchers document how these threats connect to public AI models, issue disguised prompts, and receive operational code that executes directly within networks. The results: faster propagation, dynamic infection chains, and autonomous code mutation.
What’s Happening & Why This Matters
Malware That Writes Its Own Updates
The malware strain Promptflux represents the most sophisticated form of this AI-enabled criminal innovation. TAG’s findings describe Promptflux sending messages to Gemini’s API, asking for rewritten functions that avoid antivirus detection. The AI’s responses transform basic code into cleaner, obfuscated versions that retain functionality but bypass signature scans.
“The most novel component of PROMPTFLUX is its ‘Thinking Robot’ module, designed to periodically query Gemini to obtain new code for evading antivirus software,” Google’s report states.
By continuously generating “fresh” versions of itself, Promptflux behaves like a digital organism — always learning, always rewriting. This self-improvement loop erases the stable fingerprints that researchers depend on to build defenses.
Another example, Quietvault, uses Gemini to parse stolen files and identify sensitive credentials. It automates the extraction of passwords, API keys, and digital tokens. Once retrieved, it compresses and exfiltrates them through encrypted channels.
TAG’s forensic investigation shows Quietvault leveraging command-line AI interfaces to analyze stolen archives faster than human operators ever could.
Experts Push Back but Stay Cautious
Not everyone sees this as an unstoppable leap. Security researcher Marcus Hutchins, known for halting the 2017 WannaCry ransomware outbreak, calls Google’s framing of these attacks exaggerated.
“This is what I’m going to refer to as CTI slop,” Hutchins writes. “Tech companies inflate the importance of these AI-linked malware cases to promote the idea that generative AI is more transformative than it really is.”
Still, Hutchins concedes that as attackers refine their prompt engineering, AI-enhanced malware becomes more practical. Once criminals learn how to train or manipulate models to perform specific technical tasks, the speed and precision of malware creation will outpace traditional scripting.
Even skeptics admit that the crossover between LLMs and cybercrime signals a turning point. Code obfuscation, phishing campaign design, and vulnerability scanning all become easier through AI-driven automation.
When Malware Talks to AI
Google’s TAG links Promptsteal, another AI-assisted Trojan, to Russia’s APT28 group (Fancy Bear). Unlike Promptflux, Promptsteal poses as an AI image generator and communicates with Alibaba’s Qwen model. Once installed on a victim’s device, it sends coded queries to Qwen requesting exploit templates or executable modules.
Each response from the model contains operational logic — snippets that modify system settings, install payloads, or reroute network connections. The malware executes these instructions locally, allowing state-backed hackers to deploy campaigns through semi-autonomous AI intermediaries.
TAG identifies this as the first confirmed use of an AI chatbot in state-sponsored cyber operations. The event reveals a chilling trend: public AI systems designed for productivity double as on-demand hacking tools.
Google’s engineers disabled the compromised API keys and restricted Gemini’s ability to respond to prompts related to “code evasion,” “payload generation,” and “system infiltration.” However, the structural risk remains: once an AI model can serve as a dynamic code generator, malicious users will keep finding alternative pathways.
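Google has not published its filtering logic, but a category guardrail of the kind described above can be sketched in a few lines. The categories echo the ones Google named; the regular expressions and the `screen_prompt` helper are illustrative assumptions, not Google's actual rules:

```python
import re

# Illustrative stand-ins for the restricted categories named above;
# these patterns are assumptions, not Google's actual filtering rules.
BLOCKED_PATTERNS = {
    "code_evasion": re.compile(
        r"\b(evade|bypass|defeat)\b.*\b(antivirus|edr|detection)\b", re.I
    ),
    "payload_generation": re.compile(
        r"\b(generate|build|write)\b.*\bpayload\b", re.I
    ),
    "system_infiltration": re.compile(
        r"\b(infiltrate|escalate privileges?|persist)\b", re.I
    ),
}

def screen_prompt(prompt: str):
    """Return (allowed, matched_category) for an incoming prompt."""
    for category, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            return False, category
    return True, None

print(screen_prompt("Rewrite this function to evade antivirus detection"))
# -> (False, 'code_evasion')
```

A static blocklist like this is trivially paraphrased around — which is exactly the “alternative pathways” problem. Production guardrails pair such rules with learned classifiers.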
Defensive AI: The Counterbalance
Google’s response extends beyond blocking bad queries. TAG’s researchers deploy their own AI systems to monitor traffic, pattern shifts, and new command structures. The company uses machine learning to detect prompt-engineered malware patterns, distinguishing between legitimate developer queries and malicious automation.
Other cybersecurity giants — Microsoft, Palo Alto Networks, and CrowdStrike — develop similar AI-driven defenses. The systems read code structure and conversational intent, attempting to identify when attackers use models as “brains” for malware.
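None of these vendors has disclosed its detection features, but one public clue — that automated malware queries look different from human developer traffic — suggests simple behavioral signals. The sketch below is a hypothetical heuristic, not any vendor's method: it scores a client's query stream on timing regularity and prompt templating, two traits of a malware loop.

```python
from difflib import SequenceMatcher
from statistics import mean, pstdev

def automation_score(events: list[tuple[float, str]]) -> float:
    """Score 0..1 for how machine-like a stream of LLM queries looks.

    Hypothetical heuristic: a malware loop tends to fire on a timer
    (metronomic gaps) and reuse one prompt template (near-identical
    text), while a human developer's queries are irregular and varied.
    """
    if len(events) < 3:
        raise ValueError("need at least three queries to score a stream")

    times = [t for t, _ in events]
    prompts = [p for _, p in events]

    gaps = [b - a for a, b in zip(times, times[1:])]
    # Coefficient of variation near 0 means metronomic timing.
    cv = pstdev(gaps) / mean(gaps) if mean(gaps) > 0 else 0.0
    timing = max(0.0, 1.0 - cv)

    # Average similarity of consecutive prompts (1.0 = identical template).
    similarity = mean(
        SequenceMatcher(None, a, b).ratio()
        for a, b in zip(prompts, prompts[1:])
    )

    return 0.5 * timing + 0.5 * similarity
```

A script polling an LLM every 60 seconds with the same rewrite prompt scores near 1.0; an engineer's bursty, varied questions score far lower.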
But the challenge intensifies. Every AI safeguard creates a new hurdle for attackers to overcome — and every attack produces data that improves both sides’ intelligence. The cyber battlefield now functions as a continuous AI feedback loop.
TF Summary: What’s Next
AI-enhanced malware recasts the cybersecurity arms race. Attackers use language models not as tools but as collaborators — fast, adaptable, tireless. Promptflux, Quietvault, and Promptsteal show how accessible LLM APIs transform ordinary malware into dynamic, self-correcting systems.
For defenders, victory depends on speed and foresight. Detection must encompass prompts, responses, and behavioral intent, not only signatures or code fragments.
MY FORECAST: Security teams adopt conversational forensics as a discipline. Analysts decode how malware “talks” to AI, building prevention tools that intercept those conversations. The next generation of antivirus software won’t scan files — it’ll analyze dialogue between code and AI.
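As a sketch of what that discipline could look like: assume intercepted malware-to-LLM traffic lands in a JSONL capture with `prompt` and `response` fields (a hypothetical schema, not a real product format). A first-pass forensic filter only needs to find the dialogues where an evasion-flavored request came back answered with executable code:

```python
import json
import re

# Hypothetical signals: the request asks for evasion/obfuscation help,
# and the reply contains something that looks like runnable code.
SUSPECT_ASK = re.compile(r"\b(obfuscat\w*|evade|undetect\w*|payload)\b", re.I)
LOOKS_LIKE_CODE = re.compile(r"`{3}|^\s*(import |def |from |powershell)", re.M)

def flag_exchanges(log_path: str) -> list[dict]:
    """Return intercepted prompt/response pairs worth an analyst's time.

    Expects one JSON object per line, e.g.
    {"process": "svc.exe", "prompt": "...", "response": "..."}.
    """
    flagged = []
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)
            if SUSPECT_ASK.search(event["prompt"]) and LOOKS_LIKE_CODE.search(
                event["response"]
            ):
                flagged.append(event)
    return flagged
```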
The fight has gained a new language. Now everyone has to learn to speak it.