
Nation-state hackers have crossed a troubling threshold: they're now weaponising commercial AI to generate malicious code dynamically during active attacks, according to Google's Threat Intelligence Group.
The newly discovered HONESTCUE malware represents a concerning evolution in cyber threats. Rather than carrying pre-written attack code that security tools can detect, the malware sends prompts to Google's Gemini API mid-execution and receives freshly generated C# code in response. That code then executes entirely in memory, never touching disk, a technique that renders traditional antivirus signatures largely useless.
"The adversary's incorporation of AI is likely designed to support a multi-layered approach to obfuscation by undermining traditional network-based detection," GTIG researchers explained in their quarterly report covering late 2025.

The attack chain is simple but effective. HONESTCUE sends innocuous-looking prompts such as "write a C# program to download and execute a file" to Gemini's API. The model duly generates clean code that looks legitimate in isolation. But when compiled and executed by the malware framework, that code downloads and launches secondary payloads, all without writing files to disk that security software could scan.
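The generate-then-execute pattern GTIG describes can be sketched in miniature. This is a hedged illustration, not HONESTCUE's actual code: the real malware calls the Gemini API over the network and compiles C#, whereas here a hypothetical `ask_model()` stub stands in for the API call and the "payload" is a benign arithmetic function. The point the sketch makes is structural: the attack code exists only as a string and an in-memory code object, so there is no file for a disk scanner to hash.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call (e.g. Gemini).

    A real attacker would send `prompt` over the network and receive
    freshly generated source code back; here we return a benign,
    fixed snippet so the sketch is self-contained.
    """
    return "def payload():\n    return sum(range(10))\n"


def run_in_memory(source: str):
    """Compile and run source code without ever writing it to disk."""
    namespace = {}
    # compile() + exec() keep everything in process memory: no file
    # artifact is created, so signature-based scanning has nothing to match.
    code_obj = compile(source, filename="<generated>", mode="exec")
    exec(code_obj, namespace)
    return namespace["payload"]()


result = run_in_memory(ask_model("write a function named payload"))
```

Because each model response can differ in variable names, structure, and phrasing, even a signature written against one captured sample of the generated code is unlikely to match the next one, which is precisely the detection gap the researchers highlight.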
Beyond live malware generation, Google identified over 100,000 attempts to steal its AI models' reasoning capabilities through "distillation attacks"—essentially intellectual property theft conducted through API access rather than traditional hacking.
Meanwhile, threat groups from Iran, North Korea, China, and Russia are systematically using LLMs for reconnaissance, crafting personalised phishing messages, and mapping organisational hierarchies.
Iranian group APT42 exemplifies this AI-augmented approach, using Gemini to research targets' professional backgrounds and to generate culturally fluent social-engineering pretexts free of the telltale errors that usually give phishing away.
Google has disabled accounts linked to these activities and strengthened Gemini's safeguards. However, the emergence of AI-calling malware signals that defenders now face threats that can adapt and evolve during the attack itself—a paradigm shift requiring entirely new defensive strategies.