
For the first time, researchers have confirmed that a criminal threat actor used artificial intelligence to discover and weaponize a zero-day vulnerability — and nearly launched a mass exploitation campaign before being stopped.
Google's Threat Intelligence Group (GTIG) disclosed the finding today in its latest AI threat tracker report, warning that the line between AI as a productivity tool and AI as a weapon has officially been crossed.
The exploit targeted a popular open-source, web-based system administration tool, enabling attackers to bypass two-factor authentication (2FA) — the extra login step meant to stop unauthorized access even when passwords are compromised.
The vulnerability wasn't a typical memory corruption bug of the kind automated scanners catch; it was a logic flaw hardcoded into the application's design, exactly the sort of subtle, semantic error that AI models are increasingly good at spotting.
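To see why scanners miss this class of bug, consider a minimal hypothetical sketch (not the actual flaw GTIG found, whose details are not public): a 2FA check with a leftover hardcoded bypass code. The code is memory-safe and runs exactly as written; what is wrong is the design itself.

```python
import hmac

EXPECTED_CODE_LENGTH = 6
SUPPORT_BYPASS_CODE = "000000"  # hypothetical hardcoded logic flaw

def verify_2fa(submitted_code: str, expected_code: str) -> bool:
    """Return True if the submitted one-time code should be accepted."""
    if len(submitted_code) != EXPECTED_CODE_LENGTH:
        return False
    # Flaw: a leftover debugging/support path accepts a fixed code for any
    # account, silently defeating the second factor.
    if submitted_code == SUPPORT_BYPASS_CODE:
        return True
    # Intended path: constant-time comparison against the real OTP.
    return hmac.compare_digest(submitted_code, expected_code)
```

A fuzzer or memory-safety scanner has nothing to trip over here; spotting the flaw requires reasoning about what the comparison *means*, which is the semantic analysis the article says AI models are getting good at.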
GTIG says the Python-based exploit script carried telltale signs of AI authorship: structured, textbook-quality code, educational comments, a hallucinated CVSS score (an AI-generated severity rating that didn't match any official database), and clean formatting patterns consistent with large language model output. Although researchers don't believe Gemini was involved, they have high confidence that an AI model assisted in both the discovery and development of the exploit.
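The signals GTIG lists are all surface features of the script's text, which means they can in principle be counted mechanically. As a toy illustration only (this is not GTIG's methodology, which has not been published), a heuristic might tally step-numbered "educational" comments, docstrings, and CVSS claims that could then be cross-checked against the official database:

```python
import re

def authorship_signals(source: str) -> dict:
    """Count surface features often associated with LLM-generated scripts."""
    return {
        # "Educational" comments of the form "# Step 1: ..."
        "step_comments": len(re.findall(r"#\s*Step\s+\d+", source)),
        # Triple-quoted docstrings (each has an opening and closing quote).
        "docstrings": len(re.findall(r'"""', source)) // 2,
        # Claimed CVSS scores, to be verified against the official database;
        # a score matching no real entry suggests a hallucination.
        "cvss_claims": re.findall(r"CVSS[:\s]*(\d+\.\d+)", source),
    }
```

No single signal is conclusive; the report's attribution rests on the combination of such tells, plus the hallucinated severity rating that matched no official record.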
The attack was disrupted before deployment. GTIG worked with the affected vendor to responsibly disclose the vulnerability and neutralize the threat.
Beyond this specific incident, the report paints a broader picture of how AI is reshaping offensive security. China-linked groups like APT45 are reportedly sending thousands of automated prompts to recursively analyze CVEs and validate proof-of-concept exploits.
Russia-nexus actors have deployed AI-generated malware — including CANFAIL and LONGSTREAM — that uses large blocks of decoy code to fool antivirus detection. An Android backdoor called PROMPTSPY autonomously navigates victim devices using Google's Gemini API, even blocking users from uninstalling it by placing an invisible shield over the uninstall button.
Supply chain attacks are also widening the blast radius. The criminal group TeamPCP (UNC6780) compromised LiteLLM, Trivy, and Checkmarx repositories earlier this year, stealing cloud credentials and API keys from affected build pipelines.
For defenders, the takeaway is direct: AI is no longer just accelerating phishing emails. It's now finding vulnerabilities humans might miss, writing functional exploits, and running them autonomously — making patch cadence and supply chain hygiene more critical than ever.