
Google: Hackers Now Using AI to Create Shape-Shifting Malware

Google warns hackers are using AI to build adaptive, self-modifying malware that evades detection.


A new report from Google’s Threat Intelligence Group (GTIG) reveals that cybercriminals and state-backed hackers are now using artificial intelligence not just to code—but to think, adapt, and evolve in real time. The research marks the first confirmed use of AI-driven malware capable of rewriting itself mid-operation to evade detection, signaling a major turning point in cyber warfare.

According to GTIG, adversaries are “no longer leveraging AI just for productivity gains—they’re deploying novel AI-enabled malware in active operations.”

One of the most alarming discoveries is a new malware strain dubbed PROMPTFLUX, written in VBScript. Unlike traditional malware, it uses Google's Gemini API to generate new versions of itself on demand, creating what researchers refer to as "just-in-time" malware.

This self-modifying malware can obfuscate its code, disguise itself from antivirus tools, and even “think” by requesting fresh instructions from AI models. GTIG described its inner workings as “a metamorphic script that can evolve over time.”

Capabilities of notable AI tools

Another tool, PROMPTSTEAL, linked to the Russian state-backed group APT28 (also known as FROZENLAKE), used an open-source AI model to generate commands on the fly for stealing sensitive data. Instead of hardcoding malicious commands, PROMPTSTEAL dynamically asked the model to craft them, a tactic that could make signature-based detection far more difficult.

How Hackers Are Outsmarting AI Safeguards

The report also highlights how attackers are learning to social-engineer AI systems themselves. Chinese-linked actors, for instance, posed as cybersecurity students in “capture-the-flag” competitions to trick Gemini into sharing restricted information. Others claimed to be “writing a research paper” to bypass safety filters.

These pretexts, as noted by GTIG, demonstrate how threat actors are combining psychological manipulation with AI misuse to breach defenses.

While much of this AI-powered malware is still experimental, GTIG warns that it’s only a matter of time before these techniques become mainstream in cybercrime marketplaces. In fact, underground forums are already advertising AI-assisted phishing kits and malware builders for sale.

Google says it has disabled assets tied to these operations and strengthened Gemini's safeguards, ensuring the model can "refuse to assist" with malicious prompts. The company is also urging the industry to adopt its Secure AI Framework (SAIF), its set of guidelines for deploying AI securely.

“Generative AI is now both a target and a tool in the threat landscape,” GTIG concluded. “The difference between innovation and exploitation will depend on how responsibly we build and defend AI systems.”
