Follow Cyber Kendra on Google News! | WhatsApp | Telegram

Add as a preferred source on Google

AI Hacks AI: Security Tool Finds One-Click RCE in OpenClaw Assistant

AI pentester Hackian discovered a critical RCE vulnerability in OpenClaw in under 2 hours, as a viral assistant gains 100K+ GitHub stars.

OpenClaw (Clawdbot) hacked

In a watershed moment for AI security, an autonomous hacking agent has successfully exploited another AI system, exposing a critical vulnerability in (Clawdbot / Moltbot) OpenClaw—the open-source personal assistant that amassed over 100,000 GitHub stars in just one week. The discovery raises urgent questions about security practices as AI agents gain unprecedented access to users' digital lives.

Security researchers at Ethiack deployed Hackian, their AI-powered penetration testing tool, against a fresh OpenClaw instance on January 26. Working autonomously without human intervention, Hackian identified and confirmed a one-click account takeover leading to remote code execution in approximately 100 minutes—faster than most human security researchers could complete initial reconnaissance.

The Perfect Storm of Popularity and Exposure

OpenClaw's meteoric rise created an unusual security scenario. Originally launched as Clawdbot, then briefly renamed Moltbot following trademark concerns from Anthropic, the platform promises users a persistent AI assistant that runs on their own infrastructure while integrating seamlessly with WhatsApp, Telegram, Discord, Slack, and other messaging platforms. Enthusiasts rushed to deploy instances on public servers, often using default configurations—creating what security experts describe as an "attack surface goldmine."

Major cloud providers have jumped on the trend. Hostinger, DigitalOcean, and others now offer dedicated one-click OpenClaw deployment options, while hosting companies report unprecedented demand for VPS plans specifically configured for the assistant. 

Developer Peter Steinberger's weekend project attracted 2 million visitors in a single week, with former Tesla AI director Andrej Karpathy calling it "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently."

How the Attack Works

The vulnerability exploits OpenClaw's gateway control interface, which is enabled by default. Hackian discovered that URL parameters could override the WebSocket gateway address, and critically, that the application would immediately connect to an attacker-controlled server while transmitting the victim's authentication token.

The attack chain is deceptively simple: a victim visiting a malicious link would trigger their browser to leak their gateway token via WebSocket. Even users running OpenClaw locally aren't safe—the vulnerability enables attackers to pivot into local networks by leveraging the victim's browser as an exploitation proxy, bypassing traditional network security controls.

"This is essentially a CSRF vulnerability combined with WebSocket hijacking," explained Henrique Branquinho, AI Engineer at Ethiack. "The fact that Chrome doesn't yet enable Local Network Access protections by default means this works seamlessly against localhost deployments. No special permissions required."

AI Testing AI: The New Security Frontier

What makes this discovery particularly significant is the methodology. Hackian operated with complete autonomy, conducting reconnaissance by analysing publicly exposed source maps, reconstructing client-server communication protocols, identifying authentication weaknesses, and crafting proof-of-concept exploits—all without human guidance beyond pointing it at a target URL.

"When AI builds faster than ever, AI should test more than ever," Branquinho stated in Ethiack's disclosure. The team reported the vulnerability to OpenClaw maintainers within hours of confirmation. A patch was committed to the main branch on January 28, just two days after the initial discovery.

The Broader Security Crisis

OpenClaw's security challenges extend beyond this single vulnerability. The platform requires extensive system access by design—file systems, command execution, messaging platforms, and calendar integration. Security researchers have consistently warned that the tool is fundamentally unsuitable for non-technical users or deployment on personal machines containing sensitive data.

OpenClaw maintainer Shadow emphasised on Discord: "If you can't understand how to run a command line, this is far too dangerous a project for you to use safely." Yet the platform's viral marketing—promising an AI assistant that "actually does things"—has attracted mainstream users unprepared for the operational security requirements.

Industry experts point to persistent unsolved problems like prompt injection, where malicious messages could trick AI models into executing unintended commands. Steinberger has acknowledged these concerns, noting that "prompt injection is still an industry-wide unsolved problem" while directing users to security best practices that require significant technical expertise to implement.

What Users Should Do Now

Existing OpenClaw users should immediately update to the latest version. The vulnerability was patched in commit 8cb0fa9 and is fixed in all versions after January 28, 2026. Users should verify their instances are current and review security configurations, particularly ensuring the gateway control interface isn't publicly exposed without proper authentication.

For those considering deploying OpenClaw, security experts recommend treating it as experimental infrastructure requiring dedicated hardware isolated from personal devices and data. Running the assistant on a separate VPS or isolated network segment—never on a machine containing sensitive information—is essential. 

Users should implement strong gateway authentication tokens, restrict network access via tools like Tailscale, and maintain strict separation between the AI agent's access scope and critical systems.

The incident underscores a fundamental tension in AI development: the race to deploy powerful autonomous systems often outpaces security maturity. 

Ethiack's research demonstrates that the tools exist to test these systems comprehensively—but they must be deployed proactively, not after vulnerabilities reach production. With AI agents increasingly managing sensitive workflows, the industry faces a choice between moving fast and breaking things or building security foundations that can withstand the autonomous future they're creating.

Post a Comment