
AI-Powered GitHub Bot Quietly Targeted 500+ Repositories for Three Weeks Before Anyone Noticed

AI-assisted supply chain campaign exploited GitHub's pull_request_target misconfiguration across 500+ repos, stealing npm tokens and cloud credentials


A threat actor armed with AI-assisted automation spent three weeks silently probing open-source repositories before security researchers caught on — and by then, the damage was already done.

Wiz Research published findings this week revealing that the "prt-scan" campaign, first flagged publicly by security researcher Charlie Eriksen on April 2, 2026, was actually the final phase of a six-wave operation that began on March 11.

By the time anyone raised the alarm, the attacker had opened well over 500 malicious pull requests across GitHub, successfully compromised at least two npm packages spanning 106 published versions, and confirmed theft of AWS keys, Cloudflare API tokens, and Netlify authentication credentials.

The attack hinged on a well-known and long-standing class of GitHub Actions misconfiguration: unsafe use of the pull_request_target workflow trigger.

Unlike the standard pull_request event, this trigger runs workflows with access to the base repository's secrets — even when the pull request originates from a complete stranger's fork. On its own that is safe, because the workflow definition comes from the base branch; it becomes exploitable the moment the workflow checks out and executes code from the incoming pull request, letting an attacker run arbitrary code inside someone else's CI pipeline while secrets are in scope.
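The vulnerable pattern looks roughly like this — an illustrative sketch of the misconfiguration class, not a workflow taken from any specific targeted repository:

```yaml
# Illustrative only: a minimal workflow of the vulnerable shape.
name: CI
on: pull_request_target     # runs with the BASE repository's secrets

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Danger: explicitly checks out the UNTRUSTED fork's code...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...then executes it while a secret is available in the environment.
      - run: npm install && npm test
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```

Because `npm install` runs lifecycle scripts from the fork's package.json, any code the attacker put there executes with `NPM_TOKEN` exposed.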

The playbook was surgical in concept. The attacker — operating across six GitHub accounts that all shared the same ProtonMail root address — would identify vulnerable repositories, fork them, inject malicious payloads into CI-adjacent files such as conftest.py, package.json, or build.rs, then open a pull request innocuously titled "ci: update build configuration."

If the workflow triggered, a five-phase payload would silently dump environment variables, enumerate cloud metadata endpoints across AWS, Azure, and GCP, and exfiltrate everything via encoded workflow log markers and PR comments.

What makes this campaign stand out is the evolution. Early waves in March relied on crude bash scripts with no obfuscation. By the final wave on April 2–3, the attacker was deploying AI-generated, repository-aware payloads that adapted to each target's tech stack — Go test files for Go projects, pytest fixtures for Python, npm scripts for Node.js — running at roughly seven pull requests per hour for over 22 sustained hours.

Despite the sophistication on the surface, Wiz researchers found significant blind spots. The attacker repeatedly injected Rust build files into Python repositories, attempted label-bypass techniques that are functionally impossible given GitHub's default permission scopes, and probed cloud metadata endpoints even on GitHub-hosted runners where such endpoints don't exist. The campaign had automation and scale; it lacked understanding.

The overall success rate stayed below 10%, and high-profile targets, including Sentry, NixOS, and OpenSearch, blocked every attempt using contributor approval gates. Still, Wiz's conclusion is sobering: at 500+ attempts, even a 10% hit rate means dozens of real compromises.

If your repositories use pull_request_target, audit your workflows immediately. Key defenses include requiring maintainer approval before any workflow runs for first-time contributors, restricting workflow permissions explicitly to read-only, and avoiding the use of repository secrets in any workflow that can be triggered by an external pull request.
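A safer configuration along those lines might look like this — an illustrative hardening sketch for a typical Node.js test job, to be adapted to your own workflows:

```yaml
# Illustrative hardening sketch, not a drop-in fix for every repo.
name: CI
on: pull_request            # fork PRs get no base-repo secrets

permissions:
  contents: read            # explicitly scope the token to read-only

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # default checkout, no secrets needed
      - run: npm ci && npm test     # no repository secrets in this job
```

Pair this with the repository-level setting that requires maintainer approval before workflows run for first-time contributors, so an unknown fork cannot trigger anything without a human looking first.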

Check for the following compromise indicators: branch names matching prt-scan-[12-character-hex], PR titles reading "ci: update build configuration," and workflow log lines containing ==PRT_EXFIL_START== or ==PRT_RECON_START== markers.
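Teams automating PR and log triage could match those indicators with a short script along these lines (a minimal sketch: the patterns come from the indicators above, while the function names are our own):

```python
import re

# Indicators of compromise reported for the prt-scan campaign.
BRANCH_RE = re.compile(r"^prt-scan-[0-9a-f]{12}$")
PR_TITLE = "ci: update build configuration"
LOG_MARKERS = ("==PRT_EXFIL_START==", "==PRT_RECON_START==")

def branch_is_suspicious(branch: str) -> bool:
    """True if the branch name matches the campaign's naming scheme."""
    return bool(BRANCH_RE.match(branch))

def title_is_suspicious(title: str) -> bool:
    """True if the PR title matches the campaign's fixed title."""
    return title.strip().lower() == PR_TITLE

def log_is_suspicious(log_text: str) -> bool:
    """True if a workflow log contains either exfiltration marker."""
    return any(marker in log_text for marker in LOG_MARKERS)
```

For example, `branch_is_suspicious("prt-scan-a1b2c3d4e5f6")` returns True, while an ordinary feature branch does not match.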
