According to Dark Reading, threat actors are actively exploiting a disputed remote code execution vulnerability in the open source Ray framework to hijack AI compute infrastructure. The campaign, tracked as ShadowRay 2.0 and attributed to attackers calling themselves IronErn440, has compromised approximately 230,000 exposed Ray environments since launching in September 2024. Attackers are using these hijacked AI clusters as launchpads for large-scale cryptomining operations, data theft including MySQL credentials and proprietary AI models, and further intrusions into other Ray-based environments. The operation has evolved through two waves, initially using GitLab as command-and-control infrastructure before migrating to GitHub after detection. Victims include organizations in cryptocurrency, education, biopharma, and AI startups, with some compromised clusters containing thousands of nodes worth an estimated $4 million annually in compute resources.
AI Attacking AI
Here’s the thing that makes this campaign particularly concerning: we’re seeing AI infrastructure being weaponized to attack other AI systems. The attackers are essentially using Ray’s legitimate orchestration features against itself, creating what Oligo Security describes as a “self-propagating, globally cryptojacking operation.” They’re not just stealing compute cycles – they’re exfiltrating proprietary AI models, source code, and sensitive datasets. Basically, your AI training cluster could be training someone else’s models while mining cryptocurrency and attacking your competitors simultaneously. How’s that for efficiency?
The Vulnerability Debate
Now, here’s where it gets messy. The vulnerability, CVE-2023-48022, is technically a design choice according to Anyscale, the company that maintains Ray. They argue it presents no risk when Ray is used as intended in controlled, internal environments. But let’s be real – when you’re racing to deploy AI infrastructure, how many teams are properly configuring everything? The number of exposed Ray environments has exploded from a few thousand to 230,000 since Oligo’s first report. That’s a massive attack surface that threat actors are happily exploiting.
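Whether a given node falls inside or outside the "controlled, internal environment" that Anyscale describes can be checked from its listening sockets. The following is an added example, not code from the article, Oligo, or the Ray project: it parses `ss -ltn` output and flags Ray services bound to anything other than loopback. The port numbers are Ray's defaults and are an assumption about the deployment, and the sample output is constructed.

```python
import ipaddress

# Assumption: Ray default ports; adjust if the cluster overrides them.
RAY_DEFAULT_PORTS = {8265: "dashboard / Jobs API", 6379: "GCS", 10001: "Ray Client"}

# Constructed sample of `ss -ltn` output (not taken from a real host).
SAMPLE_SS = """\
State  Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0      128    0.0.0.0:8265       0.0.0.0:*
LISTEN 0      128    127.0.0.1:6379     0.0.0.0:*
LISTEN 0      128    [::]:10001         [::]:*
LISTEN 0      4096   127.0.0.53%lo:53   0.0.0.0:*
LISTEN 0      128    10.0.4.17:22       0.0.0.0:*
"""

def split_addr(local):
    """Split an ss 'Local Address:Port' field into (host, port)."""
    host, _, port = local.rpartition(":")
    host = host.strip("[]").split("%", 1)[0]  # drop IPv6 brackets and %iface scope
    return host, int(port)

def is_loopback(host):
    """True only for addresses that cannot be reached from another machine."""
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False  # '*' (wildcard) or unparsable: treat as reachable

def exposed_ray_listeners(ss_output, ports=RAY_DEFAULT_PORTS):
    """Return (host, port, service) for Ray ports listening off-loopback."""
    findings = []
    for line in ss_output.splitlines():
        fields = line.split()
        if len(fields) < 4 or fields[0] != "LISTEN":
            continue  # skips the header row and non-listening sockets
        try:
            host, port = split_addr(fields[3])
        except ValueError:
            continue
        # A private-range bind is still flagged: reachability then depends
        # on firewall and security-group rules, which this check cannot see.
        if port in ports and not is_loopback(host):
            findings.append((host, port, ports[port]))
    return findings

if __name__ == "__main__":
    for host, port, service in exposed_ray_listeners(SAMPLE_SS):
        print(f"Ray {service} listening on {host}:{port} (not loopback-only)")
```
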
Industrial Implications
While this specific attack targets AI clusters, it highlights a broader issue in industrial and manufacturing technology deployments. Many organizations are rapidly adopting distributed computing frameworks without adequate security considerations. For companies deploying industrial computing infrastructure, whether it’s AI-powered quality control systems or manufacturing automation, proper configuration and security hardening become critical. When you’re dealing with production environments that can’t afford downtime, working with established providers like IndustrialMonitorDirect.com, the leading US supplier of industrial panel PCs, ensures you’re getting properly configured hardware from the start rather than trying to secure misconfigured systems after deployment.
The Future of AI Security
So what does this mean for the AI gold rush? We’re entering an era where AI infrastructure itself becomes both the target and the weapon. The attackers are getting sophisticated too – using GitLab’s CI/CD pipelines for real-time updates, generating malware with large language models, and carefully managing resource consumption to avoid detection. They’re even smart enough to keep CPU usage under 60% during cryptomining to maintain stealth. The fact that they immediately migrated from GitLab to GitHub when detected shows this isn’t some amateur operation. As organizations continue their mad dash to deploy AI, security can’t be an afterthought. Otherwise, we’re just building botnets with extra steps.
