According to CRN, AMD has hired Amazon Web Services infrastructure leader Arvind Balakumar as its new corporate vice president of engineering for AI infrastructure. He was hired in November and will lead “cluster-scale AI infrastructure solutions” for AMD’s upcoming “Helios” AI server rack platform. Helios is AMD’s first rack-scale platform for its Instinct GPUs and is set to debut next year as a direct competitor to Nvidia’s powerful Vera Rubin platform. CEO Lisa Su has said she expects tens of billions in annual revenue from Instinct GPUs and related products by 2027 and sees a “very clear path” to double-digit market share. Balakumar spent the last five-and-a-half years at AWS, most recently as general manager of infrastructure scalability, overseeing global compute, networking, and data center expansion across 120 Availability Zones.
AMD Doubles Down on the Full Stack
This hire is a huge signal of intent. For years, AMD’s challenge in AI was seen as mostly about the silicon—could its MI300X GPU compete with an H100? But here’s the thing: Nvidia’s dominance isn’t just about the chips. It’s about the whole, tightly integrated system, from the networking to the power delivery to the software. By poaching a top AWS executive whose entire job was scaling the world’s largest cloud, AMD is admitting it needs to compete on that systems level. Balakumar’s experience isn’t in designing individual chips; it’s in building the colossal, global infrastructure that makes cloud AI possible. That’s the exact skillset AMD desperately needs to make Helios a real alternative.
The Revenue Bet and the Nvidia Problem
Lisa Su’s tens-of-billions revenue target for 2027 is staggering. It shows how much of the company’s future is riding on this. But let’s be real: Nvidia is forecasting $500 billion from its Blackwell and Rubin platforms *between this year and next*. AMD’s entire 2027 target is a fraction of that two-year haul for Nvidia. So, the “double-digit share” Su talks about is the real goal. Can AMD capture even 10-15% of this massive market? That alone would be a monumental success. The hire suggests they’re serious about building the kind of large-scale, turnkey solutions that big cloud providers and enterprises want to buy, not just the components they have to assemble themselves. It’s about moving up the value chain.
Why an AWS Exec, and What It Means
Balakumar’s background is fascinating. AWS is arguably Nvidia’s biggest and most sophisticated customer. They buy GPUs by the truckload, but they also design their own chips (Trainium, Inferentia) and build their own enormous infrastructure. So, who better to hire than someone who knows exactly how the biggest buyer on the planet thinks? He knows the pain points of running AI at cloud scale. He knows what’s missing from current offerings. For AMD, this isn’t just about getting an engineering manager; it’s about getting insider intelligence on the procurement and operational needs of the hyperscalers they need to win over. It’s a strategic masterstroke, on paper at least.
The Hardware Race Beyond the Chip
This battle is increasingly about the rack, the power system, the liquid cooling, and the networking fabric. It’s industrial-scale computing. AMD’s push with Helios underscores that winning AI isn’t just about flops; it’s about delivering a reliable, integrated physical system. That’s a much harder game, but it’s the only one that matters at this scale. Balakumar’s job is to make that physical and systemic integration seamless. If he can translate his AWS experience into a winning platform, AMD might finally have the complete package to make Nvidia sweat.
