
AMD Aligns with Meta’s Open Rack Vision via “Helios”, Unveils Rack-Scale AI Platform Built on Open Standards

Pranav Hotkar 15 Oct, 2025

San Jose, California, October 14, 2025 – In a bold step toward open, scalable AI infrastructure, AMD today unveiled Helios, a rack-scale reference platform built to Meta’s Open Rack Wide (ORW) standard. The announcement, made during the Open Compute Project (OCP) Global Summit, signals AMD’s intent to push the industry toward interoperable, standards-based data center architectures.

Helios is engineered to support the demands of next-generation AI and high-performance computing workloads. It integrates AMD’s Instinct GPU series, EPYC CPUs, and Pensando networking components in a double-wide rack optimized for power distribution, cooling, and maintainability. The system can deliver up to 1.4 exaFLOPS of FP8 compute and provides 31 TB of HBM4 memory per rack, figures that make it suitable for AI models at the trillion-parameter scale.

By aligning with Meta’s ORW specification, contributing to OCP, and focusing on open standards, AMD is extending its open hardware philosophy from silicon to full rack systems. The Helios platform supports open fabrics, OCP DC-MHS, UALink, and Ultra Ethernet Consortium architectures, and features quick-disconnect liquid cooling to handle high thermal loads.

Forrest Norrod, Executive Vice President and GM of AMD’s Data Center Solutions Group, underscored the significance of open collaboration: “With ‘Helios,’ we’re turning open standards into real, deployable systems, combining AMD Instinct GPUs, EPYC CPUs, and open fabrics to give the industry a flexible, high-performance platform built for the next generation of AI workloads.”

The Helios design is offered as a reference to OEMs, ODMs, and hyperscalers, allowing them to adopt, customize, and deploy AI infrastructure more quickly, reducing integration friction and promoting interoperability. Its architecture and modular layout are meant to simplify maintenance and upgrades in large-scale deployments.

To validate its design direction, AMD has already lined up early adoption. Oracle Cloud Infrastructure (OCI) is among the first to embrace the Helios architecture, with plans to deploy 50,000 AMD Instinct MI450 GPUs in large AI “superclusters” built on the Helios system, beginning in Q3 2026.

Analysts note that Helios positions AMD as a more formidable rival to established solutions from Nvidia, which currently dominates rack-integrated AI hardware. With Helios, AMD is betting on openness and industry collaboration as key differentiators to gain traction in the competitive AI infrastructure space.

With this move, AMD isn't just selling chips; it’s putting forward a full-stack, open design ecosystem on which cloud providers and data center operators can build. As AI workloads continue to scale, the industry will be watching closely to see whether Helios becomes a de facto standard in the years to come.

About the Author

Pranav Hotkar is a content writer at DCPulse with 2+ years of experience covering the data center industry. His expertise spans topics including data centers, edge computing, cooling systems, power distribution units (PDUs), green data centers, and data center infrastructure management (DCIM). He delivers well-researched, insightful content that highlights key industry trends and innovations. Outside of work, he enjoys exploring cinema, reading, and photography.


Tags:

AMD Helios, Rack-Scale AI, Open Compute Project, Meta Open Rack, OCI, Superclusters