NVIDIA reveals the future of gigawatt AI factories at the OCP Global Summit. The NVIDIA Vera Rubin NVL144 MGX-generation rack servers are set to meet growing inference demands, with the follow-on NVIDIA Kyber racks scaling to 576 Rubin Ultra GPUs. Industry pioneers like Foxconn, CoreWeave, and HPE are designing for 800-volt DC (VDC) data centers for improved efficiency and performance.
The Vera Rubin NVL144 MGX compute tray offers an energy-efficient, liquid-cooled design with modular expansion bays for networking and inference. NVIDIA plans to contribute the compute tray innovations as an open standard to the OCP consortium. The MGX architecture boosts AI factory performance and scalability, accelerating the transition to gigawatt-scale infrastructure.
NVIDIA Kyber, the successor to the Oberon rack architecture, will support 576 Rubin Ultra GPUs per rack by 2027. The move to an 800 VDC architecture brings increased power efficiency and scalability. NVIDIA Kyber racks are engineered for hyperscale AI data centers, optimizing performance and reliability for generative AI workloads.
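The efficiency case for higher-voltage distribution comes down to Ohm's-law arithmetic: delivering the same power at a higher voltage draws less current, and resistive losses scale with the square of that current. The sketch below is a minimal, hypothetical illustration with assumed numbers (a 1 MW load and a simplified DC-equivalent baseline), not published NVIDIA specifications; real facility distribution involves three-phase AC, power factor, and multiple conversion stages.

```python
# Illustrative arithmetic only: why 800 VDC distribution reduces conduction losses.
# RACK_POWER_W, LEGACY_VOLTS, and the single-conductor model are assumptions
# chosen for round numbers, not NVIDIA-published figures.

def current_amps(power_w: float, volts: float) -> float:
    """Current needed to deliver a given power at a given DC voltage (I = P / V)."""
    return power_w / volts

RACK_POWER_W = 1_000_000   # assume a 1 MW load for illustration
LEGACY_VOLTS = 415         # treated here as a simple DC-equivalent baseline
HVDC_VOLTS = 800           # 800 VDC distribution

i_legacy = current_amps(RACK_POWER_W, LEGACY_VOLTS)
i_hvdc = current_amps(RACK_POWER_W, HVDC_VOLTS)

# For a fixed conductor resistance R, resistive loss is P_loss = I^2 * R,
# so the loss ratio between the two cases is (I_legacy / I_hvdc)^2.
loss_ratio = (i_legacy / i_hvdc) ** 2

print(f"Current at {LEGACY_VOLTS} V: {i_legacy:,.0f} A")
print(f"Current at {HVDC_VOLTS} V:  {i_hvdc:,.0f} A")
print(f"Relative I^2*R loss for the same conductor: {loss_ratio:.2f}x")
```

Under these assumptions the 800 VDC bus carries roughly half the current and incurs a few times less conduction loss in the same copper, which is the intuition behind the scalability and materials benefits claimed for the move.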
Intel and Samsung Foundry join the NVIDIA NVLink Fusion ecosystem, enabling seamless integration of custom silicon into data center architectures. A collaboration between NVIDIA and Intel will bring Intel x86 CPUs into NVIDIA infrastructure platforms. Samsung Foundry offers design-to-manufacturing expertise for custom silicon, meeting the demand for custom CPUs and XPUs.
Over 20 NVIDIA partners are delivering rack servers built on open standards for gigawatt AI factories. Silicon providers like ADI, AOS, and Texas Instruments, along with power system and data center providers, are aligning on open standards for the MGX rack server reference architecture. This ecosystem is crucial for scaling the next generation of AI factories.
Read more at NVIDIA: NVIDIA, Partners Drive Next-Gen Efficient Gigawatt AI Factories in Buildup for Vera Rubin