SC25: ASUS Showcases Nvidia GB300 NVL72 Systems
Showcasing complete portfolio built on Nvidia Blackwell architecture for scalable supercomputing and AI enablement
This is a Press Release edited by StorageNewsletter.com on November 20, 2025, at 2:02 pm.

Asus (ASUSTeK Computer Inc.) announced a showcase of its AI infrastructure portfolio accelerated by Nvidia Blackwell and Blackwell Ultra architectures at SC25, in booth #3732.
Driven by its ‘Ubiquitous AI. Incredible Possibilities.’ strategy, the company is going all in on AI, aiming to accelerate clients’ time to market with superior computing performance. The firm positions itself as a total infrastructure solution provider, offering robust, scalable cloud and on-premises solutions for diverse AI workloads. This is achieved through the seamless integration of the latest Nvidia compute platforms with advanced cooling, network orchestration, and large-scale deployment capabilities. By providing a full spectrum of AI solutions, from personal workstations to national supercomputing systems, Asus is committed to democratizing access to powerful AI technologies for everyone.
Rack-scale AI infrastructure with Nvidia GB300 NVL72
XA GB721-E2
At the top of the portfolio, the XA GB721-E2, an Asus AI POD built on Nvidia GB300 NVL72, embodies a field-proven rack-scale AI architecture, accelerated by 72 Nvidia Blackwell Ultra GPUs and 36 Nvidia Grace CPUs. Combining a 100% liquid-cooling system with integrated switch trays, the system delivers deployment-ready scalability and energy-efficient performance, offering 10-petaflops-class computing power for enterprise AI and national-cloud workloads.
ESC NM2N721-E1
Building on this strong foundation, the ESC NM2N721-E1, accelerated by Nvidia GB200 NVL72, marks a major step forward in the sovereign-AI domain, underscoring the capability to support national-scale AI platforms through fully integrated storage and compute architectures. Asus also announced the deployment of the ESC NM2N721-E1, built on the Nvidia GB200 NVL72 platform, reaffirming its strength in delivering Nvidia Blackwell architecture for large-scale, industry-backed AI initiatives.
All-New Asus ESC8000A-E13X system with Nvidia RTX PRO Server supporting Nvidia ConnectX-8 SuperNIC
Making its debut at SC25, the Asus ESC8000A-E13X is a 4U Nvidia RTX PRO Server based on the Nvidia MGX architecture, built to accelerate enterprise AI and industrial HPC workloads. Powered by two AMD EPYC 9005 series processors and accelerated by eight Nvidia RTX PRO 6000 Blackwell Server Edition GPUs, it delivers the compute density and efficiency needed for large-model training and inference. Integrated with the Nvidia ConnectX-8 SuperNIC, the system offers ultra-fast 400G Nvidia InfiniBand/Ethernet connectivity per QSFP port and supports up to eight NVMe drives for high-speed storage. Its optimized 4U air-cooled design ensures sustained performance and reliability, even under the most demanding AI workloads.
From data center to personal AI development
The company also showcases a comprehensive lineup accelerated by Nvidia technologies that spans every scale of AI computing – from large-scale data centers to personal systems.
XA NB3I-E12
At the core, the XA NB3I-E12, based on the Nvidia HGX B300 system, delivers performance and air-cooled efficiency for high-density AI training and HPC workloads. Designed as a foundation for AI Factory deployment, it supports large-model training and multi-GPU scale-up through Nvidia NVLink, enabling organizations to accelerate AI development pipelines from simulation and model training to full production. Complementing it, the ESC NB8-E11, based on the Nvidia HGX B200 system, brings enterprise-grade flexibility to inference and cluster-level AI applications, providing the compute density and scalability required for AI Factory edge and production environments.
ExpertCenter Pro ET900N G3 supercomputer
Extending the ecosystem to creators and developers, the ExpertCenter Pro ET900N G3 supercomputer, accelerated by the Nvidia Grace Blackwell Ultra Superchip, brings powerful AI computing to professional creators. Meanwhile, the compact Asus Ascent GX10 personal AI supercomputer, accelerated by the Nvidia GB10 Grace Blackwell Superchip, enables next-gen AI exploration in a desktop form factor.
Professional Services: Uniting deployment and excellence
The company maintains a strong ecosystem of partners that supports every stage of the data center lifecycle – from design and validation to deployment and management. At SC25, Asus and Vertiv are partnering to accelerate rack-scale AI for Nvidia GB300 NVL72 platforms. Vertiv augments Asus platforms with the industry’s complete power and cooling portfolio, including advanced CDU solutions. Backed by first-mover, validated grid-to-chip reference architectures for the Nvidia GB300 NVL72 platform, 800VDC power architecture, and the gigawatt-scale Nvidia Omniverse DSX Blueprint, the collaboration de-risks thermal and power integration. This partnership improves energy efficiency, enables industrial-scale deployment, and accelerates time-to-first-token for AI factories.
By combining Nvidia Blackwell and Blackwell Ultra architectures, advanced cooling, storage, networking, and world-class hardware-software integration, the company continues to advance AI and supercomputing infrastructure that empowers innovation across enterprise, research, and sovereign applications. Through Asus Professional Services, customers gain access to expert engineering, validation, and deployment support, while AIDC and ACC streamline rollout, monitoring, and lifecycle management across multi-node and multi-rack clusters – ensuring reliability, scalability, and rapid time-to-value for every AI deployment.