
Computex 2025: ASUS Advanced AI POD Design Built with NVIDIA

Enterprise-optimized reference architectures for accelerated AI infrastructure solutions

At Computex 2025, ASUS (ASUSTeK Computer Inc.) announced that it is pioneering the next wave of intelligent infrastructure with the launch of the NVIDIA Enterprise AI Factory validated design, featuring advanced ASUS AI POD designs with optimized reference architectures.

These solutions are available as NVIDIA-Certified Systems across NVIDIA Grace Blackwell, HGX, and MGX platforms, supporting both air-cooled and liquid-cooled data centers. Engineered to accelerate agentic AI adoption at every scale, these innovations deliver unmatched scalability, performance, and thermal efficiency, making them the ultimate choice for enterprises seeking to deploy AI at unprecedented speed and scale.

NVIDIA Enterprise AI Factory with ASUS AI POD
The validated NVIDIA Enterprise AI Factory with ASUS AI POD design provides guidance for developing, deploying, and managing agentic AI, physical AI, and HPC workloads on the NVIDIA Blackwell platform on-premises. Designed for enterprise IT, it provides accelerated computing, networking, storage, and software to help deliver faster time-to-value AI factory deployments while mitigating deployment risks.

Below are the reference architecture designs that help clients use approved practices, acting as a knowledge repository and a standardized framework for diverse applications.

For massive-scale computing, the advanced ASUS AI POD, accelerated by NVIDIA GB200/GB300 NVL72 racks and incorporating NVIDIA Quantum InfiniBand or Spectrum-X Ethernet networking platforms, features liquid cooling to enable a non-blocking 576-GPU cluster across eight racks, or an air-cooled solution to support one rack with 72 GPUs. This ultra-dense, ultra-efficient architecture redefines AI reasoning computing performance and efficiency.

AI-ready racks: Scalable power for LLMs and immersive workloads
ASUS presents NVIDIA MGX-compliant rack designs with the ESC8000 series, featuring dual Intel Xeon 6 processors and RTX PRO 6000 Blackwell Server Edition GPUs with the latest NVIDIA ConnectX-8 SuperNIC, supporting speeds of up to 800Gb/s, along with other scalable configurations, delivering expandability and performance for state-of-the-art AI workloads. Integration with the NVIDIA AI Enterprise software platform provides highly scalable, full-stack server solutions that meet the demanding requirements of modern computing.

In addition, the NVIDIA HGX reference architecture optimized by ASUS delivers unmatched efficiency, thermal management, and GPU density for accelerated AI fine-tuning, LLM inference, and training. Built on the company’s XA NB3I-E12 with NVIDIA HGX B300 or ESC NB8-E11 embedded with NVIDIA HGX B200, this centralized rack solution offers extensive manufacturing capacity for liquid-cooled or air-cooled rack systems, ensuring timely delivery, reduced TCO, and consistent performance.

Engineered for AI Factory, enabling next-gen agentic AI
Integrated with NVIDIA’s agentic AI showcase, ASUS infrastructure supports autonomous decision-making AI with real-time learning and scalable AI agents for business applications across industries.

ASUS VS320D-RS26

As a global leader in AI infrastructure solutions, the company provides complete data center excellence with both air- and liquid-cooled options, delivering unmatched performance, efficiency, and reliability. ASUS also delivers ultra-high-speed networking, cabling, and storage rack architecture designs with NVIDIA-certified storage, the RS501A-E12-RS12U as well as the VS320D series, to ensure seamless scalability for AI/HPC applications. Additionally, advanced SLURM-based workload scheduling and NVIDIA UFM fabric management for NVIDIA Quantum InfiniBand networks optimize resource utilization, while the WEKA Parallel File System and ASUS ProGuard SAN Storage provide high-speed, scalable data handling.
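To illustrate the SLURM-based workload scheduling mentioned above, a minimal batch script for a multi-GPU job on such a cluster might look like the following sketch. The partition name, resource counts, and training entry point are hypothetical placeholders for illustration, not part of the ASUS or NVIDIA reference design:

```shell
#!/bin/bash
# Hypothetical SLURM batch script for an 8-GPU training job.
# Partition, job name, and script path are placeholders only.
#SBATCH --job-name=llm-finetune
#SBATCH --partition=gpu          # placeholder partition name
#SBATCH --nodes=1                # single node
#SBATCH --gres=gpu:8             # request 8 GPUs on the node
#SBATCH --cpus-per-task=32
#SBATCH --time=04:00:00          # 4-hour wall-clock limit

srun python train.py             # placeholder training entry point
```

Submitted with `sbatch`, a script like this lets the scheduler pack GPU jobs across racks, which is the resource-utilization role the article attributes to SLURM in these deployments.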

The company also provides software platforms and services, including ASUS Control Center (Data Center Edition) and ASUS Infrastructure Deployment Center (AIDC), enabling the development, orchestration, and deployment of AI models. The firm’s L11/L12-validated solutions empower enterprises to deploy AI at scale with confidence through world-class deployment and support. From design to deployment, ASUS is the trusted partner for next-generation AI Factory innovation.

Availability:
ASUS servers are available worldwide.

Resources:
ASUS AI POD with NVIDIA GB200 NVL72: ESC NM2N721-E1 | ASUS Servers
ASUS AI POD with NVIDIA GB300 NVL72: ASUS AI POD with NVIDIA GB300 NVL72
NVIDIA-Certified Systems: ASUS NVIDIA Certified Systems
