
SuperX Launches XN9160-B300 AI Server, Blackwell Ultra Delivers 50% More Compute Over Blackwell

8U chassis featuring Intel Xeon 6 processors, 8 NVIDIA Blackwell B300 GPUs, up to 32 DDR5 DIMMs, networking with up to 8x 800Gb InfiniBand, and 8x 2.5-inch Gen5 NVMe hot-swap bays

Super X AI Technology Ltd. announced the launch of its latest product, the XN9160-B300 AI server.

SuperX XN9160-B300 AI server, powered by Blackwell Ultra

Powered by NVIDIA’s Blackwell Ultra GPU (B300), the XN9160-B300 is designed to meet the growing demand for scalable, high-performance computing across AI training, ML, and HPC workloads. Engineered for extreme performance, the system integrates advanced networking capabilities, scalable architecture, and energy-efficient design to support mission-critical data center environments.

The XN9160-B300 AI server is purpose-built to accelerate large-scale distributed AI training and AI inference workloads, providing extreme GPU performance for intensive, high-demand applications. Optimized for GPU-supported tasks, it excels in foundation model training and inference, including reinforcement learning (RL), distillation techniques, and multimodal AI models, while also delivering high performance for HPC workloads such as climate modeling, drug discovery, seismic analysis, and insurance risk modeling.

Designed for enterprise-scale AI and HPC environments, the XN9160-B300 combines supercomputer-level performance with energy-efficient, scalable architecture, offering mission-critical capabilities in a compact, data-center-ready form factor.

The launch of the SuperX XN9160-B300 AI server marks a significant milestone in SuperX’s AI infrastructure roadmap, delivering powerful GPU instances and compute capabilities to accelerate global AI innovation.

XN9160-B300 AI Server
The SuperX XN9160-B300 AI server, unleashing extreme AI compute performance within an 8U chassis, features dual Intel Xeon 6 processors, 8 NVIDIA Blackwell B300 GPUs, up to 32 DDR5 DIMMs, and high-speed networking with up to 8x 800Gb InfiniBand.

High GPU Power and Memory
The XN9160-B300 is built as a highly scalable AI node, featuring the NVIDIA HGX B300 module housing 8 NVIDIA Blackwell B300 GPUs. This configuration provides the peak performance of the Blackwell generation, specifically designed for next-era AI workloads.

Crucially, the server delivers a massive 2,304GB of unified HBM3E memory across its 8 GPUs (288GB/GPU). This colossal memory pool is essential for eliminating memory offloading, supporting larger model residence, and managing the expansive Key/Value caches required for high-concurrency, long-context GenAI and Large Language Models.
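The memory arithmetic above can be sketched quickly. The sketch below confirms the 8x 288GB = 2,304GB aggregate and then estimates how many tokens of KV cache such a pool could hold; the model dimensions (layers, KV heads, head size, FP8 storage) and the 50% cache budget are hypothetical placeholders, not figures from SuperX or NVIDIA.

```python
# Aggregate HBM3E across the node (figures from the spec).
GPUS = 8
HBM_PER_GPU_GB = 288
total_hbm_gb = GPUS * HBM_PER_GPU_GB          # 2,304 GB across the 8 GPUs
assert total_hbm_gb == 2304

# Per-token KV-cache footprint: 2 tensors (K and V) x layers x KV heads
# x head dim, stored at 1 byte/element (FP8) -- all assumed values.
layers, kv_heads, head_dim, bytes_per_elem = 80, 8, 128, 1
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem

# Assume half the pool remains for KV cache after weights and activations.
kv_budget_bytes = total_hbm_gb * 1024**3 // 2
max_cached_tokens = kv_budget_bytes // kv_bytes_per_token
print(f"{total_hbm_gb} GB HBM, ~{max_cached_tokens:,} cacheable tokens")
```

Under these assumed dimensions the node could hold several million tokens of KV cache, which is the kind of headroom long-context, high-concurrency serving depends on.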

Extreme Inference and Training Throughput
The system leverages Blackwell Ultra’s FP4/NVFP4 precision and second-generation Transformer Engine to achieve major performance leaps. According to NVIDIA, Blackwell Ultra delivers a decisive leap over Blackwell by adding 50% more NVFP4 compute and 50% more HBM capacity per chip, enabling larger models and faster throughput without compromising efficiency. [1] Scaling out is straightforward thanks to 8x 800Gb OSFP ports supporting InfiniBand or dual 400Gb Ethernet, providing the high-speed, low-latency communication needed to connect servers into vast AI factories and SuperPOD clusters. 5th Gen NVLink interconnects further ensure that the 8 on-board GPUs communicate seamlessly, acting as a single, potent accelerator.
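The scale-out figures reduce to simple arithmetic: eight 800Gb/s ports give the node an aggregate fabric bandwidth of 6.4Tb/s, or 800GB/s. A minimal sketch (treating the ports as fully independent, which ignores protocol overhead):

```python
# Aggregate scale-out bandwidth from the 8x 800 Gb/s OSFP ports.
PORTS = 8
GBPS_PER_PORT = 800                      # gigabits per second, per port

total_gbps = PORTS * GBPS_PER_PORT       # node-to-fabric, in Gb/s
total_gbytes_per_s = total_gbps / 8      # convert bits to bytes

print(f"{total_gbps} Gb/s aggregate = {total_gbytes_per_s:.0f} GB/s")
```

Note that real-world throughput will be somewhat lower once InfiniBand or Ethernet framing overhead is accounted for.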

Robust CPU and Power Foundation
The GPU complex is supported by a robust host platform featuring dual Xeon 6 processors, providing the efficiency and bandwidth required to feed the accelerators with data. The memory subsystem is equally formidable, utilizing 32 DDR5 DIMMs supporting speeds up to 8000MT/s (MRDIMM), ensuring the host platform never bottlenecks the GPU processing.

For mission-critical reliability and sustained performance, the XN9160-B300 is equipped with 12×3000W 80 PLUS Titanium redundant power supplies, ensuring extremely high energy efficiency and stability under continuous peak load. The system also includes multiple high-speed PCIe Gen5 x16 slots and comprehensive storage options, including 8x 2.5″ Gen5 NVMe hot-swap bays.
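The PSU count also yields a quick capacity estimate. The sketch below assumes an N+N split (half the supplies held as backup); SuperX does not state the redundancy scheme, so the usable figure is illustrative only.

```python
# Usable power under an *assumed* N+N redundant PSU arrangement.
# The 12x 3000 W figure is from the spec; the N+N split is an assumption.
PSUS, WATTS_PER_PSU = 12, 3000

total_w = PSUS * WATTS_PER_PSU           # 36 kW installed capacity
usable_w_n_plus_n = total_w // 2         # 18 kW with half the PSUs as backup

print(f"{total_w/1000:.0f} kW installed, {usable_w_n_plus_n/1000:.0f} kW usable (N+N)")
```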


Technical Specs:

CPU: 2x Intel Xeon 6 P-core processors, up to 350W (SP) / 500W (AP)
GPU: 8x NVIDIA Blackwell B300
Memory: 32x 96GB DDR5-5600 RDIMM
System Disk: 2x 1,920GB SSD
Storage Disk: 8x 3.84TB NVMe U.2
Network: 8x OSFP (800Gb) from CX8 on the module
Dimensions: 8U, 447mm (H) x 351mm (W) x 923mm (D)

Market Positioning:
The XN9160-B300 is built for organizations pushing the boundaries of AI, where maximum scale, next-gen models, and ultra-low latency are core requirements:

  • Hyperscale AI Factories: For cloud providers and large enterprises building and operating trillion-parameter foundation models and highly demanding, high-concurrency AI reasoning engines.
  • Scientific Simulation and Research: For exascale scientific computing, advanced molecular dynamics, and creating comprehensive industrial or biological Digital Twins.
  • Financial Services: For real-time risk modeling, high-frequency trading simulations, and deploying complex large language models for financial analysis with ultra-low latency demands.
  • Bioinformatics and Genomics: For accelerating massive genome sequencing, drug discovery pipelines, and protein structure prediction at scales requiring the B300’s memory capacity.
  • Global Systems Modeling: For national meteorological and governmental agencies requiring extreme compute for global climate and weather modeling and highly detailed disaster prediction.

[1] NVIDIA blog: Inside NVIDIA Blackwell Ultra: The Chip Powering the AI Factory Era
