SuperX Launches XN9160-B200 AI Server Powered by NVIDIA Blackwell GPU
Engineered to meet rising demand for scalable, high-performance compute across AI training, ML, and HPC workloads
This is a Press Release edited by StorageNewsletter.com on August 11, 2025 at 2:02 pm
Super X AI Technology Ltd. announced the launch of its latest product, the XN9160-B200 AI server.
Powered by NVIDIA’s Blackwell-architecture B200 GPU, this next-gen AI server is engineered to meet the rising demand for scalable, high-performance compute across AI training, ML, and HPC workloads.
The XN9160-B200 AI Server is purpose-built to accelerate large-scale distributed AI training and AI inference workloads. It is optimized for GPU-intensive tasks, particularly training and inference of foundation models using reinforcement learning (RL) and distillation techniques, multimodal model training and inference, and HPC applications such as climate modeling, drug discovery, seismic analysis, and insurance risk modeling. Its performance rivals that of a traditional supercomputer, offering enterprise-grade capabilities in a compact form.
The launch of the SuperX XN9160-B200 AI server marks a significant milestone in the company’s AI infrastructure roadmap, delivering powerful GPU instances and compute capabilities to accelerate global AI innovation.
XN9160-B200 AI Server
The all-new XN9160-B200 features 8 NVIDIA Blackwell B200 GPUs, 5th Gen NVLink technology, 1,440GB of high-bandwidth memory (HBM3E), and 6th Gen Intel Xeon processors, unleashing extreme AI compute performance within a 10U chassis.
Built for AI – Cutting-edge Training Performance
The SuperX XN9160-B200 is powered by its core engine: 8 NVIDIA Blackwell B200 GPUs linked with 5th Gen NVLink technology, providing ultra-high inter-GPU bandwidth of up to 1.8TB/s. This accelerates large-scale AI model training, delivering up to a 3x speed improvement and drastically shortening the R&D cycle for tasks such as pre-training and fine-tuning trillion-parameter models. For inference, the platform makes a similar leap: with 1,440GB of high-performance HBM3E memory running at FP8 precision, it achieves a throughput of 58 tokens per second per card on the GPT-MoE 1.8T model, compared to 3.5 tokens per second on the previous-gen H100 platform, a performance increase of up to 15x.
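To put the headline figures in per-GPU and per-node terms, the quoted memory capacity and per-card throughput break down as in the short sketch below; this is illustrative arithmetic based only on the numbers stated above, not a measured benchmark:

```python
# Illustrative arithmetic from the figures quoted above; not a benchmark result.
gpus_per_node = 8                 # B200 GPUs in one XN9160-B200
hbm_total_gb = 1440               # total HBM3E quoted for the node
per_gpu_tokens_per_s = 58         # GPT-MoE 1.8T inference at FP8, per card (quoted)

print(hbm_total_gb / gpus_per_node)            # 180.0 GB of HBM3E per GPU
print(per_gpu_tokens_per_s * gpus_per_node)    # ~464 tokens/s aggregate per 8-GPU node
```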
6th Gen Intel Xeon processors, paired with 5,600-8,000 MT/s DDR5 memory and all-flash NVMe storage, round out the system. These components accelerate data pre-processing, keep high-load virtualization environments running smoothly, and improve the efficiency of complex parallel computing, allowing AI model training and inference tasks to complete stably and efficiently.
To ensure operational reliability, the XN9160-B200 uses a multi-path power redundancy design: 1+1 redundant 12V power supplies and 4+4 redundant 54V GPU power supplies. This mitigates the risk of single points of failure and keeps the system running continuously and stably through unexpected events, providing uninterrupted power for critical AI missions.
The SuperX XN9160-B200 has a built-in AST2600 intelligent management controller that supports convenient remote monitoring and management. Each server undergoes over 48 hours of full-load stress testing, cold and hot boot validation, and high/low-temperature aging screening, combined with multiple production quality control processes to ensure reliable delivery. The company also provides a 3-year warranty and professional technical support, offering a full-lifecycle service guarantee to help enterprises navigate the AI wave.
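AST2600-class management controllers typically expose standard out-of-band interfaces such as IPMI and DMTF Redfish. As a minimal sketch, assuming the XN9160-B200's BMC firmware exposes a standard Redfish endpoint (the release does not document the exact interface), an administrator could poll system power state and health remotely as follows; the hostname, credentials, and resource paths are placeholders, not documented SuperX values:

```python
# Minimal sketch: polling an AST2600-class BMC over a standard Redfish API.
# Assumes Redfish is enabled on the BMC; all addresses/credentials are placeholders.
import requests

BMC = "https://bmc.example.internal"   # hypothetical BMC address
session = requests.Session()
session.auth = ("admin", "changeme")   # placeholder credentials
session.verify = False                 # lab-only; use a proper CA bundle in production

# Enumerate the systems exposed by the BMC and print name, power state, and health.
systems = session.get(f"{BMC}/redfish/v1/Systems").json()
for member in systems.get("Members", []):
    system = session.get(f"{BMC}{member['@odata.id']}").json()
    print(system.get("Name"),
          system.get("PowerState"),
          system.get("Status", {}).get("Health"))
```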
Market Positioning
The XN9160-B200 is designed for global enterprises and research institutions with demanding compute needs, especially:
- Large Tech Companies: For training and deploying foundation models and GenAI applications
- Academic and Research Institutions: For complex scientific simulations and modeling
- Finance and Insurance: For risk modeling and real-time analytics
- Pharmaceutical and Healthcare: For drug screening and bioinformatics
- Government and Meteorological Agencies: For climate modeling and disaster prediction