SC25: Compal Showcases AI Server SGX30-2 Built on Nvidia HGX B300, Leveraging New Technologies to Drive the Next Generation of Intelligent Data Centers

As generative AI and HPC workloads continue to surge, data-center architecture is entering a new phase of transformation.

Compal Electronics announced that it will unveil its latest lineup of next-gen AI and HPC servers at SC25, held from November 17 to 20, highlighting platforms built on the Nvidia Blackwell architecture, innovative memory interconnects, and diversified thermal-management solutions that redefine computing efficiency in the AI era.


In terms of compute power, Compal is showcasing its latest AI server, the 10U SGX30-2, built on the Nvidia HGX B300 platform. The system leverages the Nvidia Blackwell architecture and supports up to eight Nvidia Blackwell Ultra GPUs connected through fifth-generation Nvidia NVLink for ultra-high-bandwidth data exchange between GPUs. It also features dual Intel Xeon 6 processors, delivering multi-core performance and high-speed memory channels.

Designed for large-scale AI model training, inference, and HPC workloads, the SGX30-2 achieves outstanding throughput and energy efficiency through deep coordination between CPUs and GPUs. The Xeon 6 architecture supports DDR5 and PCIe 5.0 interfaces and integrates Intel Advanced Matrix Extensions and AI-optimized instruction sets, enabling real-time task balancing and resource sharing between general-purpose and AI inference operations. This makes the SGX30-2 a true next-gen computing platform for the AI era.

SX420-2A


Another highlight, the SX420-2A, is based on the Nvidia MGX reference architecture. It features a 4U high-density design compatible with both EIA 19-inch and ORv3 21-inch racks and supports up to eight Nvidia RTX PRO 6000 Blackwell Server Edition GPUs. This Nvidia RTX PRO Server from Compal is available in configurations that support the latest high-performance networking technologies, including Nvidia BlueField-3 DPUs and Nvidia ConnectX-8 SuperNICs with a built-in PCIe Gen 6 switch, further enhancing data-center connectivity and scalability. With this powerful combination, enterprises can accelerate a range of enterprise workloads, including AI reasoning, agentic AI, digital twins, robotics simulation, and scientific research. Compared with the previous generation, the Nvidia RTX PRO 6000 Blackwell GPU delivers over 5x higher performance for AI inference and physical-simulation workloads while increasing performance per dollar.

At the system level, Compal will demonstrate next-gen architectures for memory expansion and data-flow optimization. In this evolving design, the system establishes a direct data path between GPU memory and storage through PCIe interfaces, combined with intelligent DMA offload and low-latency data-transfer mechanisms. This approach transforms traditional NVMe storage into a memory-extended layer, enabling near-DRAM access speeds while reducing CPU workload. In addition, RDMA technology over InfiniBand or RDMA over Converged Ethernet (RoCE) protocols allows direct data movement between servers and data-center nodes, enabling cross-node memory sharing and flexible resource allocation.
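The direct GPU-memory-to-storage path described above is what NVIDIA exposes through the GPUDirect Storage (cuFile) API, which the exhibit's "GPU-Direct Storage" framing points to. The following is a minimal sketch of reading a file straight from NVMe into GPU memory without a CPU bounce buffer, assuming a CUDA system with the cuFile library installed; the file path and buffer size are illustrative, and error handling is omitted for brevity:

```cuda
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include <cufile.h>   // GPUDirect Storage (cuFile) API

int main(void) {
    const size_t size = 1 << 20;   // 1 MiB read, illustrative
    void *dev_buf = NULL;

    cuFileDriverOpen();            // initialize the GDS driver

    // O_DIRECT lets the DMA engine bypass the page cache; path is illustrative
    int fd = open("/data/sample.bin", O_RDONLY | O_DIRECT);

    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;

    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);   // register the file with cuFile

    cudaMalloc(&dev_buf, size);
    cuFileBufRegister(dev_buf, size, 0); // pin the GPU buffer for DMA

    // DMA directly from NVMe into GPU memory, offloading the CPU
    cuFileRead(fh, dev_buf, size, /*file_offset=*/0, /*dev_offset=*/0);

    cuFileBufDeregister(dev_buf);
    cuFileHandleDeregister(fh);
    cudaFree(dev_buf);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```

The cross-node half of the paragraph, RDMA over InfiniBand or RoCE, follows the same principle at network scale: NICs move data between remote memory regions without involving either host CPU.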

By integrating these key technologies, the company showcases an AI data-center framework that is unified, reconfigurable, and energy-efficient, advancing from traditional PCIe-based systems toward a new generation of GPUDirect Storage designs ideal for HPC architectures.

The SC25 exhibit will also highlight Compal’s multi-tier thermal portfolio spanning air cooling, liquid cooling, and immersion cooling technologies. Through innovative heat-management design, the company demonstrates how AI servers can sustain stable performance under extreme power density while maintaining optimal energy efficiency, a key step toward sustainable data-center operations.

“The rise of generative AI and HPC is shifting data centers from compute-centric to data-centric architectures, with Storage-as-Memory at the core. As storage reaches memory-class latency and bandwidth, data flows seamlessly across storage, memory, and compute layers, forming a unified, reconfigurable resource pool. Compal is committed to driving this evolution through advanced system integration and heterogeneous computing technologies, enabling a new era of data-driven and energy-efficient infrastructure,” said Alan Chang, VP, infrastructure systems BU, Compal.

Exhibition Information:

  • Event: Supercomputing Conference 2025 (SC25)
  • Date: November 17–20, 2025
  • Location: St. Louis, MO, USA
  • Booth: #4232