ISC High Performance 2025: MSI Showcasing AI Servers Designed for Next-Gen AI and Accelerated Computing Workloads
Including the CG480-S5063 4U AI server, the CG290-S3063 2U AI server, the CX270-S5062 2U server built on the DC-MHS standard, and the Open Compute CD281-S4051-X2, a 21” ORv3-compliant 2OU 2-node server
This is a Press Release edited by StorageNewsletter.com on June 13, 2025 at 2:01 pm
At ISC High Performance 2025, in Hamburg, Germany, MSI (Micro-Star INT’L Co., Ltd.) is showcasing its enterprise-grade, high-performance server platforms at booth #E12.
Built on standardized and modular architectures, the company’s AI servers are designed to power next-gen AI and accelerated computing workloads, enabling enterprises to rapidly advance their AI innovations.
“As AI workloads continue to grow and evolve toward inference-driven applications, we’re seeing a significant shift in how enterprises approach AI deployment,” said Danny Hsu, GM, enterprise platform solutions, MSI. “With modular and standards-based architectures, enterprise data centers can now adopt AI technologies more quickly and cost-effectively than ever before. This marks a new era where AI is not only powerful but also increasingly accessible to businesses of all sizes.”
Built on the NVIDIA MGX modular architecture, the company’s AI servers deliver a powerful and flexible foundation for accelerated computing, tailored to meet the evolving needs of diverse AI workloads.
CG480-S5063 front and rear
The CG480-S5063, MSI’s latest 4U AI server, is purpose-built for high-performance tasks such as large language model (LLM) training, deep learning, and fine-tuning. It supports dual Xeon 6 processors and features 8 FHFL dual-width GPU slots, compatible with NVIDIA H200 NVL and NVIDIA RTX PRO 6000 Blackwell Server Edition, with support for GPUs up to 600W. Equipped with 32xDDR5 DIMM slots and 20xPCIe 5.0 E1.S NVMe bays, the CG480-S5063 ensures exceptional memory bandwidth and lightning-fast data throughput. Its modular architecture and expansive storage design make it a future-ready platform, ideal for next-gen AI deployments that demand unmatched performance and scalability.
CG290-S3063 front and rear
The CG290-S3063 is a 2U AI server platform powered by the NVIDIA MGX modular architecture, designed to meet the growing demands of AI workloads in enterprise data centers. It supports a single-socket Xeon 6 processor, up to 16xDDR5 DIMM slots, and 4xFHFL dual-width GPU slots with power support up to 600W – ideal for small-scale inference and lightweight AI workloads. With PCIe 5.0 expansion, 4 rear 2.5-inch NVMe drive bays, and dual M.2 NVMe slots, the CG290-S3063 offers fast data throughput, flexible storage, and a scalable design for next-gen AI applications.
CX270-S5062 front and rear
The CX270-S5062, built on the DC-MHS (Datacenter Modular Hardware System) standard, is a 2U server featuring dual Xeon 6 processors designed for demanding enterprise compute workloads. Equipped with 32xDDR5 DIMM slots and up to 24xPCIe 5.0 U.2 NVMe bays, it delivers exceptional memory bandwidth and high-speed storage performance, making it well-suited for virtualization, database management, and other high-performance applications.
For hyperscale cloud environments, MSI offers the Open Compute CD281-S4051-X2 – a 21” ORv3-compliant 2OU, 2-node server optimized for large-scale deployments. Each node is powered by a single AMD EPYC 9005 Series processor supporting up to 500W TDP, equipped with 12xDDR5 DIMM slots and up to 12xPCIe 5.0 E3.S NVMe bays. This configuration delivers outstanding memory bandwidth, dense storage capacity, and fast data transfer. Featuring Extended Volume Air Cooling (EVAC) CPU heatsinks and compatibility with the ORv3 48VDC power architecture, the platform offers energy-efficient operation and scalable performance, making it a strong fit for next-gen cloud data centers.
Resources:
Video: MSI’s 4U MGX AI platform, built on NVIDIA accelerated computing, delivers the performance users need for tomorrow’s AI workloads.
Video: Discover how MSI’s OCP ORv3-compatible nodes deliver optimized performance for hyperscale cloud deployments.