OCP Global Summit 2025: MSI Highlights ORv3, DC-MHS, and MGX Server Solutions
Including an ORv3 21″ 44OU rack, OCP DC-MHS servers and motherboards, and GPU servers built on the Nvidia MGX architecture
This is a Press Release edited by StorageNewsletter.com on October 15, 2025, at 2:01 pm.
At the 2025 OCP Global Summit (booth #A55), MSI (Micro-Star INT’L Co., Ltd.), a provider of performance server solutions, highlights the ORv3 21″ 44OU rack, OCP DC-MHS platforms, and GPU servers built on the Nvidia MGX modular architecture, accelerated by the latest Nvidia Hopper GPUs and Nvidia RTX PRO 6000 Blackwell Server Edition GPUs.
These solutions target hyperscale, colocation, and AI deployments, delivering the scalability and efficiency required for next-gen data centers. On Oct 16, Chris Andrada, marketing manager, MSI, will also present an Expo Hall session titled Pioneering the Modern Datacenter with DC-MHS Architecture.
“Our focus is on helping datacenter operators bridge the gap between rapidly advancing compute technologies and real-world deployment at scale. By integrating rack-level design with open standards and GPU acceleration, we aim to simplify adoption, reduce complexity, and give the industry a stronger foundation to support the next wave of AI and data-driven applications,” said Danny Hsu, GM, enterprise platform solutions, MSI.
ORv3 Rack-Scale Integration
The company’s ORv3 21″ 44OU rack comes fully validated with integrated power, thermal, and networking, reducing engineering effort and deployment time for hyperscale environments. With 16 dual-node servers, centralized 48V power shelves, and all front-facing I/O, operators gain more space for CPUs, memory, and storage while keeping airflow clear for efficient cooling.
CD281-S4051-X2 2U 2-node DC-MHS server
The CD281-S4051-X2 2U 2-node DC-MHS server supports a single AMD EPYC 9005 CPU at up to 500W TDP per node; each node provides 12 DDR5 DIMM slots, 12 front E3.S PCIe 5.0 NVMe drives, and two PCIe 5.0 x16 slots for balanced compute, storage, and expansion. This combination provides dense performance for cloud and analytics workloads, delivered in a rack system that can be deployed faster and serviced entirely from the cold aisle.
Standardization with OCP DC-MHS Server and Motherboards
The firm’s DC-MHS portfolio offers standardized server and host processor module (HPM) designs across Intel Xeon 6 and AMD EPYC 9005 processors for CSPs and hyperscale data centers. With standardized DC-SCM modules, these platforms reduce firmware effort and enable cross-vendor interoperability. Available in M-FLW, DNO-2, and DNO-4 form factors, they provide a consistent path to deploying next-gen CPUs without redesigning entire systems.
CX270-S5062
DC-MHS options such as the CX270-S5062 2U Xeon 6 platform and the modular HPMs support high-bandwidth DDR5 memory, PCIe 5.0 for accelerators and I/O, and front-service NVMe bays, letting customers align CPU power, memory density, and drive configurations to workload needs, from cloud clusters to hyperscale data centers. Intel Xeon 6 HPMs include the D3071 (DNO-2, single-socket, 12 DIMM slots), D3061 (DNO-2, single-socket, 16 DIMM slots), and D3066 (DNO-4, single-socket, 16 DIMM slots). AMD EPYC 9005 HPMs include the D4051 (DNO-2, single-socket, 12 DIMM slots) and the D4056 (DNO-4, single-socket, 24 DIMM slots for higher memory capacity).
GPU Density with Nvidia MGX
Built on the Nvidia MGX modular architecture, MSI’s GPU servers accelerate AI workloads across training, inference, and simulation with support for the latest Nvidia Hopper GPUs and Nvidia RTX PRO 6000 Blackwell Server Edition GPUs.
- CG481-S6053 (4U) integrates dual EPYC 9005 CPUs, 8x FHFL PCIe 6.0 GPU slots, 24x DDR5 DIMM slots, and 8x 400G Ethernet networking via Nvidia ConnectX-8 SuperNICs, for large-scale AI training clusters requiring maximum GPU density and bandwidth.
- CG290-S3063 (2U) features a single Xeon 6 CPU, 4x FHFL PCIe 5.0 GPU slots, and 16x DDR5 DIMM slots, providing a compact, efficient system optimized for AI inference and fine-tuning in space-sensitive environments.
Resources:
Video: Discover how MSI’s OCP ORv3-compatible nodes deliver optimized performance for hyperscale cloud deployments.
Video: Watch MSI’s 4U and 2U Nvidia MGX AI platforms, built on Nvidia accelerated computing to deliver the performance needed for tomorrow’s AI workloads.