
Supermicro Expands Nvidia Blackwell Portfolio with 4U and 2-OU (OCP) Liquid-Cooled Nvidia HGX B300 Solutions

Ready for high-volume shipment, the new systems deliver GPU density and power efficiency for hyperscale data centers and AI factory deployments

Summary:

  • Introducing 4U and 2-OU (OCP) liquid-cooled Nvidia HGX B300 systems for high-density hyperscale and AI factory deployments, supported by Supermicro Data Center Building Block Solutions with DLC-2 and DLC technology, respectively
  • 4U liquid-cooled Nvidia HGX B300 systems designed for standard 19-inch EIA racks with up to 64 GPUs/rack, capturing up to 98% of system heat through DLC-2 (Direct Liquid-Cooling) technology
  • Compact and power-efficient 2-OU (OCP) Nvidia HGX B300 8-GPU system designed for 21-inch OCP Open Rack V3 (ORV3) specification with up to 144 GPUs in a single rack

Super Micro Computer, Inc. announced the expansion of its Nvidia Blackwell architecture portfolio with the introduction and shipment availability of new 4U and 2-OU (OCP) liquid-cooled Nvidia HGX B300 systems.


These latest additions are a key part of the company's Data Center Building Block Solutions (DCBBS) that deliver unprecedented GPU density and power efficiency for hyperscale data centers and AI factory deployments.

2-OU (OCP) liquid-cooled Nvidia HGX B300 system


“With AI infrastructure demand accelerating globally, our new liquid-cooled Nvidia HGX B300 systems deliver the performance density and energy efficiency that hyperscalers and AI factories need today,” said Charles Liang, president and CEO, Supermicro. “We’re now offering the industry’s most compact Nvidia HGX B300 solutions – achieving up to 144 GPUs in a single rack – while reducing power consumption and cooling costs through our proven direct liquid-cooling technology. Through our DCBBS, this is how Supermicro enables our customers to deploy AI at scale: faster time-to-market, maximum performance per watt, and end-to-end integration from design to deployment.”

The 2-OU (OCP) liquid-cooled Nvidia HGX B300 system, built to the 21-inch OCP Open Rack V3 (ORV3) specification, enables up to 144 GPUs per rack to deliver maximum GPU density for hyperscale and cloud providers requiring space-efficient racks without compromising serviceability. The rack-scale design features blind-mate manifold connections, modular GPU/CPU tray architecture, and state-of-the-art component liquid cooling solutions. The system propels AI workloads with 8 Nvidia Blackwell Ultra GPUs at up to 1,100W TDP each, while reducing rack footprint and power consumption. A single ORV3 rack supports up to 18 nodes with 144 GPUs total, scaling with Nvidia Quantum-X800 InfiniBand switches and Supermicro’s 1.8MW in-row coolant distribution units (CDUs). Combined, 8 Nvidia HGX B300 compute racks, 3 Nvidia Quantum-X800 InfiniBand networking racks, and 2 Supermicro in-row CDUs form a SuperCluster scalable unit with 1,152 GPUs.
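The rack and cluster figures above follow from simple multiplication. A quick back-of-the-envelope sketch (node, rack, and TDP figures are taken from the release; the GPU-only heat total is illustrative, since other components also draw power):

```python
# Sanity-check the GPU counts quoted for the 2-OU (OCP) configuration.
GPUS_PER_NODE = 8          # Nvidia HGX B300: 8 Blackwell Ultra GPUs per system
NODES_PER_RACK = 18        # 2-OU nodes in one 21-inch ORV3 rack
COMPUTE_RACKS = 8          # compute racks per SuperCluster scalable unit

gpus_per_rack = GPUS_PER_NODE * NODES_PER_RACK   # 144 GPUs per rack
gpus_per_unit = gpus_per_rack * COMPUTE_RACKS    # 1,152 GPUs per scalable unit

# GPU-only heat load per rack at the stated 1,100W TDP ceiling -- a rough
# input for sizing against the 1.8MW in-row CDUs (GPUs alone, not the
# full rack power budget).
gpu_heat_kw_per_rack = gpus_per_rack * 1.1       # ~158.4 kW

print(gpus_per_rack, gpus_per_unit, gpu_heat_kw_per_rack)
```

The multiplication confirms the headline numbers: 18 nodes of 8 GPUs give the 144 GPUs per rack, and 8 compute racks give the 1,152-GPU scalable unit.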

Complementing the 2-OU (OCP) model, the 4U Front I/O HGX B300 Liquid-Cooled System offers the same compute performance in a traditional 19-inch EIA rack form factor for large-scale AI factory deployments. The 4U system leverages the company’s DLC-2 technology to capture up to 98% of the heat generated by the system (1) through liquid cooling, achieving superior power efficiency with lower noise and greater serviceability for dense training and inference clusters.

The firm’s Nvidia HGX B300 systems unlock substantial performance speedups, with 2.1TB of HBM3e GPU memory per system to handle larger model sizes at the system level. In addition, both the 2-OU (OCP) and 4U platforms deliver performance gains at the cluster level by doubling compute fabric network throughput up to 800Gb/s via integrated Nvidia ConnectX-8 SuperNICs when used with Nvidia Quantum-X800 InfiniBand or Nvidia Spectrum-4 Ethernet. These improvements accelerate heavy AI workloads such as agentic AI applications, foundation model training, and large-scale multimodal inference in AI factories.
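As a rough sketch of what the doubled fabric rate implies at the system level, assuming one 800Gb/s ConnectX-8 port per GPU (the release cites the per-link rate, not the port count, so the eight-port figure is an assumption):

```python
# Hypothetical aggregate fabric bandwidth for one HGX B300 system.
LINK_GBPS = 800            # per-SuperNIC rate cited in the release
PORTS_PER_SYSTEM = 8       # assumption: one ConnectX-8 port per GPU

aggregate_gbps = LINK_GBPS * PORTS_PER_SYSTEM   # 6,400 Gb/s
aggregate_tbps = aggregate_gbps / 1000          # 6.4 Tb/s per system

print(aggregate_tbps)
```

Under that assumption, each system would expose on the order of 6.4Tb/s of compute fabric bandwidth, double what the same port count delivers at the previous 400Gb/s link rate.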

The company developed these platforms to address key customer requirements for TCO, serviceability, and efficiency. With the DLC-2 technology stack, data centers can achieve up to 40% power savings (1), reduce water consumption through 45°C warm-water operation, and eliminate chilled water and compressors in data centers. The firm’s DCBBS delivers the new systems as fully validated, tested racks ready as L11 and L12 solutions before shipment, accelerating time-to-online for hyperscale, enterprise, and federal customers.
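To make the 98% heat-capture figure concrete, a minimal sketch with a hypothetical rack IT load (the capture ratio is the one Supermicro cites for DLC-2; the 100kW load is purely illustrative):

```python
RACK_IT_POWER_KW = 100.0   # hypothetical IT load for a dense GPU rack
LIQUID_CAPTURE = 0.98      # DLC-2 heat-capture ratio cited by Supermicro

heat_to_liquid_kw = RACK_IT_POWER_KW * LIQUID_CAPTURE  # ~98 kW to warm water
heat_to_air_kw = RACK_IT_POWER_KW - heat_to_liquid_kw  # ~2 kW left for room air

print(heat_to_liquid_kw, heat_to_air_kw)
```

The point of the split is that only the small air-side remainder needs conventional room air handling; the bulk of the heat leaves in 45°C warm water, which can be rejected without chillers or compressors.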

These new systems expand Supermicro’s broad portfolio of Nvidia Blackwell platforms – including the Nvidia GB300 NVL72, Nvidia HGX B200, and Nvidia RTX PRO 6000 Blackwell Server Edition. Each of these Nvidia-Certified Systems from the company is tested to validate optimal performance for a wide range of AI applications and use cases – together with Nvidia networking and Nvidia AI software, including Nvidia AI Enterprise and Nvidia Run:ai. This provides customers with flexibility to build AI infrastructure that scales from a single node to full-stack AI factories.

(1) Supermicro liquid-cooling systems
