
Compatibility for DDN AI400X2 Storage Appliance with Nvidia DGX H100 Systems

And DDN partnership with Lambda to address needs of enterprises seeking to accelerate AI transformation

DDN (DataDirect Networks, Inc.) announced compatibility with the next generation of Nvidia DGX systems, each including eight H100 Tensor Core GPUs.


DGX H100 supercomputers are the fourth generation of Nvidia Corp.'s purpose-built AI systems, designed to handle the most taxing training workloads such as natural language processing and DL recommender models.

These workloads require large data models and high-speed throughput to deliver breakthrough results, and pairing DGX systems with the company's A3I infrastructure is a proven combination for AI centers of excellence worldwide.

AI400X2 storage appliance


The firm's AI400X2 storage appliance compatibility with DGX H100 systems builds on the firm's field-proven deployments of DGX A100-based DGX BasePOD reference architectures (RAs) and DGX SuperPOD systems, which customers have leveraged for a range of use cases. Offered as part of the A3I infrastructure solution for AI deployments, the appliance lets customers scale to support larger workloads with multiple DGX systems. DDN also supports the latest Nvidia Quantum-2 and Spectrum-4 400Gb/s networking technologies. Validated with Nvidia QM9700 Quantum-2 IB and SN4700 Spectrum-4 400GbE switches, the systems are recommended by Nvidia in the DGX BasePOD RA and DGX SuperPOD. With double the I/O capabilities of the prior generation, DGX H100 systems further necessitate performance storage solutions like the firm's AI400X2.

Takes next step to simplify AI supercomputing adoption
“The demand for scalable AI infrastructure continues to grow as enterprises realize the power that AI delivers to transform their business,” said Dr. James Coomer, SVP, products. “We see more and more organizations moving from assessing AI to applying AI to deliver business results. These organizations are looking for proven infrastructure that integrates into their data center in a simple and efficient manner, which is exactly what Nvidia DGX systems with DDN storage deliver.”

In addition to these on-premises deployment options, the company also announced a partnership with Lambda, Inc. to deliver a scalable data solution based on Nvidia DGX SuperPOD with over 31 DGX H100 systems. Lambda intends to use the systems to allow customers to reserve between 2 and 31 DGX instances backed by DDN's parallel storage and the full 3,200Gb/s GPU fabric. This hosted offering supplies rapid access to GPU-based computing without a commitment to a large data center deployment, along with a simple, competitive pricing structure. Lambda chose the company as the backend storage for this project because of the firm's established record of DGX SuperPOD deployments, as well as the expertise for storage at scale that DDN brings to the table. Lambda will also sell DGX BasePOD and DGX SuperPOD with A3I storage for customers looking to establish on-site deployments.

“As organizations continue to modernize around AI, they're experiencing explosive demand around performance and data needs,” shared David Hall, head, HPC, Lambda. “To address that need, Lambda, as a market leader in the deep learning infrastructure space, is bringing Nvidia DGX systems with DDN A3I storage into our reserved cloud offering. This provides our customers with a full-service experience coupled with industry-leading performance in a matter of weeks rather than months.”

Resources:
Blog: DDN and NVIDIA: Your Key to Creating an AI Center of Excellence
Learn more about DDN's new RAs, why performant parallel storage supplies an advantage for AI workflows, and Lambda's GPU cloud by watching DDN's on-demand video, Unleash Lightning-Fast Storage for Unprecedented AI
