
… Enables Peer-to-Peer Communication Between GPUs …

With NVIDIA GPUDirect technology

Mellanox Technologies, Ltd. announced support for the new generation of NVIDIA GPUDirect technology.


NVIDIA GPUDirect technology accelerates communications between GPUs by providing a direct peer-to-peer communication data path between Mellanox’s scalable HPC adapters and NVIDIA GPUs, without transferring data to the CPU or server memory subsystem.

Without GPUDirect, GPU data must first be copied into system memory before going over the network. With GPUDirect, the interconnect adapter and the GPU exchange data directly over the PCIe 3.0 bus, bypassing the CPU and system memory. This direct-connected approach speeds up communication between GPUs in different systems by as much as 80%, while reducing end-to-end latency.
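The two data paths can be sketched from the application's point of view with a CUDA-aware MPI library, a common way GPUDirect is consumed on such clusters. This is an illustrative sketch, not code from the announcement: the `send_gpu_buffer` helper and the `cuda_aware` flag are hypothetical, and the direct path assumes an MPI build with CUDA/GPUDirect support.

```c
/* Hedged sketch: sending a GPU buffer to a peer rank with and without
 * GPUDirect. Requires an MPI + CUDA environment; the helper name and
 * flag are illustrative assumptions, not part of any announced API. */
#include <stdlib.h>
#include <mpi.h>
#include <cuda_runtime.h>

void send_gpu_buffer(float *d_buf, size_t n, int peer, int cuda_aware)
{
    if (!cuda_aware) {
        /* Path 1 (no GPUDirect): stage through host memory first.
         * The extra device-to-host copy adds latency and consumes
         * CPU cycles and system-memory bandwidth. */
        float *h_buf = malloc(n * sizeof *h_buf);
        cudaMemcpy(h_buf, d_buf, n * sizeof *h_buf, cudaMemcpyDeviceToHost);
        MPI_Send(h_buf, (int)n, MPI_FLOAT, peer, 0, MPI_COMM_WORLD);
        free(h_buf);
    } else {
        /* Path 2 (GPUDirect): a CUDA-aware MPI accepts the device
         * pointer directly, so the adapter and GPU move the data over
         * PCIe without touching the CPU or host memory. */
        MPI_Send(d_buf, (int)n, MPI_FLOAT, peer, 0, MPI_COMM_WORLD);
    }
}
```

The application-visible difference is small (the device pointer is passed straight to the MPI call), which is why the staging copy it eliminates is easy to overlook when reasoning about cluster performance.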

"The high performance and compute density of GPUs have made them a compelling solution for computationally intensive HPC applications," said Gilad Shainer, VP of market development at Mellanox Technologies. "To ensure the highest level of application performance, scalability and efficiency, the communication between GPUs within a cluster must be performed as quickly as possible. GPUDirect enables NVIDIA GPUs and Mellanox ConnectX-3 adapters to provide an optimum GPU clustering technology."

"Mellanox’s support for GPUDirect helps users maximize their cluster performance," said Sumit Gupta, senior director of the Tesla business at NVIDIA. "The ability to transfer data directly to and from GPU memory dramatically speeds up system and application performance, enabling users to run computationally intensive code and get answers faster than ever before."

GPU-based clusters are used for computationally intensive tasks such as seismic processing, computational fluid dynamics and molecular dynamics. Because the GPUs perform high-performance floating-point operations across a very large number of cores, a high-speed interconnect is needed between the platforms to deliver the bandwidth and latency the clustered GPUs require to operate efficiently and to alleviate bottlenecks in the GPU-to-GPU communication path.

Mellanox ConnectX-based adapters are InfiniBand solutions that provide the full offloading capabilities critical to avoiding CPU interrupts, data copies and system noise, while maintaining high efficiency for GPU-based clusters.
