
“30% to 250% Higher Performance” by Mellanox 100Gb/s IB EDR vs. Intel Omni-Path

For HPC

Mellanox Technologies, Ltd. announced that its EDR 100Gb/s InfiniBand (IB) solutions have demonstrated from 30% to 250% higher HPC application performance versus Omni-Path.

These performance tests were conducted at end-user installations and at the Mellanox benchmarking and research center, and covered a variety of HPC application segments including automotive, climate research, chemistry, bio-science, genomics and more.

Examples of extensively used mainstream HPC applications:

  • GROMACS is a molecular dynamics package designed for simulations of proteins, lipids and nucleic acids, and is one of the fastest and most broadly used applications for chemical simulations. GROMACS demonstrated a 140% performance advantage on an IB-enabled 64-node cluster.
  • NAMD, noted for its parallel efficiency, is used to simulate large biomolecular systems and plays an important role in modern molecular biology. Using IB, NAMD demonstrated a 250% performance advantage on a 128-node cluster.
  • LS-DYNA is an advanced multi-physics simulation software package used across the automotive, aerospace, manufacturing and bio-engineering industries. Using the IB interconnect, LS-DYNA demonstrated a 110% performance advantage on a 32-node cluster.

Due to its scalability and offload technology advantages, IB has demonstrated higher performance while using just 50% of the data center infrastructure, enabling the industry’s lowest TCO for these applications and HPC segments. For GROMACS, a 64-node IB cluster delivers 33% higher performance than a 128-node Omni-Path cluster; for NAMD, a 32-node IB cluster delivers 55% higher performance than a 64-node Omni-Path cluster; and for LS-DYNA, a 16-node IB cluster delivers 75% higher performance than a 32-node Omni-Path cluster.
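To make the half-the-infrastructure claim concrete, the quoted figures can be normalized to per-node throughput. This is a minimal sketch, assuming the article's cluster sizes and speedup percentages as inputs; the function name and normalization (Omni-Path cluster performance set to 1.0) are illustrative, not from the original report.

```python
def per_node_ratio(ib_nodes, opa_nodes, ib_advantage):
    """Per-node throughput of the IB cluster relative to the Omni-Path
    cluster, normalizing the Omni-Path cluster's total performance to 1.0."""
    return ((1.0 + ib_advantage) / ib_nodes) / (1.0 / opa_nodes)

# Cluster sizes and whole-cluster speedups quoted in the article.
cases = {
    "GROMACS": (64, 128, 0.33),
    "NAMD":    (32, 64, 0.55),
    "LS-DYNA": (16, 32, 0.75),
}

for app, (ib_n, opa_n, adv) in cases.items():
    print(f"{app}: {per_node_ratio(ib_n, opa_n, adv):.2f}x per-node throughput on IB")
```

Because each IB cluster here has half the nodes of its Omni-Path counterpart, even a modest whole-cluster advantage implies well over 2x the per-node throughput, which is the basis of the TCO argument.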

“IB solutions enable users to maximize their data center performance and efficiency versus proprietary competitive products. EDR IB enables users to achieve 2.5X higher performance while reducing their capital and operational costs by 50%,” said Gilad Shainer, VP marketing, Mellanox. “As a standard and intelligent interconnect, IB guarantees both backward and forward compatibility, and delivers optimized data center performance to users for any compute elements – whether they include CPUs by Intel, IBM, AMD or ARM, or GPUs or FPGAs. Utilizing the IB interconnect, companies can gain a competitive advantage, reducing their product design time while saving on their needed data center infrastructure.”

The application testing was conducted at end-user data centers and at the Mellanox benchmarking and research center; the full report will be available on the Mellanox web site.
