
Mellanox Delivers 40Gb/s InfiniBand HCA

With a PCIe Gen2 host bus interface

Mellanox Technologies, Ltd. announced the availability of the dual-port ConnectX IB 40Gb/s (QDR) InfiniBand Host Channel Adapter (HCA), the industry’s highest-performing adapter for server, storage, and embedded applications. The adapter products deliver the highest data throughput and lowest latency of any standard PCI Express adapter available today, thereby accelerating applications and data transfers in High Performance Computing (HPC) and enterprise data center (EDC) environments. According to IDC, the total InfiniBand HCA market is expected to increase at a compound annual growth rate (CAGR) of 51.5% to 991,878 ports in 2011, with a strong ramp for 40Gb/s adapters from 19,182 ports in 2008 to 781,104 ports in 2011.

With the growing deployment of multiple multi-core processors in server and storage systems, overall platform efficiency and CPU and memory utilization depend increasingly on interconnect bandwidth and latency. For optimal performance, platforms with several multi-core processors can require interconnect bandwidth of more than 10Gb/s or even 20Gb/s. The new ConnectX adapter products deliver 40Gb/s bandwidth and lower latency, helping to ensure that no CPU cycles are wasted on interconnect bottlenecks. As a result, ConnectX adapters help IT managers maximize their return on investment in CPU and memory for server and storage platforms.
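
As an illustrative back-of-envelope check (the per-core figure is an assumption, not Mellanox data): a two-socket node with eight cores, each sustaining roughly 0.5 GB/s of MPI traffic, already generates 8 × 0.5 GB/s = 4 GB/s, or about 32 Gb/s, of interconnect demand, which is more than a 10Gb/s or 20Gb/s link can carry but fits within a 40Gb/s link.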

“Mellanox continues to lead the HPC and enterprise data center industry with advanced I/O products that deliver unparalleled performance for the most demanding applications,” said Thad Omura, vice president of product marketing at Mellanox Technologies. “We are excited to see efforts to deploy 40Gb/s InfiniBand networks later this year, which can leverage the mature InfiniBand software ecosystem established over the last several years at 10 and 20Gb/s speeds.”

Enterprise
vertical applications, such as customer relationship management,
database, financial services, insurance services, retail,
virtualization, and web services are demanding the leading I/O
performance offered by ConnectX adapters to optimize data center
productivity. High performance applications such as bioscience and drug
research, data mining, digital rendering, electronic design automation,
fluid dynamics, and weather analysis are ideal for ConnectX adapters as
they require the highest throughput to support the I/O requirements of
multiple processes that each require access to large datasets to
compute and store results.

“Our strategic partnerships with leading-edge companies such as Mellanox enable Amphenol to be on the forefront of this exciting new technology,” said John Majernik, Product Marketing Manager of Amphenol Interconnect Products. “We will be among the first companies to bring QDR technology to a large-scale HPC infrastructure, where our high-speed QDR copper cables will connect with Mellanox ConnectX adapters.”

“Gore’s copper cables meet the stringent demands of 40Gb/s data rates and will satisfy the majority of QDR interconnect requirements in clustered environments,” said Eric Gaver, global business leader for Gore’s high data rate cabling products. “We continue to work with Mellanox to bring to market both passive and low-power active copper technologies, which will be essential for cost-effective cluster scalability at QDR data rates.”

“Companies are using servers with multi-core Intel Xeon processors to solve very complex problems,” said Jim Pappas, Director of Server Technology Initiatives for Intel’s Digital Enterprise Group. “Intel Connects Cables and high-bandwidth I/O delivered by solutions such as ConnectX via our PCI Express 2.0 servers are key for applications to deliver peak performance in clustered deployments. We also continue to work closely with Mellanox and the industry on development and testing of our new 40Gb/s optical fiber cable products.”

“Luxtera’s 40Gb/s Optical Active Cable cost effectively extends the reach of QDR InfiniBand, enabling large clusters to be implemented in data center environments,” said Marek Tlalka, vice president of marketing for Luxtera. “We are proud to be working with Mellanox to ensure interoperability.”

“The new Mellanox 40Gb/s InfiniBand adapters address a critical need for faster, lower-latency bandwidth in rapidly growing cluster-based data center interconnects,” said Tony Stelliga, CEO of Quellan Inc. “Quellan is pleased to be working with Mellanox and the InfiniBand industry on Active Copper Cabling that will enable this 40Gb/s throughput to run over thinner, lighter, lower-power interconnects.”

“The demand for semiconductor and optical connectivity solutions is rapidly growing, especially for modules that can operate under the most intensive conditions at an aggregate bandwidth of 40Gb/s,” said Gary Moskovitz, president and CEO, Reflex Photonics. “Reflex Photonics supports the efforts of companies like Mellanox, and we are addressing the market needs for cable solutions that are longer, lighter and less expensive through our InterBOARD line of products.”

“QDR InfiniBand solutions further exemplify the need for innovative optical interconnect solutions in HPC and enterprise data centers,” said Dr. Stan Swirhun, senior vice president and general manager of Zarlink’s Optical Communications group. “With Zarlink’s industry-leading DDR active optical cables ramping in HPC solutions, Zarlink is looking forward to working with industry leaders such as Mellanox to enable 40Gb/s optical interconnects.”

The dual-port 40Gb/s ConnectX IB InfiniBand adapters maximize server and storage I/O throughput to enable the highest application performance. These products have a PCI Express 2.0 5GT/s (PCIe Gen2) host bus interface that complements the 40Gb/s InfiniBand ports to deliver up to 6460 MB/s of bi-directional MPI application bandwidth over a single port, with latencies of less than 1 microsecond. These and all ConnectX IB products support hardware-based virtualization, enabling data centers to save power and cost by consolidating slower-speed I/O adapters and the associated cabling complexity.
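
As a rough consistency check on these figures (the link width is an assumption; ConnectX HCAs commonly use a PCIe x8 slot): PCIe Gen2 at 5GT/s with 8b/10b encoding carries about 500 MB/s per lane per direction, so an x8 link offers roughly 4 GB/s each way, and the quoted 6460 MB/s of bi-directional MPI bandwidth fits within that envelope once protocol overheads are subtracted. Figures of this kind are normally measured with a two-rank MPI micro-benchmark. The sketch below is a minimal ping-pong test, not the benchmark Mellanox used; its message size and iteration count are arbitrary assumptions, and production results usually come from established suites such as the OSU micro-benchmarks.

/*
 * Minimal MPI ping-pong sketch (illustrative only).
 * Message size and iteration count are arbitrary assumptions;
 * sub-microsecond latency figures are normally taken with very
 * small messages, while bandwidth figures use large ones.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 1000;
    const int msg_size = 1 << 20;   /* 1 MiB per message (assumed) */
    char *buf = malloc(msg_size);
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            /* Rank 0 sends, then waits for the echo. */
            MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Rank 1 echoes every message back. */
            MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - t0;
    if (rank == 0) {
        /* Each iteration moves msg_size bytes in each direction. */
        double mbytes = 2.0 * iters * (double)msg_size / 1.0e6;
        printf("avg round trip: %.2f us, bandwidth: %.1f MB/s\n",
               1.0e6 * elapsed / iters, mbytes / elapsed);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Run it with two ranks placed on different nodes (for example, mpirun -np 2 ./pingpong) so that the traffic actually crosses the InfiniBand fabric rather than shared memory.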

The ConnectX IB device and adapter cards are available today. The device’s compact design and low power requirements make it well suited for blade server and Landed on Motherboard designs (order number MT25408A0-FCC-QI). Adapter cards are available with the established microGiGaCN connector (MHJH29-XTC) as well as the newly adopted QSFP connector (MHQH29-XTC). Switches from major OEMs supporting 40Gb/s InfiniBand are expected later this year.

 

Mellanox Technologies, Ltd.
