
History 2004: IB Gets Momentum for Super Clusters

Due in large part to superior speed compared to Ethernet and FC

InfiniBand (IB) interconnect solutions seemed to be running out of steam.

For the past 6 months, however, and due in large part to its superior speed compared to Ethernet and FC, with a 10Gb/s transfer rate over copper wires, it has been making a comeback in high-performance computing (HPC) clusters, if the following developments are any indication:

  • Oracle announced IB support for its forthcoming Oracle Database 10g.
  • IBM signed a 5-year agreement with Topspin Communications for IB switches for use with IBM eServer pSeries, zSeries, iSeries, and xSeries systems.
  • Dell Computer, HP and NEC also selected Topspin as their IB technology provider for HPC solutions.
  • Intel demonstrated a >1-teraflop Xeon and IB cluster, while HP showed IB over HP-UX at the Supercomputing Conference.
  • Sun Microsystems’ HPTC group launched new IB support.
  • Several of Apple Computer's PowerMac G5s with IB power the Virginia Tech cluster.
  • Los Alamos Lab deployed a 256-node, and Sandia National Labs a 128-node, IB cluster from LinuxNetworx.
  • Silicon Graphics will use Voltaire products for clustering Altix 350 servers.
  • Voltaire and SBS Technologies now deliver 24-port 10Gb/s IB switches at approximately $300 per port.
  • Infinicon, which provides a 128-node IB-based network fabric for HPC, has been selected by Fujitsu for Japan's Riken 2,000-node InfiniBand cluster.
  • Mellanox surpassed the 20,000-IB-port milestone and plans to boost the next-generation PCIe technology with IB.
  • The InfiniBand Trade Association began working on extending the interconnect technology beyond its current limit of 30Gb/s, with the goal of surpassing 100Gb/s.

This article is an abstract of news published in issue 194, March 2004, of the former paper version of Computer Data Storage Newsletter.
