
NASA Scales SGI Pleiades IB Cluster With Mellanox

Utilizing 25,000 Intel Xeon processor cores

Silicon Graphics International Corp. announced that, with over 60 miles of InfiniBand cabling in place at the NASA Advanced Supercomputing Division at NASA Ames Research Center in Moffett Field, CA, a scientist was able to use 25,000 Intel Xeon processor cores of the SGI ICE Pleiades system to run a space weather simulation.


One particular area of study is magnetic reconnection, a physical process in highly conducting plasmas, such as those in the Earth’s magnetosphere, in which the magnetic topology is rearranged and magnetic energy is converted to kinetic or thermal energy. This field of research is critical, as these disturbances can disable large-scale power grids, affect satellite transmissions, and disrupt airline communications.

"We study the efficiency of reconnection under different conditions in the magnetosphere," said Dr. Homa Karimabadi, space physics group leader at the University of California, San Diego. "This
includes such things as the amount of mixing of the plasma from the
solar wind and from the magnetosphere. The team then uses the
local-scale details to improve models of magnetic reconnection in
‘global’ simulations encompassing a region starting at about three earth
radii and extending to 30 to 200 earth radii, with the Earth’s radius
being about 6400 kilometers
."

As detailed in the article "Cracking the Mysteries of Space Weather" by Jarrett Cohen of NASA, Earth is mostly protected from solar flares, coronal mass ejections, and other space weather events by the magnetosphere, a magnetic field cocoon that surrounds it. But sometimes Earth’s magnetosphere ‘cracks’ and lets space weather inside, where it can cause damage. Getting space weather details right means capturing everything from the 1.28-million-kilometer-wide magnetosphere down to subatomic-scale electrons. Doing that in one simulation would require supercomputers more than 1,000 times faster than those available today, so the research team breaks the problem into two parts. They start with ‘local’ simulations that include full electron physics of regions in the magnetosphere where reconnection is known to occur, followed by ‘global’ simulations.
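
To see why a single simulation spanning all scales is out of reach, a back-of-envelope grid count helps. This is a hedged sketch, assuming, purely for illustration, that resolving electron kinetics demands roughly 1 km resolution:

```python
# Rough feasibility estimate; the 1 km electron-scale resolution is an
# illustrative assumption, not a figure from the article.
domain_km = 1.28e6          # magnetosphere extent cited above
electron_scale_km = 1.0     # assumed resolution needed for electron physics

cells_per_dim = domain_km / electron_scale_km
total_cells = cells_per_dim ** 3            # a 3D grid at that resolution
print(f"cells per dimension: {cells_per_dim:.2e}")   # ~1.3e6
print(f"total 3D grid cells: {total_cells:.2e}")     # ~2.1e18
```

Even before adding any particles, a grid of roughly 10^18 cells dwarfs what petascale machines can hold in memory, which is why the local/global split is necessary.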

Accessing up to 25,000 processor cores on Pleiades, Dr. Karimabadi said that his group can run ‘kinetic’ simulations that treat each electron with its full properties, to understand how electrons allow reconnection to occur. In the local simulations, electrons are treated as individual particles; in the global simulations, electrons are treated as fluids and ions (electrically charged atoms) as particles. With Pleiades, simulations can run for five days straight, enabling many parameter studies. Among recent findings is that magnetic reconnection by itself is quite turbulent, producing vortices in the plasma that create many interacting flux ropes, twisted bundles of magnetic field. As observed by spacecraft, flux ropes can extend to several times the radius of Earth.
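
The kinetic approach described above is typically implemented with the particle-in-cell (PIC) method. Below is a minimal 1D electrostatic PIC sketch in Python; it is not the team's production code, just an illustration of the deposit/solve/push cycle in which each electron is advanced individually:

```python
import numpy as np

# Minimal 1D electrostatic particle-in-cell (PIC) loop in normalized units
# (plasma frequency = 1), with a uniform neutralizing ion background.
ng = 64                  # grid cells
L = 2 * np.pi            # periodic domain length
dx = L / ng
n_part = 10000           # electron macro-particles
dt = 0.1
qm = -1.0                # electron charge-to-mass ratio (normalized)
weight = L / n_part      # particle weight so mean electron density is 1

rng = np.random.default_rng(0)
x = rng.uniform(0, L, n_part)            # positions
v = 0.1 * np.sin(2 * np.pi * x / L)      # small velocity perturbation

def deposit(x):
    """Cloud-in-cell deposition of electron charge, plus ion background."""
    s = x / dx
    i = np.floor(s).astype(int) % ng
    f = s - np.floor(s)
    counts = np.zeros(ng)
    np.add.at(counts, i, 1 - f)
    np.add.at(counts, (i + 1) % ng, f)
    return 1.0 - counts * weight / dx    # net charge density (ions - electrons)

def solve_field(rho):
    """Solve Gauss's law dE/dx = rho spectrally on the periodic grid."""
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])   # k = 0 mode (mean field) stays zero
    return np.fft.ifft(E_k).real

def gather(E, x):
    """Interpolate the grid field back to each particle's position."""
    s = x / dx
    i = np.floor(s).astype(int) % ng
    f = s - np.floor(s)
    return E[i] * (1 - f) + E[(i + 1) % ng] * f

for step in range(200):
    E = solve_field(deposit(x))
    v += qm * gather(E, x) * dt          # accelerate each electron individually
    x = (x + v * dt) % L                 # move particles, periodic wrap-around

print("final field energy:", 0.5 * np.sum(E**2) * dx)
```

A production magnetospheric code runs the same cycle in 3D with electromagnetic fields and billions of particles, which is what consumes the 25,000 cores.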

Space weather is just one application running in the petascale range today that will benefit from the move to exascale computing. SGI is committed to bringing exascale solutions to market in an open computing paradigm in which InfiniBand will play an integral role by providing the key interconnect elements. Having demonstrated that InfiniBand can enable petaflop-sized systems at NASA, SGI will also be partnering with Total to bring InfiniBand into the multi-petaflop commercial space. Coupled with software tools such as SGI Management Center and SGI Performance Suite, as well as big data InfiniteStorage solutions, SGI is positioned to offer an optimal user experience for multi-petaflop deployments moving toward the exascale range.

"The growing complexity of problems that Pleiades is expected to
solve is why we remain committed to developing and providing exascale
solutions,
" said Praveen K. Mandal, senior vice president of engineering at SGI. "NASA
has been a long-time partner of SGI in pushing the boundaries of
technology development and scientific discovery. For five generations,
SGI ICE has tightly integrated InfiniBand technology to achieve top
performance and scaling capabilities in a completely open networking
architecture, and at the same time, with five different networking
topologies, ICE is flexible enough to fit all the workloads that benefit
science
."

Maximizing productivity on today’s HPC cluster platforms requires enhanced data messaging techniques. On Pleiades, every node has a direct InfiniBand connection to the rest of the network, which is why Pleiades has the largest InfiniBand network of any HPC system on the current Top 500 list. By providing low latency, high bandwidth, and a high message rate, high-efficiency interconnects such as InfiniBand are replacing proprietary or lower-performance solutions as the high-speed interconnect for large-scale simulations such as those conducted by NASA.
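
What ‘low latency’ means in practice is usually measured with a simple ping-pong test between two nodes. Below is a minimal sketch using mpi4py, assuming an MPI stack built over the InfiniBand fabric; it is a generic benchmark pattern, not NASA's tooling:

```python
from mpi4py import MPI
import numpy as np

# Ping-pong between ranks 0 and 1: run with `mpirun -np 2 python pingpong.py`.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n_iters = 1000
buf = np.zeros(8, dtype=np.uint8)    # tiny message, so latency dominates

comm.Barrier()                       # start both ranks together
t0 = MPI.Wtime()
for _ in range(n_iters):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    # One-way latency is half the mean round-trip time; InfiniBand
    # fabrics typically land in the low-microsecond range.
    print(f"mean one-way latency: {elapsed / (2 * n_iters) * 1e6:.2f} us")
```

Sending larger buffers in the same loop turns this into a bandwidth test; together the two numbers characterize the interconnect for simulation traffic.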

"The scalable architectural design of the SGI ICE platform provides
an excellent vehicle for showcasing the performance, scalability and
efficiency of Mellanox InfiniBand server and storage interconnect
solutions
," said David Barzilai, vice president of marketing at Mellanox Technologies, Inc.
"Utilizing InfiniBand’s ability to efficiently scale to 10,000 and more
server nodes, Pleiades is able to run space weather simulations of
greater complexity faster and with more precision
."

The Pleiades supercomputer is ranked #7 among the most powerful HPC systems in the world, based on the Top 500 list published in November 2011.
