
SDSC Boosts HPC Comet’s Capabilities With Seagate 800GB SAS SSDs

The addition of the 800GB Seagate SAS SSDs will boost Comet, a petascale supercomputer at San Diego Supercomputer Center (SDSC), by expanding its node-local storage capacity for data-intensive workloads.

Comet

Pairs of the drives will be added to all 72 compute nodes in one rack of Comet, alongside the existing SSDs. This will bring the flash storage in a single node to almost 2TB with total rack capacity at more than 138TB.
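The capacity figures above can be checked with a quick back-of-the-envelope calculation, assuming each node already holds the 320GB of local flash cited in the system specs later in this article and gains a pair of the 800GB Seagate SSDs:

```python
# Sketch of the per-node and per-rack flash capacity quoted above.
# Assumptions: 320 GB of existing local flash per node (from the system
# specs later in the article) plus two new 800 GB Seagate SAS SSDs.
existing_flash_gb = 320
new_ssd_gb = 800
ssds_per_node = 2
nodes_per_rack = 72

per_node_gb = existing_flash_gb + ssds_per_node * new_ssd_gb  # 1,920 GB, i.e. almost 2 TB
per_rack_tb = per_node_gb * nodes_per_rack / 1000             # 138.24 TB, i.e. more than 138 TB

print(per_node_gb, per_rack_tb)
```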

The installation process of the Seagate drives began in October under a donation arrangement with SDSC and its Center for Large Scale Data Systems Research (CLDS), an industry-university collaboration that focuses on issues including ‘big data’ system architectures and software, analytics, and performance, with the goal of understanding the value that can be extracted from voluminous amounts of data becoming available to organizations. User access to the drives will begin before January 2016.

Under the partnership, Seagate and SDSC/CLDS are deploying a lightweight framework for extracting metrics suitable for the analysis of data I/O patterns and overall drive performance. These results and other metrics will be used to further develop best practices and reference HPC architectures, while leading to more precise analyses of SSD performance in operational HPC workloads.

“The addition of the Seagate SSDs on Comet continues the work we pioneered in the area of system versatility around flash-based SSDs on our Gordon supercomputer,” said SDSC director Michael Norman, also principal investigator for the Comet project. “It complements Comet’s other dimensions, such as its fast Lustre parallel file storage systems and large memory nodes. With such a wide range of workflows in both traditional and emerging science domains such as genomics, the greater research community will benefit from these heterogeneous but integrated capabilities.”

The new drives will also extend the abilities of Comet’s upcoming virtualized HPC clusters.

“Currently, some VMs on Comet can have large disk images that take advantage of the fast local storage on the compute nodes hosting them,” said Rick Wagner, SDSC’s manager of HPC systems. “Groups using VMs will be able to store more data inside of their VMs, closer to their custom application stacks.”

Comet is capable of an overall peak performance of almost two petaflops, or two million billion operations per second. It has the ability to perform 10,000 research jobs simultaneously. Like the tail of a comet, SDSC’s newest HPC cluster is intended to serve what’s called the ‘long tail’ of science: the idea that the large number of modest-sized computationally-based research projects represent, in aggregate, a tremendous amount of research that can yield scientific discovery.

“The goal of our partnership with SDSC is to inform the wider HPC community via papers and workshops on how to select the most appropriate, high-performance components suitable to their architectures and workloads, while gaining insight into how Seagate SSDs are used in domains that are relying on advanced computation and storage, such as genomics and the social sciences,” said Tony Afshary, director of ecosystem solutions and marketing for flash, Seagate.

The result of a National Science Foundation grant worth nearly $24 million including hardware and operating funds, Comet is available for use by US academic researchers through the NSF’s eXtreme Science and Engineering Discovery Environment (XSEDE) program, a national collection of advanced, integrated digital resources and services.

Comet is a Dell-integrated cluster using Intel’s Xeon Processor E5-2600 v3 family, with two processors per node and 12 cores per processor running at 2.5GHz. Each compute node has 128GB of traditional DRAM and 320GB of local flash memory. Since Comet is designed to optimize capacity for modest-scale jobs, each rack of 72 nodes (1,728 cores) has a full bisection IB FDR interconnect from Mellanox, with a 4:1 over-subscription across the racks. There are 27 racks of these compute nodes, totaling 1,944 nodes or 46,656 cores.
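The node and core counts above follow directly from the per-node configuration; a minimal sketch of that arithmetic, using only the figures stated in this article:

```python
# Back-of-the-envelope check of Comet's node and core counts as described
# above: two Xeon E5-2600 v3 processors per node, 12 cores per processor,
# 72 nodes per rack, 27 racks.
cores_per_node = 2 * 12          # two sockets x 12 cores

nodes_per_rack = 72
racks = 27

cores_per_rack = cores_per_node * nodes_per_rack  # 1,728 cores per rack
total_nodes = nodes_per_rack * racks              # 1,944 nodes
total_cores = cores_per_node * total_nodes        # 46,656 cores

print(cores_per_rack, total_nodes, total_cores)
```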

In addition, Comet has four large-memory nodes, each with four 16-core sockets and 1.5TB of memory, as well as 36 GPU nodes, each with four NVIDIA GPUs (graphic processing units). The GPUs and large-memory nodes are for specific applications such as visualizations, molecular dynamics simulations, or de novo genome assembly.
