DDN HPC Storage Powers UK Wellcome Trust Sanger Institute
Part of 22PB genomic storage environment
This is a Press Release edited by StorageNewsletter.com on October 7, 2013, at 2:26 pm.
To accelerate advancements in biomedical research, the Wellcome Trust Sanger Institute, a charitably funded genomic research center based in the United Kingdom, has deployed DataDirect Networks, Inc.'s (DDN) high-performance storage as part of a 22PB genomic storage environment.
As one of the world's top five scientific institutions specializing in DNA sequencing, the Sanger Institute embraces the latest technologies to research the genetic basis of global health problems, including cancer, malaria, diabetes, obesity and infectious diseases.
To manage the surge in the volume of data required to evaluate genetic sequences, the institute selected DDN's SFA high-performance storage engine and EXAScaler Lustre file system appliance, which deliver the throughput and scalability needed to support tens of thousands of data sequences, each requiring up to 10,000 CPU hours of computational analysis.
DDN SFA storage will also help facilitate data access and sharing for the more than 2,000 scientists around the world who use the institute's data, including those who access it through the Sanger Institute's website, which receives 20 million hits and 12 million impressions each week.
Each of the 30 DNA sequencers in the Sanger Institute's Illumina Production Sequencing core facility produces about 1TB of data daily. With DDN technology, the Sanger Institute has an easy-to-manage, integrated system that scales to address both complex computing problems and the ever-changing collaboration requirements of its research.
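For a sense of scale, the figures quoted above can be turned into a back-of-envelope growth estimate. This is a hedged sketch that simply restates the release's numbers (30 sequencers, ~1TB each per day); it is not an official DDN or Sanger Institute projection.

```python
# Back-of-envelope estimate of raw sequencer output, using the figures
# quoted in this release. Assumes steady daily output from all machines.
SEQUENCERS = 30
TB_PER_SEQUENCER_PER_DAY = 1  # ~1 TB/day per machine, per the release

daily_tb = SEQUENCERS * TB_PER_SEQUENCER_PER_DAY   # 30 TB/day
yearly_pb = daily_tb * 365 / 1000                  # decimal TB -> PB

print(f"Daily output:  {daily_tb} TB")             # 30 TB
print(f"Yearly output: {yearly_pb:.2f} PB")        # ~10.95 PB
```

At that rate, raw sequencer output alone would fill roughly half of the 22PB environment in a single year, which illustrates why non-disruptive expansion features so prominently in the deployment.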
DDN’s experience serving some of the world’s fastest computers ensures that the Sanger Institute can deliver a high level of compute performance and throughput, as well as maximum system uptime, to get the most from the latest sequencing technologies. This is critical, as today’s sequencers produce a million times more data than those used a decade ago.
The institute can now provide its diverse scientific community with a tool for leveraging its approximately £80 million research budget to the fullest, furthering the pursuit of groundbreaking scientific and medical discoveries.
DDN Infrastructure Supports Diverse Research Workloads
- With DDN storage, the Sanger Institute can achieve its goal of supporting different research workloads with a range of computational analysis and storage requirements while being able to expand quickly and without disruption.
- Since installing its initial SFA storage platform, the institute has kept pace with ever-increasing computational and analytical demands by taking advantage of DDN’s ongoing performance increases, achieving speeds of up to 20GBps to meet the needs of its most demanding workloads.
- To accommodate demands for increased bandwidth, it is upgrading its 10GbE network to 40GbE and plans to scale its current DDN storage to support expanded network capacity.
- Additionally, it is exploring DDN’s WOS distributed object storage platform for increased collaboration and data sharing as part of a private cloud.
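The quoted 20GBps figure can also be put in context with some simple arithmetic. The sketch below only combines numbers stated in this release (20GBps aggregate bandwidth, 30 sequencers at ~1TB/day each) and is illustrative, not a benchmark result.

```python
# Rough throughput arithmetic based on figures quoted in this release.
# Assumes the full 20 GB/s aggregate bandwidth is available to one job,
# which is an idealization, not a measured result.
BANDWIDTH_GB_PER_S = 20       # quoted aggregate speed
GB_PER_TB = 1000              # decimal units, as storage vendors count

seconds_per_tb = GB_PER_TB / BANDWIDTH_GB_PER_S          # 50 s per TB
daily_output_tb = 30 * 1                                 # 30 machines x 1 TB
minutes_for_daily_output = daily_output_tb * seconds_per_tb / 60

print(f"Streaming 1 TB takes ~{seconds_per_tb:.0f} s")            # ~50 s
print(f"A full day's output streams in ~{minutes_for_daily_output:.0f} min")  # ~25 min
```

In other words, at the quoted aggregate rate, an entire day of raw sequencer output could in principle be read or written in under half an hour, leaving the bulk of the bandwidth for the analysis workloads described in the quotes below.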
Tim Cutts, acting head of scientific computing, Wellcome Trust Sanger Institute, said: “If you need 10,000 cores to perform an extra layer of analysis in an hour, you have to scale a significant cluster to get answers quickly. You need a real solution that can address everything from very small to extremely large data sets. We have to explore emerging technologies that could play a significant role in our future architecture. We need solutions that give us a much better way to provide storage to our expanding user community with good access controls through iRODS.”
Phil Butcher, director of information communications technology, Wellcome Trust Sanger Institute, said: “The sequencing machines that run today produce a million times more data than the machine used in the Human Genome Project. We produce more sequences in one hour than we did in our first 10 years. For instance, a single cancer genome project sequences data that requires up to 10,000 CPU hours for analysis and we’re doing tens of thousands of these at once. The sheer scale is enormous and the computational effort required is huge. Our storage strategy gives us incredible scaling. If we need to add a new sequencer, we can expand quickly and without disruption.”