
Panasas PanFS With Dynamic Data Acceleration Technology to Support Diverse and Changing Workflows in HPC and AI

Automatically adapts to evolving workloads to deliver a fast, total-performance HPC storage solution.

Panasas Inc. released Dynamic Data Acceleration, a proprietary software feature in the new version of its PanFS parallel file system that delivers predictable high performance by automatically adapting to the evolving small-file and mixed workloads that dominate today’s HPC and AI landscape.

Inconsistent performance and a lack of adaptability in the face of change have been major headaches for both application users and storage administrators. PanFS with Dynamic Data Acceleration is the remedy, and the answer for HPC and enterprise IT organizations looking for a high-performance, plug-and-play storage solution that keeps up with their pace of change.

“Diversity and change are the major watchwords in HPC applications today,” said Addison Snell, CEO at Intersect360 Research. “Technical computing has always raced to outdo itself year after year in pursuit of scientific or engineering advancement. Now data science and machine learning have also broadened the aperture of the types of workloads administrators have to manage. A high-performance storage system has to be able to incorporate and adapt to this kind of dynamic environment.”

What is Dynamic Data Acceleration?
PanFS with Dynamic Data Acceleration takes the complexity and manual intervention of tiered HPC storage systems off the system administrator’s hands, maximizing the efficiency of all storage media in a seamless, total-performance system that automatically adapts to changing file sizes and workloads. In this integrated system, NVMe SSDs store metadata, low-latency SSDs store small files, and large files are stored on low-cost, high-bandwidth HDDs. By dynamically managing the movement of files between SSD and HDD, and maximizing the full potential of NVMe, PanFS delivers the highest possible performance for HPC and AI workflows.
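As a rough illustration of this size-based placement policy, here is a minimal sketch. PanFS's actual logic is proprietary; the SMALL_FILE_LIMIT threshold, tier names, and helper functions below are illustrative assumptions, not published Panasas values.

```python
# Minimal sketch of the size-based tier placement policy described above.
# The threshold and tier names are assumptions for illustration only.

SMALL_FILE_LIMIT = 1 << 20  # 1 MiB cutoff; hypothetical, not a Panasas figure

def placement_for(size_bytes: int) -> str:
    """Pick a storage tier for file data based on its size."""
    if size_bytes <= SMALL_FILE_LIMIT:
        return "ssd"   # small files: low-latency flash
    return "hdd"       # large files: low-cost, high-bandwidth disk

def metadata_tier() -> str:
    """In the described design, metadata always lives on NVMe SSDs."""
    return "nvme"

def migrate_if_needed(current_tier: str, size_bytes: int) -> str:
    """As a file grows or shrinks across the threshold, move it to the tier
    the policy now prescribes: the 'dynamic' part of the scheme."""
    target = placement_for(size_bytes)
    if target != current_tier:
        # ...copy data to the target tier, then retire the old copy...
        return target
    return current_tier

print(placement_for(4 * 1024))        # small file -> "ssd"
print(placement_for(500 * (1 << 20))) # large file -> "hdd"
```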

Difference between PanFS with Dynamic Data Acceleration and other HPC storage systems
All other parallel file systems require clumsy tiering and/or manual tuning to compensate for specific workload characteristics and changes. The approach of piecing together various tiers to achieve performance leads to a level of complexity and inconsistent performance that negatively impacts productivity and increases overall costs. In contrast, PanFS with Dynamic Data Acceleration automatically adapts to changing file sizes and workloads without tuning or manual intervention, delivering a consistent and fast total-performance HPC storage solution.

“The rate of change in high-performance workloads and the extension of parallel file systems to AI and enterprise use cases call for a file system that is predictably fast, resilient and reliable in the face of change,” said Jim Donovan, chief marketing officer. “Adding Dynamic Data Acceleration to the latest version of the PanFS parallel file system on ActiveStor Ultra delivers an HPC storage solution that will remain consistently fast as your workloads change.”

PanFS is deployed on the Panasas ActiveStor Ultra turnkey HPC appliance, built on commodity off-the-shelf (COTS) hardware, to deliver an integrated storage solution that offers performance at any price point.

The PanFS with Dynamic Data Acceleration architecture white paper is available for download from Panasas.

Comments

Two philosophies clash here: all-flash models and hybrid ones. Some vendors promote all-flash across one or multiple products, positioning flash for both production and protection tiers and promoting a better $/TB through aggressive data reduction techniques coupled with QLC flash and other components. Others articulate a simpler approach with hybrid architectures that couple SSDs and HDDs in a transparent multi-layer model (tiering, caching, etc.).

This Dynamic Data Acceleration announcement from Panasas confirms what the company has promoted for quite some time: a hybrid model aligned to the sizes of files injected into the system. In other words, the idea is an HPC storage system built on NVDIMMs, SSDs, and HDDs that stores files on the "right" device based on a size criterion. Small files are kept on SSDs and large ones on HDDs, the latter still delivering good bandwidth thanks to a wide stripe width.
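A back-of-the-envelope sketch of why wide striping lets HDDs still deliver good bandwidth for large files follows; the per-drive throughput and stripe width are illustrative assumptions, not Panasas specifications.

```python
# Rough arithmetic: aggregate bandwidth of one large file striped across HDDs.
# Both numbers are assumptions for illustration, not Panasas figures.

hdd_stream_mb_s = 200   # assumed sustained sequential throughput per HDD, MB/s
stripe_width = 8        # assumed number of HDDs the file is striped across

aggregate_mb_s = hdd_stream_mb_s * stripe_width
print(f"~{aggregate_mb_s} MB/s for one large-file stream")  # ~1600 MB/s
```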

Panasas markets this model as one of the best in $/TB, even delivering better throughput than other modern parallel file system products. It also promotes a simple architecture with everything integrated, without the need to couple external elements to build a multi-layer configuration. Here again there are two distinct philosophies: a pure software model deployed on commodity hardware, connected to other products and presented as the storage farm; and a second with flexible software installed on controlled systems with qualified components to guarantee performance, everything appearing integrated in the machines. Sizing is of course critical in both approaches, and scalability happens at different levels.

Established vendors are also dusting off their products: WekaIO is clearly shaking up a rather static, slow-moving market landscape; DDN insists on Lustre while wishing to migrate its Spectrum Scale business, GRIDScaler being absent from the DDN product web page; fast NAS products from Qumulo and Vast Data appear among the HPC top sites listed in the IO500; BeeGFS is shaking the leaders' positions, confirming once again the power of open source and the communities behind it; Asian players have their role in the HPC performance race; and of course the cloud has NAS and HPC offerings. Just on the HPC side, AWS lists FSx for Lustre, strengthened by the acquisition of KMesh, plus external products such as DDN, IBM Spectrum Scale, WekaIO, BeeGFS, and OrangeFS. On GCP, Google offers and recommends DDN, and Quobyte is present. On Azure, there are DDN and BeeGFS.

But at a macro level, the HPC landscape has evolved recently, illustrated by two moves: the first toward commercial HPC, essentially to expand business footprint and installed base, and the second toward AI, which really serves as a new area of growth that legitimizes the HPC storage approach. We implicitly include here vendors and products such as Panasas, WekaIO, those related to open-source Lustre such as DDN and HPE/Cray, others based on BeeGFS, and of course some integrated with IBM Spectrum Scale (formerly GPFS).

In fact, this new, fast-growing AI domain is not limited to HPC storage; some high-end NAS products are also very active in that space, from Dell EMC, NetApp, Pure Storage, Qumulo, and Vast Data. AI introduces a new level of stress on storage, with some similarities to HPC such as bandwidth needs and mixed file access patterns and sizes, but a radically different, read-intensive model. AI is definitely the new battlefield for file storage.
