
WW Distributed Scale-Out File System Market

Dell, NetApp, Qumulo and Pure Storage leaders

This study was written by Eric Burgener, research VP, infrastructure systems, platforms and technologies group, IDC Corp.

WW Distributed Scale-Out File System 2022 Vendor Assessment

[Figure 1: IDC MarketScape WW Distributed Scale-Out File System 2022 Vendor Assessment]

Over the next 5 years, scale-out file systems will be widely deployed by enterprises looking to consolidate file-based workloads, improve file-based infrastructure efficiencies, and handle many of the performance and scalability requirements of modernized applications that are very data intensive.

All of the products evaluated here will be able to do that very well for most enterprises, although there are some differences between offerings in top-end performance, scalability, and ease of use – that is why the figure at the top of this article shows many of the vendors clustered closely together. What the reader should note, however, is that there can be significant differences between vendors in their architectures, product strategies, areas of focus, and software-defined flexibility that should be evaluated as purchase decisions are made.

The “Advice for Technology Buyers” section is probably the most important section to read for those who will be involved in making a purchase decision. This section introduces a number of strategic questions enterprises should ask themselves when determining what is most important in selecting a scale-out file system offering.

As an example, all evaluated products can support a 1PB file system, but what each system looks like, how easy it is to manage and upgrade, how much it costs and, in general, how it gets there can be very different. There is no “best” offering in this market, but there are certain products that are better suited for certain workloads and will cater better to certain objectives like top-end performance and scalability, ease of use and management, lower energy and floorspace consumption, hybrid cloud capabilities, and how different access methods are supported.

Enterprises can expect a lot more innovation to occur in the scale-out file market going forward, driven primarily by the fact that 80% of the data that will be created over the next 5 years will be file and/or object based. If enterprises just need to simplify basic file sharing (home directories, etc.), there are a lot of very viable options. Modernized applications, particularly those using AI or those which are very data intensive, will have additional demands that may not be well met by the simpler products, and that’s where enterprises will need to turn to true distributed scale-out file system platforms.

IDC marketscape vendor inclusion criteria
This study assesses the capabilities and business strategies of popular suppliers in the distributed scale-out file-based storage market segment. For a complete definition of distributed scale-out file systems (and a discussion of the new file-based storage taxonomy that IDC introduced in July 2021), see Reclassifying File Storage – A New Approach for the Future of Digital Infrastructure (IDC #US48051221, July 2021). This evaluation is based on a comprehensive framework and a set of parameters that gauge the success of a supplier in delivering a scale-out file-based storage solution to the enterprise market.

To be evaluated in this study, a vendor needs to have a scale-out file-based storage platform:

  • That conforms to IDC’s taxonomy. According to Reclassifying File Storage – A New Approach for the Future of Digital Infrastructure, assessed products need to meet the definition of a distributed scale-out file system platform or a clustered scale-up file system that is sold primarily against distributed scale-out file systems.
  • Whose IP is fully owned by the vendor. The vendor being assessed has developed the distributed scale-out file-based storage solution in-house or obtained the technology through acquisition.
  • That was generally available by September 2021 and generates at least $30 million in annual revenue. This is to ensure that the vendor product has at least some level of maturity and market traction.

Advice for technology buyers
Given that the vendors in this assessment are using widely varying product strategies, an important place to start the evaluation process for an enterprise is to understand which of the different approaches appeal to the enterprise and/or are a better fit for its needs. Do you like the idea of being able to manage block-, file-, and object-based workloads on the same storage system through a unified management interface? Do you prefer unified storage (which can avoid semantic loss issues but will use more storage capacity to provide multiprotocol access to the same data object) or multiprotocol access (which uses less storage capacity but where semantic loss may be an issue)? Are you a federal agency that requires FIPS 140-2 compliant encryption? Do you prefer a storage architecture built around server-based storage nodes or are you open to different architectures that may offer differentiators in certain environments? Six of the vendors assessed use server-based storage nodes (although some of them have some proprietary content), while NetApp and Pure use different architectures.

Would you prefer to use traditional access methods like NFS and SMB but also have access to an intelligent client that offers significantly more parallelization if/when you might need it? Other vendors will tell you how they’ve extended the performance capabilities of NFS over TCP beyond the 2GB/s limit per mount point with nconnect or features specific to their platform that still use the standard NFS client (for example) so you don’t have to deploy an intelligent client. Do you require NDMP support? Are you interested in the idea of a cacheless architecture that can offer very high degrees of data concurrency or do more traditional cache-based architectures meet your needs just fine? Do you need POSIX compliance? POSIX really isn’t the future, but there are hundreds of thousands of already deployed applications that use it.
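To make the nconnect trade-off above concrete, here is a back-of-envelope sketch, in Python, of how many TCP connections a single mount point would need to hit a target throughput. It assumes ideal linear scaling across connections, and the 2GB/s per-connection ceiling comes from the figure cited above – it is an illustrative assumption, not a benchmark of any particular product.

```python
import math

def connections_needed(target_gbps: float, per_conn_gbps: float = 2.0) -> int:
    """Estimate how many TCP connections (e.g. via the NFS nconnect mount
    option) are needed to reach a target throughput, assuming ideal linear
    scaling and the ~2GB/s-per-connection ceiling mentioned above."""
    return math.ceil(target_gbps / per_conn_gbps)

# To sustain 10GB/s to one mount point under these assumptions:
print(connections_needed(10))  # 5
```

In practice, scaling is sublinear and the Linux NFS client caps nconnect at a small number of connections, which is exactly why some vendors offer an intelligent client for workloads that need more parallelism than a standard NFS mount can deliver.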

Do you have a preference for an HCI-based architecture (like Cohesity or Nutanix) or a disaggregated storage approach? Do you want to buy your solution from a major OEM (Cisco sells Cohesity, Dell sells Nutanix, and HPE sells Qumulo) or would you prefer to buy it from the developing vendor directly (or a channel partner of theirs)? Do you like the idea of combining data protection and enterprise file sharing under a single system or not? While this is not an exhaustive list of questions, these are the kinds of questions an IT manager should ponder when evaluating scale-out file systems for enterprise workloads.

High availability (HA) is important for most enterprise workloads, and enterprise file sharing is no exception. Solutions that have been around for a long time tend to have an extensive, proven feature set in this area. Understand your recovery point objectives (RPOs) and recovery time objectives (RTOs) for both local recovery and DR, and match them against the capabilities of the scale-out file system offerings.

Tunable erasure coding (EC), which lets data durability and capacity utilization be set differently for different workloads; snapshots; replication; a simple “snap to object” feature that makes it easy to back up the entire namespace to an external object store; air-gap protection to defend against ransomware; and integration with third-party backup products like Commvault and Veritas are all features that can impact data protection workflows, availability, and recovery times.
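The value of tunable EC comes down to simple arithmetic: wider stripes buy better capacity efficiency at the same failure tolerance. A minimal sketch of that trade-off, with illustrative k+m schemes not tied to any vendor:

```python
def ec_overhead(data_shards: int, parity_shards: int) -> float:
    """Raw-to-usable capacity ratio for a k+m erasure-coding scheme:
    every k units of user data consume k+m units of raw capacity."""
    return (data_shards + parity_shards) / data_shards

# Both schemes below survive 2 device failures, but the wider stripe
# stores the same data in less raw capacity.
print(ec_overhead(4, 2))  # 1.5  (4+2: 50% overhead)
print(ec_overhead(8, 2))  # 1.25 (8+2: 25% overhead)
```

At multi-petabyte scale, the difference between these ratios translates into a large amount of raw capacity, which is why per-workload tunability matters; the usual counterweight is that wider stripes lengthen rebuild times.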

Ease of management at scale is another differentiating area. There are many challenges in managing scale-out file system environments, and there has been a lot of employee interchange between the various scale-out file system players over the past 20 years. The challenges are well known to all vendors, but how they address them varies.

If you have managed a scale-out file system before, what are your hot-button issues?

  • Do you need absolutely the lowest latencies for random small file accesses or are sub-millisecond average response times good enough?
  • Are you trying to consolidate workloads across your data staging pipelines that need both native and intelligent client-based access methods?
  • Do you want to be able to rapidly create delta differentials for backup purposes without having to walk all the file trees?
  • Do you want on-disk data protection options with particularly low capacity overhead at your target level of durability because you have multiple petabytes of data under management?
  • Do you need support for compression and/or de-dupe because your data sets can benefit from these technologies (or not, since much unstructured data does not compress and/or de-dupe very well)?
  • Are disruptive upgrades and slow disruptive recovery in SMB environments a particular pain point?
  • Are you particularly concerned about large capacity drive rebuild times or how easy and nondisruptive it is to expand the cluster by adding a new node?
  • Are you concerned about how easy and efficient it is to use file quota management systems?

These (and many more) are all issues many scale-out file system administrators have struggled with.

The key to selecting a platform best suited for your requirements is to thoroughly understand your needs and preferences up front. The vendors assessed here all provide a range of performance, scalability, availability, and core functionality that meet the requirements for most enterprise file-based workloads, but among the eight vendors, there are very different ways to get there and very different emphases in their product designs. List what is most important to you, and map that to the vendor offerings. Doing that will require going beyond this document since we do not provide direct head-to-head comparisons between vendors. IDC has, however, published a number of technical reviews of different vendor offerings in separate research, discussing the benefits of the approaches they have taken.
