
2020 Predictions From 50 Storage Vendors – Part Two

Top 3: cloud, NVMe, Kubernetes

A few days after publishing the 2019 vendors' facts and review, we publish our classic annual vendors' predictions for 2020.

We collect 50 opinions and consolidate them to identify the major trends and directions for the storage industry:

  1. Without any surprise, cloud arrives at #1 in various flavors (multi, hybrid, on-premises and private, and we choose to add the edge to that category)
  2. New devices and connectivity around flash, NVMe(-oF), SCM/PM, QLC
  3. Kubernetes, of course, and containers
  4. Analytics and AI
  5. Object storage

Here is the second part of this report; the first was published yesterday.

Nasuni (Russ Kennedy, CPO)
1. Prepare for even faster exponential data growth
Data is already doubling every 2-3 years. In 2020, the rate of data growth will accelerate, driven by higher resolution mobile phone cameras, the growth of 4K video, increased video surveillance, genomics research, IoT machine data, 3D medical images… and so many more. Both the number and size of files will increase at an ever faster rate, and IT organizations need to prepare now to store, protect and provide access to it all.
2. Anticipate continued “cloud-washing” by incumbent storage vendors
Incumbent vendors' storage systems were designed for on-premises environments, but as companies move more of their workloads to the cloud, the market for on-premises gear is shrinking. Unfortunately, vendors' efforts to connect or port these systems to the cloud don't necessarily make things more efficient or easier to use. And in some cases, they actually make IT harder to manage, especially if applications and data are still partly on-prem, and you're trying to provide access to users around the world.
3. Consolidation of cloud-based companies will accelerate
The incredible success and growth of AWS and Azure have other companies eager to carve out their own slice of the cloud. As a result, cloud-friendly technologies are highly coveted, especially by large, traditional IT vendors. Expect to see a lot more M&A activity around cloud-based companies in 2020, as we recently witnessed with the acquisition of Veeam.

NGD Systems (Scott Shadley, VP marketing)
1. Move less and analyze more at the edge
NVMe has provided a measure of relief and proven to remove existing storage protocol bottlenecks for platforms churning out terabytes and petabytes of data on a regular basis. But, is that enough? Even though NVMe is substantially faster, it is not fast enough by itself when petabytes of data are required to be analyzed and processed in real time. This is where computational storage comes in and solves the problem of data management and movement. Computational storage, especially the way we marry the use of NVMe SSDs and compute power, adds analytical power and speed so that results can be accomplished right away and where the data is generated.
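As a rough conceptual sketch (not NGD's product interface), the following Python toy compares how many bytes cross the storage interconnect when a filter runs on the host versus on the drive itself; block counts, sizes and the predicate are all illustrative assumptions.

```python
# Illustrative sketch: contrast host-side filtering, which moves every
# block over the bus, with computational storage, where a predicate runs
# on the drive's processor and only matching results move.

BLOCK_SIZE = 4096

def host_side_filter(device_blocks, predicate):
    """Conventional path: transfer all blocks to the host, then filter."""
    transferred = 0
    matches = []
    for block in device_blocks:          # every block crosses the interconnect
        transferred += BLOCK_SIZE
        if predicate(block):
            matches.append(block)
    return matches, transferred

def on_device_filter(device_blocks, predicate):
    """Computational-storage path: filter in place, return only results."""
    matches = [b for b in device_blocks if predicate(b)]  # runs on the SSD
    transferred = len(matches) * BLOCK_SIZE               # only results move
    return matches, transferred

if __name__ == "__main__":
    blocks = [bytes([i % 256]) * BLOCK_SIZE for i in range(10_000)]
    hot = lambda b: b[0] == 0            # toy predicate: ~1 in 256 blocks match
    _, host_bytes = host_side_filter(blocks, hot)
    _, dev_bytes = on_device_filter(blocks, hot)
    print(f"host-side moved {host_bytes:,} B; on-device moved {dev_bytes:,} B")
```

The orders-of-magnitude gap in bytes moved is the "analyze where the data is generated" argument in miniature.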
2. Better 5G connectivity
In 2020, more edge-related devices will be needed to process massive data workloads. The advent of 5G is no different. As more cell towers are built to support 5G, there also needs to be more complex infrastructure at each base station that can manage the data "in and out of the box" so that user data is optimally utilized. Computational storage, with its small form factors and added compute power, can pack an analytical uppercut in the limited-size, power-constrained edge datacenters that live at each of these new cell tower platforms. Providing additional compute to the confined resources that exist there is paramount to the successful growth of this space. Instead of requiring even more hardware and power in the server, the advent of high-capacity computational storage provides the needed offload to the system to allow for great deployments.
3. Simplify the traffic flow of CDN
Streaming services have continued to dominate headlines this year, with the recent launches of Apple TV+ and Disney Plus combined with Netflix, Hulu and Amazon Prime's increasing investments. This poses a major hurdle for CDNs – and is where computational storage can be a major asset. While a typical CDN traffic flow involves lots of data movement and processing spread out over a variety of edge infrastructure, computational storage can simplify this flow.

Panasas (Curtis Anderson, Senior Software Architect)
1. The importance of Total Cost of Ownership (TCO): HPC storage solutions must deliver value far beyond the initial purchase price
As the requirements for HPC storage systems are becoming more diverse with the addition of new workloads such as Artificial Intelligence (AI), there is an increasing need to start looking at the overall impact on the organisation of the ongoing cost of operations, user productivity and the time to quality outcomes. In addition to evaluating the price/performance ratio, buyers will need to start paying close attention to a range of purchasing considerations that go beyond the initial investment. Those include the cost of unplanned downtime in terms of application user productivity, the cost of complexity and the headcount required to manage it, and the need for responsive support for mission-critical infrastructure such as your storage.
2. As Enterprise’s AI projects graduate from “exploratory” to “production” they will leave the public clouds for less costly on-premises solutions, funding a boom in HPC infrastructure build-out, but the requirements for that infrastructure will have changed based upon their cloud experience
Public clouds are great for learning and experimentation, but not for high-utilisation production operations. Public clouds will, however, have a large influence on the next generation of on-premises infrastructure that is built. The need for the lowest time-to-solution, quickly taking action based upon the insights that AI can give you, drives AI to push the underlying hardware (e.g. GPUs and storage) as hard as it can go. But the simple truth is that the cost of a dedicated resource in a public cloud is higher than the cost of owning that resource. Another simple truth is that the value of AI is the computer deriving information that you can act upon from mountains of data. Add in the fact that AI has an insatiable need for growth of training data, and that public clouds have never-ending charges for data storage, and the costs climb. Put those simple facts together and it's clear that production AI will be less costly if it is performed on-premises. The industry has become used to the extreme flexibility and simplicity of management that public clouds provide, and will want to retain those characteristics in on-premises solutions at the lower cost they provide.

Pavilion Data Systems (VR Satish, CTO)
1. Convergence of primary and secondary storage ushers in NVMe as a cost-effective and performant medium for secondary storage.
2. NVMe-oF becomes the preferred protocol for all new infrastructure deployments.
3. High-performance edge storage becomes a requirement for hybrid cloud deployment.

Portworx (Murli Thirumale, CEO)
1. Kubernetes will start to be used to manage IT infrastructure, not just containerized applications
VMware's Project Pacific is one of the first examples of this, but I expect to see more such offerings in 2020. The move to containerize applications and orchestrate them with Kubernetes is well underway, driving rapid application deployment and portability in enterprises led by DevOps. Project Pacific is a bold move to extend vSphere with Kubernetes and get traditional IT admins into the mix by having vSphere and Kubernetes together manage IT infrastructure. VMware is both making Kubernetes a first-class citizen alongside vSphere and saying that Kubernetes, via vSphere, can now manage not just apps but also VMs, storage, networking and compute.
2. IPOs for companies with ‘cult’ CEOs and founders will be discounted by the market as bearing too much risk
The days of CEO worship are over. High profile problems at Theranos, Uber, and more recently WeWork have made investors skittish about relying on charismatic individuals. In 2020, a good product backed by a sustainable business model will be far more attractive to investors than a messianic CEO.
3. We heard a lot about 5G this year but network coverage from the major telecoms providers is spotty
I expect to see this change in 2020. With 5G networks spanning the country, device makers and application developers will start to take advantage of the new high-speed technology. This will mean not just richer smartphone apps but also a range of IoT uses that will reshape computing at the edge.

Pure Storage (Matt Kixmoeller, VP strategy)
1. Customers will demand a subscription to innovation with as-a-service business models
As-a-service models have existed since the beginning of public cloud. For most consumers of storage, hybrid cloud is the reality and the future, and they are looking to get the best of both worlds: to drive simplicity and automation from their on-premises infrastructure so they can manage it like they manage the cloud, and to get the same enterprise capabilities and control in the cloud as they have on-premises, both in a flexible, subscription-based as-a-service model. In 2020, demand for as-a-service storage will increase and organizations will speak with their wallets through more investment in Opex models, but successful models need to balance both the operations and purchasing aspects. From an operations perspective, key attributes include standardization (vs. snowflakes), on-demand access, API-driven management, and limitless scale. On the consumption side, key traits include a pay-for-what-you-use model, bursting capabilities (flex up/down as needed), and a non-disruptive, evergreen experience in which services can grow and evolve over time without disruption. And all of this delivered as a 100% pay-per-month Opex service.
2. Modern analytics reaches rocketship status
Fueling the growth of modern analytics are more affordable infrastructure options, such as more powerful CPUs, consumption-based infrastructure available both on-prem and in the public cloud, and lower-priced flash memory. There is also significant growth in stream analytics platforms, both open source (Apache Flink, Apache Beam and Spark Streaming) and commercial (Splunk DSP), replacing more and more batch-based processing platforms. Modern analytics can now reach larger scale with cloud-native analytics architectures composed of stateless servers, containers and high-performance S3 object stores. Additionally, the unbridled growth of data sources, including smart devices (smart home, wearables, connected cars, industrial internet, etc.), will drive the adoption of modern analytics in order to drive more insights.
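As a concrete, hedged illustration of the stream-versus-batch shift, here is a minimal word-count sketch using Spark Structured Streaming, one of the open-source platforms named above; the socket source on localhost:9999 is an assumption for demonstration (e.g. fed by `nc -lk 9999`).

```python
# Minimal sketch of stream processing with Spark Structured Streaming:
# the same DataFrame API as batch, applied continuously to unbounded input.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Unbounded input: each line arriving on the socket becomes a row.
lines = (spark.readStream.format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# Running aggregation, updated as events arrive instead of in nightly batches.
counts = (lines.select(explode(split(lines.value, " ")).alias("word"))
          .groupBy("word").count())

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```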
3. AI operations will go from advisory roles to automated action as customers want a hands-free approach
Organizations will be more open to AI making decisions for them. Customers want to set policies and let the vendors implement them, which is partially driven by the declarative nature of Kubernetes and container management. The simplicity of containers will enable organizations to define a state, and the container will be the catalyst. The technology should then drive and deliver insights across the whole environment. AI will be applied to make the predictive models critical for applications like anomaly detection and automatic root-cause analysis efficient enough to scale and be applicable in more contexts.
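As a conceptual sketch, not any vendor's implementation, the toy reconcile loop below illustrates the declarative, hands-off model described above: the customer states a desired state and automation converges the system toward it, Kubernetes-controller style. All names and values are illustrative.

```python
# Conceptual sketch of a declarative reconcile loop: compare desired and
# observed state, act on the difference, repeat until they converge.
import time

desired = {"replicas": 3}          # policy set by the customer
observed = {"replicas": 1}         # current state of the system

def reconcile(desired, observed):
    """One pass of a controller loop: diff the states and take one action."""
    diff = desired["replicas"] - observed["replicas"]
    if diff > 0:
        observed["replicas"] += 1   # stand-in for "create one instance"
        print(f"scaled up to {observed['replicas']}")
    elif diff < 0:
        observed["replicas"] -= 1   # stand-in for "remove one instance"
        print(f"scaled down to {observed['replicas']}")

while observed != desired:          # the "hands-free" part: no human steps
    reconcile(desired, observed)
    time.sleep(0.1)
```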

Quantum (Eric Bassier, senior director product marketing)
1. Video and images represent the biggest data generator for most enterprises
Between surveillance footage, video for marketing and training purposes across all industries, and the use of high-res image and video content generated by machines in use cases as diverse as movie and TV production, autonomous vehicle design, manufacturing and healthcare, we believe video and high-res image content will represent the biggest 'class' of data for most enterprises.
2. The tape storage market will grow, reversing a decade-long declining trend
Tape has emerged as a key technology for massive scale cold storage infrastructure – both in the cloud and on-premise. And we believe the architectures used in the cloud will eventually make their way back into the enterprise. So we believe the tape market will grow, and continue to grow over the next 5-10 years, based on a new use case for tape as cold storage for (primarily) video and high-res image data.
3. Hybrid- and multi-cloud architectures become the norm
Many companies are in some type of hybrid-cloud state, and customers are expecting that vendors provide an even more seamless experience between on-premise hardware infrastructures, and cloud infrastructures. Customers will also expect that vendors can offer a multi-cloud experience, so customers are not locked into a single cloud provider.

Qumulo (Molly Presley, global product marketing director)
1. NVMe file storage will be adopted broadly for performance-starved, low-latency applications in 2020
NVMe is a communications protocol developed specifically for all-flash storage. It enables faster performance and greater density compared to legacy protocols. It’s geared for enterprise workloads that require top performance, such as real-time data analytics, online trading platforms and other latency-sensitive workloads.
2. Data-driven businesses will have to shift some workloads to the cloud for data processing, ML- and AI-driven workloads
Every major enterprise in the world is going to become a hybrid enterprise. In industries across all major vertical markets, including M&E, transportation, bio and pharma, customers are using large volumes of unstructured data in order to accomplish their mission. Under tremendous downward pressure, IT budgets don't grow at the same rate as the rest of the business. The public cloud offers a way to solve that problem with its elastic compute and elastic resources.
3. Scale-out file storage will become the preferred technology for on-prem unstructured data active archives
Modern file storage solutions deliver performance and economics in a single-tier solution managed by intelligent caching. Object storage is not the best fit for on-premises customers seeking simplicity to serve performance applications and retain data cost-effectively; object storage was developed as a precursor to webscale technology and as the storage medium for web technologies. It was meant to be great for datasets that approach the EB level and are geographically distributed. In 2020, we believe the on-premises object storage market will evaporate and become wholly file based.

Reduxio (Jacob Cherian, CMO)
The true power of Kubernetes is to redefine the idea of the cloud for customers by breaking down their infrastructure silos: public cloud resources and the customer's own dedicated resources. Customers will increasingly leverage Kubernetes and containers to build infrastructure-agnostic IT for their applications – one environment that can be instantiated anywhere rapidly and run all their applications. This is cloud infrastructure as code. A key requirement for this will be storage that is native to and integrated into Kubernetes and provides mobility for applications and data across all the infrastructure pools, stitching the pools together into a single cloud.
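To make that requirement concrete, here is a hedged sketch using the official Kubernetes Python client: the application claims storage declaratively from the cluster rather than from any one infrastructure silo. The storage class name is a hypothetical placeholder for whatever Kubernetes-native storage backs the cluster.

```python
# Minimal sketch with the `kubernetes` Python client: the app declares a
# storage claim against the cluster, not against a specific array or
# cloud, so the same claim can be instantiated on any infrastructure pool.
from kubernetes import client, config

config.load_kube_config()                    # or load_incluster_config()
v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="k8s-native-storage",   # assumed class name
        resources=client.V1ResourceRequirements(
            requests={"storage": "10Gi"}
        ),
    ),
)

# The claim is portable: reapply it on any cluster and the storage layer
# underneath (on-prem or public cloud) satisfies it.
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```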

Scale Computing (Alan Conboy, office of the CTO)
1. Customers don’t want solutions that are complex, difficult to use, require lots of training and cost more because of the additional licenses.
2. Gartner Magic Quadrant's niche players will continue driving innovation with feature-rich technologies that simplify IT operations, for example by supporting the growing edge computing industry.
3. Technologies from niche players will make VMware’s quasi-vendor agnostic approach look increasingly old-fashioned

Looking ahead to 2020, we will see niche players stay niche. Why?
Leaders will always have a place offering well-entrenched legacy infrastructure, but staying niche means being a trendsetter. It means the offering is different, great at what it does, and makes tech better and more accessible to everyone. This is the reason why I believe customers will continue investing in niche players that offer the technology they need in the way that works best for them.

Scality (Paul Speciale, CPO)
1. Object storage at the edge will be on flash
Object storage will move into the edge for applications that capture large data streams from a variety of mobile, IoT and other connected devices. This will include event streams and logs, sensor and device data, vehicle drive data, image and video media data and more, with high data rates and high concurrency from thousands or more simultaneous data streams. These applications will be developed for cloud native deployment and will therefore naturally embrace RESTful object style storage protocols, making object storage on flash media an optimal choice on the edge to support this emerging class of data-centric applications.
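For illustration, here is a minimal sketch of that RESTful object pattern using boto3 against an S3-compatible endpoint; the endpoint, bucket, credentials and key layout are hypothetical placeholders for an edge deployment.

```python
# Hedged sketch: writing a device event to an S3-compatible object store,
# the RESTful access pattern that cloud-native edge applications embrace.
import json
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://edge-store.local:9000",  # assumed edge endpoint
    aws_access_key_id="EDGE_KEY",                 # placeholder credentials
    aws_secret_access_key="EDGE_SECRET",
)

event = {"device": "cam-042", "ts": 1577836800, "status": "motion-detected"}

# Each event becomes an immutable object; independent keys let thousands
# of concurrent streams write without coordinating with each other.
s3.put_object(
    Bucket="edge-events",
    Key=f"cam-042/{event['ts']}.json",
    Body=json.dumps(event).encode(),
)
```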
2. New ways of identifying patients, customers, and depositors
will be developed in 2020, as the already accelerating pace of hacking and data breaches continues to pick up speed. There is enormous value in stored data. Until they make these changes, hospitals and medical providers, for example, will remain strong targets due to the value of the data they hold: not only patient health information, but the patient identification that goes along with it (government ID, birth date, address, etc.).
3. Storage will become massively decentralised
as enterprises leverage a combination of on-premises and public cloud IT resources, creating a need for a unified namespace and control plane to simplify data visibility and access.

Moreover, corporations will use a variety of public clouds, each one selected to help solve specific business problems, thereby creating a multi-cloud data management problem. In addition, the emergence of edge computing will further drive decentralisation as corporations choose to deploy IT resources “near” the edge devices they manage. These trends all help to create a new and extreme “cloud data silos” scenario, that can only be addressed by solutions that provide global data visibility across these distributed clouds and data centers.

Spectra Logic (Matt Starr, CTO)
1. Digital data ownership is akin to the oil/steel of the early 20th Century
Think about Rockefeller, Carnegie, Ford and the like, industrial giants of the 20th century. In the 21st century it will be the data owner. Like Disney, which recently purchased 21st Century Fox to become one of the largest media and entertainment content owners in the world. Healthcare, oil and gas exploration, geospatial: it is not just media and entertainment where data is the most valuable asset. Nearly every industry is seeing an acquisition strategy that takes digital asset ownership into account.
2. Cloud Egress cost will start to drive end-user strategies
Cloud storage was all the rage in 2019, but some of the larger users of these storage-as-a-service offerings are discovering that not having a local copy is the driving factor in their storage cost, in turn forcing a closer look at hybrid cloud: keeping a local copy while still utilizing the cloud as the second copy. This allows local restore and usage of data without egress costs, plus a cloud-based copy for DR and sharing.
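A hedged back-of-envelope sketch of that trade-off follows; the capacities and prices are illustrative assumptions, not any provider's actual rates.

```python
# Illustrative arithmetic: restoring from the cloud pays egress each time,
# while a local copy pays a flat storage cost and restores for free.
capacity_tb = 500              # data set kept in the cloud
restores_per_year = 4          # full restores pulled back on-premises
egress_per_gb = 0.09           # assumed $/GB egress fee
local_copy_per_gb_year = 0.02  # assumed $/GB/year for a local disk/tape copy

egress_cost = capacity_tb * 1000 * egress_per_gb * restores_per_year
local_cost = capacity_tb * 1000 * local_copy_per_gb_year

print(f"cloud-only restores: ${egress_cost:,.0f}/year")
print(f"local copy for restores: ${local_cost:,.0f}/year")
# With numbers in this range, a local primary copy plus a cloud second
# copy (restore locally, keep cloud for DR/sharing) wins on cost.
```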
3. ML and AI have moved forward in 2019 and will continue in the next decade
The data sets used to train these new systems will be a leading factor in how far AI can go. It is easy for AI to find copyrighted music on YouTube, and soon AI will be creating more of the musical background we hear. Outside of art, another example is AI used to develop autonomous vehicles, which will deliver safer vehicles as real-world data sets are brought back and worked through AI for better vehicle safety. Look for AI to be in nearly every part of your life soon.

StorCentric (Mihir Shah, CEO)
1. Adoption of QLC Flash
Organizations are demanding faster and more reliable storage than traditional HDDs for read-intensive applications. As a result, QLC NAND memory will be widely adopted in 2020. Being more cost-effective, QLC is ideal for read-intensive applications, which represent some of the fastest-growing enterprise applications such as AI/ML, big data, media streaming and analytics.
2. Blockchain Technology
One of the most disruptive technologies of 2020 will be private blockchain technology capable of securely archiving digital assets for long-term data protection, retention and compliance adherence. Correspondingly, those with the technology knowledge and an understanding of how to apply blockchain to business processes will be a hot commodity.
3. Hybrid Storage
Businesses have been moving to the cloud for primary and archive/DR storage for a long time. In 2020, on-premises storage, as part of an overall hybrid storage strategy, whether for active or standby use, will see a resurgence. As customers see cloud storage fees that are dramatically higher than anticipated, IT organizations will need to achieve the highest performance and scalability, as well as the safest retention, at the most cost-effective price.

StorMagic (Bruce Kornfeld, CMO and GM Americas)
1. More data is being created at the edge than ever before
In 2019, more data was created at the edge than ever before, and that will continue into 2020 and beyond due to a mass movement away from traditional datacenters and public clouds. The overall storage landscape is transforming, and more customers now prefer to process and manage data at the edge of the network because it is more efficient and cost-effective.
2. Key management adoption in edge HCI environments will increase
Security is a growing concern at edge computing sites, where there is typically no IT staff member and no physical security present. As servers continue to get smaller, the risk of theft dramatically increases. CIOs – particularly in the healthcare, finance and retail markets – will begin integrating data encryption with key management to fully protect data managed at their remote sites and branch offices located at the edge.
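As an illustration of that encryption-plus-key-management pattern, here is a minimal envelope-encryption sketch using the open-source `cryptography` package; the split between a remote key manager and the edge device is an assumption for the example, not a specific vendor's design.

```python
# Minimal envelope-encryption sketch: data is encrypted locally with a
# data-encryption key (DEK), and the DEK is stored only in wrapped form,
# encrypted under a key-encryption key (KEK) held by a remote key manager.
# A stolen edge server then yields only ciphertext.
from cryptography.fernet import Fernet

# KEK: lives in the central key manager, never stored on the edge device
# (hypothetical setup for illustration).
kek = Fernet.generate_key()

# DEK: generated locally, used for the actual data.
dek = Fernet.generate_key()

data = b"patient record / card transaction captured at the branch"
ciphertext = Fernet(dek).encrypt(data)

# Persist only the ciphertext and the wrapped DEK alongside it.
wrapped_dek = Fernet(kek).encrypt(dek)

# Recovery requires the remote KEK to unwrap the DEK first.
recovered_dek = Fernet(kek).decrypt(wrapped_dek)
assert Fernet(recovered_dek).decrypt(ciphertext) == data
```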
3. Virtualization enhancements will be introduced to support physically smaller environments
As IT footprints continue to shrink, virtualization isn't just for your grandfather's datacenter anymore. Smaller servers will continue to hit the market at a 2U, or smaller, footprint. Software companies will respond with lightweight virtual storage solutions that offer a low-cost entry point to suit these smaller hardware products and justify the ROI of IT projects as a whole.

StorONE (Gal Naor, CEO)
1. Price per terabyte
Price per terabyte will drop dramatically thanks to better software solutions that require less hardware. What we saw in 2019, and expect to continue through 2020, is the inefficient utilization that plagues legacy storage architectures. This will only get worse as customers still aren't getting the rated speed and capacity that they pay for and that the drives are capable of. End users are increasingly frustrated by this, and frustrated at having to overbuy. In 2020 companies will look for more efficiency and better resource utilization, for a lower footprint, and to achieve the performance and density they're paying for.
2. Built in complete data protection and backup will emerge as part of storage solutions
No longer will organizations have to maintain multiple systems to get complete enterprise data protection and performance. No need to purchase, integrate and manage standalone systems for data retention/protection, backup, and data integrity. In addition, data protection (RAID) levels will be set per volume rather than one RAID level for the entire storage solution. And no more compromising on data protection to get high performance. Built-in data reliability will be a mandatory requirement of any storage solution, and all of these services will be included without extra cost.
3. In 2020 all enterprise storage services will finally be available in the same solution
All the storage protocols (block, file and object) will be included and all drive types (NVMe, SSD, HDD, and Optane) will be supported in the same solution, with the ability to simultaneously run different protocols on the same drives. Storage managers will have full flexibility to manage all their storage requirements and to grow on demand according to their needs. A single environment can manage any use case: performance, capacity, protocol, drive type and data protection set per volume.

StrongBox Data Solutions (Floyd Christofferson, CEO)
1. Data management across storage types is the biggest emerging problem
• This is made worse by exponential growth of unstructured data
• Increasingly, data managers don't know for sure what they have, where it should be kept, or whether it can be deleted
• Solutions focused on intelligent cross-platform management of data are needed to address this
2. The proliferation of storage types, including multiple cloud and software-defined choices makes the problem worse
• This adds complexity and cost for IT administrators
• It forces them to rely on multiple tools and to manage the user experience across them
• It often leads to overprovisioning, and to difficulty containing costs or taking full advantage of the savings they seek from more economical storage choices
3. On-prem cloud (object), plus the increasing public cloud offerings cause protocol problems for traditional file-based workflows
• The ability to bridge multiple vendor solutions for on-prem and off-prem without adding complexity or user disruption becomes key
• The ability to understand the data itself, and to let data intelligence drive data placement, enables true storage optimization and cost savings
• This also reduces the load on IT staff, which has otherwise become a rapidly growing operational expense. Storage and data management OPEX are about 5x

Talon (Jaap van Duijvenbode, product director)
1. In 2020, we will see a continued but even stronger pivot to an enterprise hybrid cloud strategy, one that leverages a combination of on-premises storage and cloud storage.
Depending upon the nature of the workload, data characteristics (i.e., size) or compliance factors (GDPR, etc.) can drive a paradigm of regionally located cloud storage footprints, accessed by users/locations in proximity to that storage resource. Backup/DR use cases are the simplest and most forgiving in terms of ongoing operational complexity, and as such are where many enterprises get started in the use of multi-cloud storage resources.
2. Also, we predict that 2020 will see enterprises looking at a varied footprint, with different functions supported by different vendors.
From a business perspective most larger enterprises do not want to sacrifice negotiating leverage with regards to their Opex-based infrastructure. There are advantages in going all in with one vendor. In today’s pay-as-you-go world, an enterprise could achieve a volume discount by sending everything in the direction of one cloud provider. It takes work to regularly review if the organization is getting the best deal, and switching costs are high. By maintaining a footprint with different providers, customers stay current with pricing policies/promotions, and have easier flexibility in directing workloads to resources.
3. Compliance will only continue to be an increasingly important factor in the multi-cloud decision.
For larger or publicly-traded companies, the requirement to employ best practices of risk mitigation will drive a multi-cloud approach to spreading the risk. For global companies, the need to keep certain data assets in certain geographic domains will factor into the equation.

Virtana (Tim Van Ash, SVP products)
1. Cloud is not cheaper, and overprovisioning must be brought under control
In 2019 organisations came to the realisation that cloud is not necessarily cheaper. This drove the need for visibility and control of cloud expense from the start to prevent escalating costs, and cloud cost management has become a full-time role as organisations have tried to get a handle on costs. 2020 will see the continued maturation of the cloud story with a far more considered approach, focused on cost optimisation and cost control. Arguably, cost has always been a driver for cloud, but the highest priorities in 2020 will be determining the real cost of cloud and developing effective cost governance.

Instead of rushing workloads to the cloud, organisations will seek to understand how they can optimise them. It will start with right-sizing each workload, then deciding whether it is a candidate to move to the cloud or should remain on-premises. Enterprises will begin to see the value and savings that arise from intelligent workload management and data decision-making, leading to more effective outcomes in terms of cost and performance.

Organisations in 2020 will look to reduce or eliminate the overprovisioning of public cloud resources seen in 2019, which resulted from poor visibility into workloads and their resource consumption. For example, AWS Lambda requires the purchase of a set memory allocation regardless of whether it is needed or used, resulting in a tendency to vastly overprovision memory. Until now this has been difficult to address, with cost and performance managed by separate tools and teams. In 2020, hybrid applications and workloads will continue to consume digital infrastructure across both private and public cloud, and a single pane of glass that provides continuous visibility into costs while assuring performance will be mandatory for effective capacity management and optimisation across hybrid cloud.
Bearing in mind the considerable transformation taking place in IT environments, the critical issue for CIOs in 2020 will be understanding how to govern IT spend.
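To make the Lambda example concrete, here is a hedged sketch of how allocation-based pricing drives overprovisioning cost: you pay for the memory you allocate, not the memory you use. The per-GB-second rate and workload figures are assumptions for illustration.

```python
# Illustrative right-sizing arithmetic for a Lambda-style pricing model,
# where cost scales with the chosen memory allocation and duration.
PRICE_PER_GB_SECOND = 0.0000166667   # assumed rate, for illustration only

def monthly_cost(memory_mb, avg_duration_s, invocations):
    gb_seconds = (memory_mb / 1024) * avg_duration_s * invocations
    return gb_seconds * PRICE_PER_GB_SECOND

invocations = 50_000_000             # per month
duration = 0.3                       # seconds per invocation

overprovisioned = monthly_cost(3008, duration, invocations)  # "safe" guess
right_sized = monthly_cost(512, duration, invocations)       # measured need

print(f"3008MB: ${overprovisioned:,.0f}/mo   512MB: ${right_sized:,.0f}/mo")
# Visibility into actual consumption is what makes the lower setting safe.
```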
2. Hybrid and multi-cloud will demand consistent levels of visibility to mitigate risk
The start of 2019 was all about multi-cloud. Now we are hearing people talk about hybrid applications running in the hybrid cloud, because they cross multiple clouds in addition to the data centre. So it is pretty clear now that hybrid is a term used to describe an environment where components of applications, and the infrastructure supporting them, are spread across private and public cloud. There is debate over whether it includes one or more cloud providers, and one or more private data centres, but in either scenario anything that resides across multiple clouds still needs to be properly controlled. The lack of common metrics and granularity has contributed to performance and reliability problems across hybrid infrastructures. Everyone is looking for the single pane of glass, and this will be an ongoing topic for at least the next three to five years. The hybrid environment is definitely where everyone will pivot from 2020 onwards.
3. The impact of containers on shared infrastructure will start to be understood
There is no doubt that Kubernetes has become the dominant orchestration platform for the cloud, as highlighted by IBM's acquisition of Red Hat and VMware's acquisition of Pivotal, which positioned VMware to deliver an enterprise-grade, Kubernetes-based portfolio.

In observing the maturation of the market overall during 2019, we saw that infrastructure teams had learned from the virtualisation experience. With numerous VMs running on top of a hypervisor, it was very difficult to see which VMs were responsible for the workloads hitting the back-end infrastructure, a scenario known as the 'blender effect'.

Going into 2020, this challenge lies at the heart of infrastructure teams’ concerns around container adoption. Although the expectation is that a large percentage of on-premise containers will be long-lived, the ability to dynamically scale up and scale down could dramatically impact infrastructure.

Organisations looking to deploy containers in their data centres must consider and manage the impact and performance of their infrastructure, otherwise the move to containerised applications and deployment will likely fail.

In 2020 IT teams will adopt smart (AI-powered) hybrid infrastructure management platforms that will not only help them understand what applications are generating the workloads, but also how workloads impact every physical element of the infrastructure.

Weebit (Coby Hanoch, CEO)
1. We’ll start hearing of more ReRAM commercial deals
2. China will continue its very strong push into memory and storage with big investments and potentially also acquisitions
3. Neuromorphic computing research will move to center stage

WekaIO (Andy Watson, CTO)
1. Smart NICs (Network Interface Cards)
Smart NICs from companies like Mellanox, with comprehensively extensive offload capabilities, will bring about revolutionary changes in the infrastructure landscape for the most data-hungry applications. Storage solutions able to leverage this important new networking technology will push the envelope in terms of both performance (i.e., greater throughput and lower-latency data access) and improved scalability. Initially the higher cost will likely limit adoption to only the most demanding environments, but over time (2021 and beyond) we can expect to see Smart NICs deployed more widely.
2. SCM (Storage Class Memory)
Will have a big impact now that Intel has at long last delivered to market Optane DC Persistent Memory. Sub-microsecond latency for persistent storage at meaningful scale (up to 768TB in a clustered file system is already being promised by at least one software provider) will revolutionize the industry. If nothing else, this new tier-0 will enable near-instantaneous checkpointing (and recovery from those checkpoints). SCM can also be used to extend memory complements, and this is already impacting how engineers are thinking about in-memory databases and other memory-intensive applications. Anyone skeptical about this coming tectonic shift need look no farther than MemVerge or Levyx (or other emerging software companies in this hot new space) who are already offering early access to their software which allows applications to benefit from SCM without any modifications. And as tier-0 is redefined, the adjacent tier-1 will also be impacted by intensified performance expectations; at WekaIO our flash-native file system is ready, willing and able to assist our customers’ exploration of SCM.
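As a rough illustration of that load/store access model, the sketch below memory-maps a file as a stand-in for SCM exposed through a DAX filesystem: persistence via stores into mapped memory rather than block I/O. The path and sizes are hypothetical, and production code would use PMDK/libpmem for proper flushing and persistence guarantees.

```python
# Conceptual sketch of tier-0 access: map a region of persistent capacity
# and "checkpoint" by storing into it directly, with no serialization or
# block I/O path in the way.
import mmap
import os

PATH = "/mnt/pmem0/checkpoint.bin"   # hypothetical DAX-mounted device
SIZE = 64 * 1024 * 1024              # 64MB region

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as region:
    # The checkpoint is just a store into the mapped region.
    region[0:16] = b"model-state-v042"
    region.flush()                   # ask for the writes to be persisted

os.close(fd)
```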
3. My third and final prediction here is a familiar one from last year: flash storage pricing will continue to fall faster than HDD storage pricing.
This time I’ll go further and predict that by 4Q20 there will be large-capacity SSDs (probably QLC-based) which are irrefutably less expensive per-gigabyte than meaningful HDD alternatives. Considering that SSDs already incur lower OpEx (consuming less power, requiring less A/C, failing less often, etc), 2020 may be the year we later collectively remember as The Crossover.
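A hedged back-of-envelope version of that crossover argument follows; every price and power figure below is an illustrative assumption, not market data, chosen to show how lower OpEx can flip the comparison even before strict $/GB parity.

```python
# Illustrative 5-year TCO comparison: acquisition cost plus power and
# cooling. SSDs draw less power per TB, which narrows or closes the gap.
capacity_tb = 1000
years = 5

hdd_price_per_tb, ssd_price_per_tb = 25.0, 30.0   # assumed acquisition $/TB
hdd_watts_per_tb, ssd_watts_per_tb = 0.8, 0.2     # assumed power draw
power_cost_kwh = 0.12
hours = 24 * 365 * years

def tco(price_per_tb, watts_per_tb):
    capex = capacity_tb * price_per_tb
    kwh = capacity_tb * watts_per_tb * hours / 1000
    opex = kwh * power_cost_kwh * 2        # x2: cooling roughly doubles it
    return capex + opex

print(f"HDD 5yr TCO: ${tco(hdd_price_per_tb, hdd_watts_per_tb):,.0f}")
print(f"SSD 5yr TCO: ${tco(ssd_price_per_tb, ssd_watts_per_tb):,.0f}")
```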

Western Digital (Phil Bullinger, SVP and GM)
1. In 2020, new data center architectures will emerge to manage the growing volume and variety of data
In the zettabyte-scale age, data infrastructure needs to be re-architected to address the growing scale and complexity of workloads, applications and AI/IoT datasets. These constructs will involve multiple tiers of workload-optimized storage as well as new approaches to system software. Zoned Storage, an open-source initiative, will help enable customers to take advantage of zone block management across both SMR HDDs and ZNS SSDs for sequentially-written, read-centric workloads. In 2020, we’ll see a substantial amount of application and storage software investment in Zoned Storage to help drive more efficient storage tiers as data centers are redefined in the zettabyte-scale era.
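As a conceptual sketch (not the actual Zoned Storage interfaces), the toy zone below shows the rule that SMR HDD and ZNS SSD zones impose on host software: writes land only at the zone's write pointer, and space is reclaimed by resetting a whole zone rather than overwriting blocks in place.

```python
# Toy model of a sequential-write zone, the constraint zone block
# management exposes for SMR HDDs and ZNS SSDs.
class Zone:
    def __init__(self, size):
        self.size = size
        self.write_pointer = 0          # next writable offset in the zone
        self.data = bytearray(size)

    def append(self, payload: bytes):
        """Host software may only write at the write pointer."""
        if self.write_pointer + len(payload) > self.size:
            raise IOError("zone full: open a new zone")
        start = self.write_pointer
        self.data[start:start + len(payload)] = payload
        self.write_pointer += len(payload)
        return start                    # where the record landed

    def reset(self):
        """Reclaim the zone as a unit (no random overwrites)."""
        self.write_pointer = 0

zone = Zone(256 * 1024 * 1024)          # e.g. a 256MB sequential zone
offset = zone.append(b"log-segment-0001")
# Writing anywhere other than the write pointer is impossible by
# construction, which is why sequentially-written, read-centric
# workloads fit this model so well.
```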
2. In 2020, tiering of data leveraging device, media and fabric innovation, will expand not contract
There will continue to be strong exabyte growth in read-centric applications in the data center, from AI, ML, and big data analytics to a variety of business intelligence and accessible archive workloads. These at-scale use cases are driving a diverse set of performance, capacity and cost-efficiency demands on storage tiers, as enterprises deliver increasingly differentiated services on their data infrastructure. To meet these demands, data center architecture will continue advancing toward a model where storage solutions will be consistently provisioned and accessed over fabrics, with the underlying storage platforms and devices delivering to a variety of SLAs, aligned with specific application needs. And while we certainly expect to expand the deployment of TLC and QLC flash in these at-scale, high-growth workloads for higher performance use cases, the relentless demand for exabytes of cost-effective, scalable storage will continue to drive strong growth in capacity enterprise HDD.
3. In 2020, fabrics and composable will form a symbiotic relationship
Ethernet fabrics are becoming the “Universal Backplane” of the data center, unifying how storage is shared, composed and managed at scale to meet the demands of increasingly varied applications and workloads. In 2020, we’ll see increasing adoption of composable, disaggregated storage solutions that efficiently scale over Ethernet fabrics and deliver the full performance potential of NVMe devices to diverse data center applications. Composable storage will increase the agility and flexibility in how enterprises provision and optimize their data infrastructure to meet dynamic application requirements.

Zadara (Jeff Francis, senior enterprise solutions architect)
1. XaaS (everything-as-a-service) will accelerate, taking market share from ownership models
Many 3-5 year IT refresh cycles are coming due at a time when multiple mature and stable XaaS options exist, plus the rate of technology change is accelerating. It will make less sense – both financially and technologically – to commit to a rigid platform for a span of multiple years.
2. BaaS and DRaaS (Backup and DR-as-a-service)
Will grow in adoption, as more customers start to implement their first offsite and truly on-demand backup and DR capabilities. BaaS and DRaaS bring reliability, convenience and affordability in an on-demand fashion. The combination of backup suite subscriptions and storage-as-a-service makes it possible for enterprises to meet their BC and compliance needs in a rapid, reliable and hassle-free fashion. Additionally, the economics for DR are improved, as costs for compute and additional storage only apply for days of “test” and “actual DR” vs. buying and managing that infrastructure all year.
3. Datastores will increase in size
partially due to the increasing use of AI/ML models with large pools of unstructured data, and partially due to the rapidly declining cost of storage. Managed storage offerings (storage-as-a-service), both in the public cloud and on-premises (private clouds), make it feasible and affordable to implement extremely large datastores (from hundreds of petabytes to exabytes) and to extract insights and monetary value from the larger data sets.

Zerto (Gijsbert Janssen van Doorn, technology evangelist)
1. Growth in adoption of hybrid and multi-cloud solutions
2. Customers increasingly utilising management systems for DR and backup as a service
3. Complete visibility over all workloads, data and costs becomes vital as the need for flexibility rises

One of the things I believe we'll see in 2020 is the true adoption of things like hybrid and multi-cloud solutions – but the difference in this upcoming year will be that customers and organisations will really begin to look for more management layers on top of their solutions. A lot of companies already have things like backup-in-the-cloud and DRaaS somewhere else, so what they're now looking for is a uniform management layer on top of that to give visibility into cost, as well as knowledge of where all data is located. It's important to know where data lives, whether workloads are protected, and whether workloads need to move between different clouds if and when requirements change.
