2015 IT Predictions by NetApp CTO
Jay Kidd
This is a Press Release edited by StorageNewsletter.com on January 6, 2015 at 3:09 pm
Jay Kidd, SVP and CTO, NetApp, Inc., revealed his IT predictions for 2015.
In my 35 years in IT, I have never seen so much simultaneous change in technology. Every part of the IT stack is in transition – end-user devices, networks, application design, virtual server software, physical server design, storage systems, and even storage media. Some of these transitions are well underway and will accelerate in 2015 while others are just starting to emerge. Either way, buckle up! IT is going to be a wild ride in 2015.
Two Mythical Beasts: Internet of Things and Big Data Analytics Will Produce Corporeal Children
The rise of integrated telemetry in industrial equipment, health monitoring devices, and mobile payment systems, along with a host of new sensors measuring the world, will provide the data that fuels the next wave of business-relevant analytics. Companies that found their existing datasets insufficient to yield real insight can now correlate them with real-world datasets to optimize business processes and change their customers’ experience. Acquisition and management of data from connected things, coupled with real-time and background analytics tools, will change how companies touch the world.
Future of All-Flash Arrays is Not All Flash
Flash is transformative to the future of enterprise storage. But the idea of an all-flash datacenter is utter nonsense, and at least 80% of data will continue to reside on disks. Cost matters, and the least expensive SSDs will likely remain 10 times more expensive than the least expensive SATA disks through the end of the decade. Compression and deduplication apply equally to disk and flash. Every storage architecture will incorporate flash to serve the ‘hot’ data. However, those that include only flash, with no integration with hybrid flash/disk arrays, will be the hot rod in the garage of IT: fun to tinker with, but not the reliable storage workhorse IT needs.
Multi-Vendor Hybrid Cloud is the Only Hybrid Cloud that Will Matter
Every customer is using cloud in some form. Just as most customers were reluctant to bet on a single vendor for their on-premises IT, they will choose to work with multiple cloud providers. Avoidance of lock-in, leverage in negotiations, or simply a desire for choice will drive them to seek a hybrid cloud that does not lock them into any single provider. SaaS vendors who offer no way to extract data will suffer. PaaS layers that run in only a single cloud will see less usage. Software technologies that can be deployed on premises and in a range of clouds will find favor with customers thinking strategically about their model for IT.
Software-Defined Storage Will Build a Bridge Between Public and Private Clouds
Software-defined storage (SDS), which can be deployed on different hardware and supports rich automation, will extend its reach into cloud deployments and build a data fabric that spans on-premises and public clouds. SDS will provide a means for applications to access data uniformly across clouds and will simplify the data management aspects of moving existing applications to the cloud. SDS for object storage will bridge on-premises and cloud object repositories. The storage efficiencies in some software-defined storage offerings, such as Cloud ONTAP, also reduce the cost of moving data to and from the public cloud and of storing active data in the public cloud for long periods of time.
Docker Replaces Hypervisors as Container of Choice for Scale-Out Applications
As new applications for SaaS or large-scale enterprise use cases are written using the scale-out microservices model, Docker application containers have proven to be more resource-efficient than VMs running a complete OS. All major VM orchestration systems now support Docker, and we will see a robust ecosystem of data management and other surrounding services emerge in 2015.
Hyper-Converged Infrastructure is the New Compute Server
Hyper-converged infrastructure (HCI) products are becoming the new compute server with direct-attached storage (DAS). Traditional data center compute consists of blades or boxes in racks that have dedicated CPUs, memory, I/O, and network connections, and run dozens of VMs. HCI such as VMware’s EVO allows local DAS to be shared across a few servers, making the unit of compute more resilient, while broadly shared data is accessed over the LAN or SAN. Starting in 2015, the emergence of solid-state storage, broader adoption of remote direct memory access (RDMA) network protocols, and new interconnects will drive a compute model in which cores, memory, I/O, and storage are integrated in a low-latency fabric that makes them behave as a single rack-scale system.