
What’s Hot and What’s Not in 2016?

By Load DynamiX, Infinidat, Spectra Logic, StorPool, Virtual Instruments

We’re in the countdown to Christmas. But it’s not just the presents under the tree that we’re excited about: the storage industry has some gifts of its own this year. Some vendors have told their PR agency, A3 Communications Limited, what they’re looking forward to in 2016, and how 2015 has shaped up for some of the industry’s hottest technologies.

Flash
One of 2015’s most talked-about technologies was flash – barely a day went by when it wasn’t part of one of the top storage stories. Be it new developments, impressive deployments or predictions that it will take over European data centres by 2020, flash has really been at the centre of things.

Load DynamiX, Inc.‘s VP of marketing Len Rosenthal believes flash adoption will accelerate over the next 12 months. “Flash storage moved from the evaluation phase in 2014 to significant production deployment in 2015 and should see significant growth in 2016,” he says. “It’s set to dominate all tier one storage requirements.”

Rosenthal predicts the mass migration to solid state flash-based storage will even expand from the Global 1000 companies into the Global 5000 as costs continue to decline. He argues that TLC flash will be a major new technology due to its low cost/gigabyte: “The reliability of these devices will improve dramatically and have a major impact in lowering the cost of deploying solid state storage systems.”

However, in Rosenthal’s opinion, cost remains a barrier, although he admits that the TCO benefits related to power, cooling and floor space are very compelling. “Every data centre would love to deploy flash storage instead of HDDs, but it simply is not yet cost justified for every workload,” he explains, adding that the challenge for most users is characterising their workloads to identify which require flash and which should remain on HDDs.

StorPool Ltd‘s chief product officer Boyan Krosnov agrees that flash is seen as a must-have technology. This, above all, is down to the fact that flash and SSDs, although already understood to be some of the industry’s highest-performing solutions, are still being improved. “We are getting a new generation of solid state media every year, and this trickles up through the storage systems to provide even better performance and even better economics with every generation of product,” he says. “HDD-based systems don’t have nearly the same improvement rate.”

In fact most of our experts agree that flash is going to continue to be a major player on the storage scene. Infinidat Ltd.‘s CMO Randy Arseneau thinks there’ll be some interesting developments to look out for. “There’s a lot of competition, there’s a lot of noise, there’s a lot of fragmentation among the flash players in the industry,” he explains. “I think that’s not necessarily sustainable over the long term, so I think you’ll see some consolidation.”
 
And for Arseneau that means new, emerging players will pop up, and some established players will try to realign their business models and attack the market in slightly different ways. “Flash continues to be very pervasive, and new flash technology will continue to make the economics of flash more desirable,” he continues. “But in the foreseeable future they won’t overtake the economic advantages and the cost elasticity of traditional spinning drives, which is why I think hybrid systems will continue to be a very prevalent and prominent player in the industry.”

Virtual Instruments, Inc.‘s VP marketing and alliances John Gentry also puts flash at the top of his list, but argues that the hype around the technology is set to calm down. “Adoption will continue, but I think it will become more of a given or standard and users will have to be rational about spend-to-revenue ratios,” he says. “This will provide opportunities for some of the bigger players to gain share.”

Cloud
Gentry also points to the cloud as remaining a key technology for 2016. “From a cloud perspective, I think you’ll start to see the emergence of service level requirements for the customer, as businesses are impacted by the shortcomings of the cloud,” he says. “In particular the commodity or commercial cloud of Amazon and Azure. With Amazon we are starting to see the evolution of some of the services becoming more robust, but we still haven’t seen any move toward service level commitment.” Because of this he sees the need for SLAs or service level commitments from cloud providers as key, especially as more critical workloads and even entire companies move solely to the cloud.

Arseneau also looks to the cloud, and suggests it will continue to be the main focus for most organisations looking for ways to cost effectively and operationally maintain their ever-growing volumes of data.

“Cloud providers will continue to offer a lot of services and continue to grow,” he explains. “The challenge with that is that as you’re seeing a lot of these larger, very computationally intensive workloads, it’s very cost effective to put those workloads into the cloud and have a very core intensive and compute intensive workload running in a cloud environment, but when you start to apply those workloads to very large data sets, the cost of storage in the cloud becomes prohibitive very quickly.”

This, according to Arseneau, will push customers towards a next-generation hybrid cloud, where data is either co-located or on-premise while servicing compute workloads in the cloud. “And that’s going to create a lot of interesting possible consumption models and deployment models that, frankly, we haven’t really conceived of yet,” he says.

Shingled Magnetic Recording (SMR) drives
This relatively new HDD technology is high on Krosnov’s list of storage must-haves in 2016. “The economics of this new class of media are clear already,” he states. “SMR HDDs are nearly half the cost of the closest regular HDD available.” As well as its financial benefits, he suggests another reason users are being drawn towards SMR drives is their ability to keep an archive online – much more attractive than an offline tape archive. He also notes that software stacks and products incorporating them are just coming into place, which sets the technology on the road to success in 2016.
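
The software stacks Krosnov mentions are needed because SMR earns its density by overlapping (shingling) tracks, so host-managed SMR drives only accept sequential writes within each zone. Here is a minimal sketch – ours, not from the article, with a hypothetical zone API – of the bookkeeping such a stack has to do:

    # Illustrative only: host-managed SMR drives expose fixed-size zones whose
    # shingled tracks overlap, so each zone only accepts writes at its current
    # write pointer; random overwrites mean resetting and rewriting the zone.

    class Zone:
        def __init__(self, start, size=256 * 1024 * 1024):  # zone size varies by drive
            self.start = start          # first byte of the zone on disk
            self.size = size
            self.write_pointer = start  # next writable byte (sequential only)

        def write(self, offset, data):
            if offset != self.write_pointer:
                # A conventional HDD would accept this; an SMR zone cannot.
                raise IOError("SMR zone: writes must land on the write pointer")
            self.write_pointer += len(data)  # the real device write would go here

        def reset(self):
            # Reclaiming space invalidates the whole zone, which is why SMR
            # suits append-heavy, cold and archive workloads.
            self.write_pointer = self.start

    zone = Zone(start=0)
    zone.write(0, b"x" * 4096)            # accepted: lands on the write pointer
    try:
        zone.write(8192, b"x" * 4096)     # rejected: skips ahead of the pointer
    except IOError as exc:
        print(exc)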

Software-Defined Storage (SDS)
SDS is always contentious – without a proper definition it’s very easy for vendors to repackage legacy products as software-defined, never mind whether they are truly SDS. But that hasn’t stopped the hype around the technology in 2015. And the reason for that, according to Len Rosenthal, is the great value it offers larger scale, tier two deployments.

“In 2015 SDS was still very much in the evaluation phase, but I expect it to reach the deployment phase in 2016 – at least for non-mission-critical applications,” Rosenthal suggests. And because of this progress, he argues, SDS solutions will continue to grow. There is a downside for him though: with SDS the end user becomes the storage integrator. “In the past they could rely on NetApp, EMC or IBM to do the integration and scalability testing of storage products,” he explains. “But that burden now shifts to the end users who have a ‘do it yourself and save money’ mindset.”

Non-Volatile Memory (NVM)
Often used in flash memory (which has already been tipped as a hot contender for 2016), NVM is a technology that’s hard to imagine being without: being able to retrieve stored information even after a power cycle (turning the device off and back on) is definitely a plus point.

Krosnov sees this technology growing in 2016 with the release of 3D XPoint memory by Intel/Micron and the HP/SanDisk cooperation around memristors. “I expect the first products to hit in 2016,” he predicts. “It will either be a PCIe card or a memory bus device, depending on what Intel decides to release first.”

And the reason NVM will be so popular? “This is a new class of memory/storage, faster than flash, slower than RAM, but non-volatile,” Krosnov replies. “Initially pricing will be higher than NAND flash, but I see that it can potentially replace flash completely in the long run.” And, according to Krosnov, this will turn storage on its head, with NAND flash becoming the new disk and the new non-volatile technology taking the role of flash. “Longer term, this means a new crop of storage vendors focusing on a new storage tier,” he concludes.
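
To make Krosnov’s tier reshuffle concrete, here is a toy placement sketch. The latency figures are rough orders of magnitude as publicly discussed at the time – our illustration, not numbers from the article – showing how a new tier between RAM and flash simply becomes another rung a placement policy can choose:

    # Illustrative only: approximate access latencies circa 2015/16, our own
    # figures, not from the article. The third field marks volatile tiers.
    TIERS = [
        ("DRAM",                 100e-9, True),
        ("NVM (e.g. 3D XPoint)",  10e-6, False),  # the new tier Krosnov describes
        ("NAND flash SSD",       100e-6, False),
        ("HDD",                   10e-3, False),
        ("SMR HDD / tape",        50e-3, False),  # cold/archive tier
    ]

    def cheapest_tier(max_latency_s, must_persist=True):
        """Pick the slowest (roughly the cheapest) tier that still meets a
        latency target - a toy version of tier placement."""
        ok = [(name, lat) for name, lat, volatile in TIERS
              if lat <= max_latency_s and not (must_persist and volatile)]
        return max(ok, key=lambda t: t[1])[0] if ok else None

    print(cheapest_tier(1e-3))   # -> NAND flash SSD
    print(cheapest_tier(50e-6))  # -> NVM (e.g. 3D XPoint)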

Open source
Open Source will continue to proliferate, extend and expand, according to Arseneau. “I think you’ll see a lot more consortia or groups stepping up to provide hardened distributions of open source products and sell them into the marketplace at an attractive price,” he explains. This will attract more and more enterprises to the OpenStack community.

“They’ve had a difficult time cracking the real, mission critical, bet-your-business kind of workloads for a variety of reasons: scale, manageability, supportability,” he continues. “I think as more constituencies step into that space and provide hardened solutions and supported platforms and stacks, the resistance in the enterprise will gradually wear down and you’ll start to see Open Source become more prominent.”

Converged infrastructures
Another hot topic in 2015: converged infrastructure has seen servers, storage devices, networking equipment and software grouped into a single, optimised computing package. Rosenthal suggests this technology’s growth is rooted in a need for easy deployment and manageability, rather than performance and scalability. “I think hyperconverged systems may be short-lived as it’s really just going back to a proprietary systems architecture of the 1980s,” he explains. “It may have some standardised components, but IT managers have told us it’s vendor lock-in all over again – just from a different perspective.”

Arseneau doesn’t agree and argues that the hyper-converged space will continue to grow. “It’s rapid time to market, it’s easy to implement, and it’s easy to manage,” he explains. “It doesn’t, at this point anyway, scale to the point where it can support the very large or the very performance-sensitive, mission-critical workloads at the enterprise, but it’ll continue to improve over time.”

And Arseneau sees more entries into the hyper-converged market. “It’s such a hot space right now with just a few established players dominating it,” he suggests. “All the incumbents and legacy players and mainline suppliers are stepping into the fray and developing their own solutions for their shot at a piece of that rapidly ascending market.” He predicts that will create fragmentation and consolidation, but aggregate growth nonetheless.

Spectra Logic Corporation‘s CTO Matt Starr talks about the effects of convergence. “Converged architecture is going to open storage IT staff up to a higher level of provisioning,” he explains. “There will still be a storage admin at the largest data sites, but more of the mid-sized IT businesses will move to a converged architecture.”

The changing face of storage
As data is recognised as being crucial to business growth, surely it follows that storage will be even higher on the priority list in 2016? Our experts agree that in larger organisations that seems to be the case.

Rosenthal suggests that different types of data will start to be treated differently. “Data will be divided in most larger organisations,” he offers. “Tier one business-critical workloads will remain in house and storage will be treated as a strategic asset to the business. Storage will become a commodity for the remaining application workloads, meaning cloud storage will increasingly become more viable for a broader range of requirements.”

For him, the key will be aligning production application workload performance requirements to storage purchase, configuration and deployment decisions. “Without this knowledge, massive over-provisioning of storage will continue, unnecessarily wasting capital and operating expenses,” he says. “Storage architects and engineers struggle to understand the I/O profiles of their production storage environment. If they have better tools and insight into their production environment they can cost-effectively take advantage of all of the new storage technologies that are evolving and cut their storage costs by 50% or more.”

Starr thinks one of the big challenges for IT teams will be learning to deal with SMR drives. “That’s if they plan to stay with rotating disk technology,” he adds.

Krosnov agrees SMR drives will feature in a completely different storage landscape: “Non-volatile memory and SMR drives will enter as two new tiers. NVM will fit between flash and RAM, displacing the highest end of the flash arrays, and enabling a small segment of new applications. SMR drives will fit between tape and regular HDDs, displacing both to a degree. For me, tape is dead now, and adding SMR drives for cold storage to the mix makes it very dead.”

He also argues that the move towards all-flash is over-hyped; instead he predicts a move away from unified storage. “At the medium scale we see an increasing realisation that you probably need several different systems for different needs,” he states. “The best architecture for an archival system is very different from the best architecture for primary storage. So I expect in 2016 mid-sized storage users will look even more to solve different requirements with different systems.”

This will bring its own pressures though, with complexity – and the need to keep things simple to deploy and operate – a major factor to consider.

Arseneau argues that the growth of data and the emergence of expensive technologies like flash have pushed up the price of storage, and that the industry and its customers have to think of new ways to manage these exploding volumes of data, without breaking the bank. “It all comes back to the importance of the economics, right?” he explains. “The shift to flash is obviously here to stay. It’s not going away, and it’s going to continue to grow as the percentage of the overall storage footprint. But I think data growth will always outstrip the growth of flash. So we need to provide solutions that help enterprises bridge where they are today, and think about what that next generation looks like, whether it’s all-flash or a ‘fill in the blank’ next generation memory technology data centre.”

Whichever technology IT teams eventually go for, Starr indicates it’s going to be an interesting time for major data holders. “Disk capacity growth rates are slowing and capacity needs are not,” he says. So the pressure, as ever, will be to find the right solution.

Gentry is on the same page. “Sheer growth is not slowing down and so storage strategies can’t be monolithic in nature,” he suggests. “There’s going to have to be better classification and curation just to keep pace with the growth without scaling costs at the same rate.”

So 2016 will be the year of the over-arching strategy: where IT teams classify and curate the data, so they can get to the data that’s most important and keep the data that regulation requires. “I think in a similar vein, the ability to leverage big data analytics techniques against all the data available, as opposed to just the business intelligence use case, will become increasingly front and centre in extracting value from the IT infrastructure and from all the data that infrastructure generates,” he continues.

Easing the pain points
Pain points – the bits we all wish would go away. Whether it’s bottlenecks, unplanned downtime or just a lack of scalability, these are the issues that, if sorted out, would make every CIO’s life a little easier.

Rosenthal is determined that 2016 should be the year storage managers get to know their production application workloads, and how these affect storage performance. “Many storage managers lack workload I/O profile information, which is the foundation of performance assurance and cost optimisation,” he explains. “Understanding performance requirements for installed production applications and determining which new architecture or technology is best for those workloads is becoming essential. This is where storage performance analytics solutions offer tremendous value as they provide the workload insight that has been so lacking.”
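
Here is a minimal sketch of the kind of workload I/O profile Rosenthal describes. The trace format and numbers are hypothetical; real data would come from a tool such as blktrace or the array’s own telemetry:

    # Summarise a (hypothetical) I/O trace into a simple workload profile.
    from collections import Counter
    from statistics import mean

    trace = [                      # (operation, block size in bytes, latency in s)
        ("read", 4096, 0.0002), ("write", 4096, 0.0005),
        ("read", 65536, 0.0008), ("read", 4096, 0.0001),
    ]
    duration_s = 1.0               # capture window of the trace

    reads = [t for t in trace if t[0] == "read"]
    profile = {
        "iops": len(trace) / duration_s,
        "read_pct": 100 * len(reads) / len(trace),
        "block_sizes": Counter(size for _, size, _ in trace),
        "avg_latency_ms": 1000 * mean(lat for _, _, lat in trace),
    }
    print(profile)

The read/write mix, block-size distribution and latency in a profile like this are exactly the evidence needed to decide which workloads justify flash and which can stay on HDDs.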

StorPool’s Ivanov suggests that complexity is a major pain point – and that it’s only getting worse as increasing numbers of technologies are added to the mix. “Hyper-convergence promises to address these issues to a large degree,” he claims. “Cloud operators also help a lot by hiding the complexity of their infrastructures behind an easy-to-use web interface.”

Arseneau agrees. “Most organisations are running a varied mix of workloads,” he says. “They’ve not standardised on a particular or singular application platform, so that means they often are supporting multiple storage devices, multiple server platforms, and multiple cloud providers.” According to him, this complexity increases risk and cost, and it makes training staff and giving them the support they need more complicated and costly.

And he believes that cost is high on the list of CIO headaches. “It’s no surprise that if you look at any industry survey or you talk to any analyst firm, they’ll tell you that people are struggling with cost,” he explains. “They’re dealing with ever-increasing volumes of data, and trying to manage, manipulate, protect and move all these workloads, and all these increasing data volumes, and do so at the same or lower cost.”

And this, Arseneau argues, is putting tremendous cost pressure on IT organisations. They’re looking at the cloud, they’re looking at tiering solutions, and they’re looking at data reduction technologies as ways to physically reduce the footprint of data that they’ve got to store, so they can keep the economics under some degree of control.

Gentry agrees that cost is a major pain point for IT teams, and will be high on the agenda in 2016. “There is cost-driven pressure coming from the top which is clashing with knowledge at the IT level,” he says. “The IT team is being asked to meet certain requirements with technology that wasn’t built for their needs. And there’s so much hype around flash as a better alternative, or software defined as the answer, or iSCSI because it can run on 10Gb Ethernet networks, but whether they’ll work or not, they’re being forced on IT.”

He argues that taking this approach to storage procurement could cost more in the long run. “In the case of the company that built out an end-to-end FCoE infrastructure, the attempt was to save millions of dollars and what they did was waste hundreds of millions,” he explains. “So when XYZ bank decides to move the online banking application to iSCSI, this goes down for several days and no one can get to their accounts. It’s a conflict between top line revenue and bottom line cost.”

So 2016 looks like an exciting year – and not just for the main technologies like flash and cloud. Keep an eye out for emerging technologies too: they might be the answer to the big cost challenges of the coming 12 months.
