Software-Defined Data Centers Have Arrived – Sort of

By Jerome M. Wendt, president and lead analyst, DCIG, Inc. (July 31, 2017)


Today, more than ever, organizations are looking to move to software-defined data centers. Whether they adopt software-defined storage, networking, computing, servers, security, or all of the above as part of this initiative, they are starting to conclude that a software-defined world trumps the existing hardware-defined one. While I agree with this philosophy in principle, organizations need to dip a toe carefully into the software-defined waters rather than dive in head-first.

The concept of software-defined data centers is really nothing new. The topic has been discussed for decades and was the subject of one of the first articles I ever published, 15 years ago (though at that time the technology was more commonly called virtualization). What is new is that the complementary, supporting set of hardware technologies needed to enable the software-defined data center now exists.

More powerful processors, higher-capacity memory, higher-bandwidth networks, scale-out architectures, and other technologies have each contributed, in part, to making software-defined data centers a reality. The recent availability of SSDs was perhaps the technology that ultimately took this concept from the drawing board into production. SSDs reduce data access times from milliseconds to microseconds, helping to remove one of the last remaining performance bottlenecks.
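To put that latency shift in rough numbers, here is a back-of-the-envelope sketch of how access time caps the serialized I/O rate a single thread can drive. The latencies are ballpark figures assumed for this example, not measurements of any particular device.

# Rough arithmetic only; the latencies below are assumed, illustrative figures.
hdd_latency_s = 5e-3    # ~5 ms for a random read from a disk drive
ssd_latency_s = 100e-6  # ~100 microseconds for a flash SSD

for name, latency in (("HDD", hdd_latency_s), ("SSD", ssd_latency_s)):
    # With one outstanding I/O at a time, throughput is bounded by 1/latency.
    print(f"{name}: ~{1 / latency:,.0f} serialized IOPS")

At those assumed figures, the SSD sustains roughly 50 times the serialized I/O rate, which is why flash removed a bottleneck that faster processors and networks alone could not.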

Yet as organizations look to replace their hardware-defined infrastructure with a software-defined data center, they must still proceed carefully. Hardware-defined infrastructures may currently cost a lot more than software-defined data centers, but they offer distinct benefits that software-defined solutions are still hard-pressed to match.

For instance, the vendors who offer the purpose-built appliances for applications, backup, networking, security, or storage used in hardware-defined infrastructures typically provide hardware compatibility lists (HCLs). Each HCL names the applications, OSs, firmware, and other components with which the appliance is certified to interact and for which the vendor will provide support. Deviate from that HCL and your ability to get support suddenly gets sketchy.
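To make the HCL idea concrete, here is a minimal sketch of the kind of check an HCL implies. The component names and version strings are entirely hypothetical, invented for illustration rather than taken from any vendor's actual list.

# Hypothetical HCL data; all names and versions are made up for illustration.
SUPPORTED_HCL = {
    "hba_firmware": {"4.2.1", "4.2.3"},
    "multipath_driver": {"2.8.0"},
    "array_os": {"9.1", "9.2"},
}

deployed = {                      # what the environment is actually running
    "hba_firmware": "4.1.9",
    "multipath_driver": "2.8.0",
    "array_os": "9.2",
}

for component, version in deployed.items():
    if version not in SUPPORTED_HCL.get(component, set()):
        print(f"UNSUPPORTED: {component} {version} is not on the HCL")

A real HCL spans vastly more components and version combinations than this, which is exactly where the trouble starts.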

Even HCLs are problematic, because enterprise environments present an impossibly large number of possible configurations that vendors can never thoroughly vet and test.

This has led to the emergence of converged infrastructures. With these, vendors guarantee that all components in the stack (applications, servers, network, and storage, along with their firmware and software) are tested and certified to work together. So long as organizations use the vendor-approved and tested hardware and software components in this stack and keep them in sync with the vendor's specifications, they should have a reliable solution.

Granted, obtaining solutions that satisfy these converged infrastructure requirements costs more. But for many enterprises, paying the premium was worth it. This testing helps to eliminate situations like one I experienced many years ago.

In the middle of a system-wide SAN upgrade, we discovered that an FC firmware driver on all the Unix systems could not detect the LUNs on the new storage systems. Upgrading this driver required nearly two months of work, with individuals coming in every weekend to apply the fix across all these servers, before we could implement and use the new storage systems.

Software-defined data centers may still encounter these types of problems. Even though the software itself may work fine, it cannot account for all the hardware in the environment or guarantee interoperability with it. Further, since software-defined solutions tend to go into low-cost and/or rapidly changing environments, there is a good possibility that the HCLs and/or converged solutions they do offer are limited in scope and have not been subjected to the extensive testing that production environments require.

The good news is that software-defined data centers are highly virtualized environments. As such, copies of production environments can be made and tested very quickly, as the sketch below illustrates. This flexibility mitigates the danger of creating unsupported, untested production environments. It also provides organizations an easier, faster means to fail back to the original configuration should the new configuration not work as expected.
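As a rough illustration of that clone-test-failback pattern, here is a minimal sketch. The Hypervisor class and its snapshot, clone, and revert calls are hypothetical stand-ins for whatever virtualization API an organization actually runs, not any real vendor's SDK.

# All names below are hypothetical; this models the workflow, not a real API.
class Hypervisor:
    def snapshot(self, vm: str) -> str:
        print(f"Snapshot of {vm} created")
        return f"{vm}@pre-change"

    def clone(self, vm: str) -> str:
        print(f"Test clone of {vm} created")
        return f"{vm}-test"

    def revert(self, vm: str, snap: str) -> None:
        print(f"{vm} reverted to {snap}")

def apply_change(vm: str) -> bool:
    # Stand-in for the real upgrade plus validation tests.
    print(f"Applying and validating change on {vm}")
    return vm.endswith("-test")  # pretend it passes in test but fails in prod

hv = Hypervisor()
test_vm = hv.clone("prod-db")          # rehearse the change on a copy first
if apply_change(test_vm):
    snap = hv.snapshot("prod-db")      # capture a failback point
    if not apply_change("prod-db"):
        hv.revert("prod-db", snap)     # fail back to the original config

The point is not the particular API but the discipline: rehearse on a copy, capture a failback point before touching production, and keep the revert path tested.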

But here’s the catch. While software-defined data centers provide flexibility, someone must still possess the skills and knowledge to make the copies, perform the tests, and do the failbacks and recoveries if necessary. Further, software-defined data centers eliminate neither their reliance on underlying hardware components nor their reliance on the individuals who create and manage them.

Since interoperability with the hardware is not a given, and since people can be unpredictable or unreliable from time to time, the whole system could go down or function unpredictably without a clear path to resolution. And if an organization encounters interoperability issues, whether initially or at some point in the future, the situation gets thornier still.

Organizations may have to ask and answer questions such as:

1. When the vendors start finger-pointing, who owns the problem and who will fix it?
2. What is the path to resolution?
3. Who has tested the proposed solution?
4. How do you back out if the proposed solution goes awry?

Software-defined data centers are rightfully creating a lot of buzz, but they are still not the be-all and end-all. While the technology now exists at every level of the data center to make this architecture practical to deploy, and to let companies realize significant hardware savings in their data center budgets, the underlying best practices and support needed to successfully implement software-defined data centers are still playing catch-up. Until those are fully in place, or you have full assurances of support from a third party, organizations are advised to proceed with caution on any software-defined initiative, data center or otherwise.
