This article was written by Cleversafe, Inc.:
Why RAID is Dead for Big Storage
Business case for why IT Executives are making
a strategic shift from RAID to Information Dispersal
Data is exploding, growing 10X every five years. In 2008, IDC estimated that over 800EB of digital content existed in the world, and projected that by 2020 that number would grow to over 35,000EB. What’s fueling the growth? Unstructured digital content.
Over 90% of all new data created in the next five years will be unstructured digital content, namely video, audio and image objects. The storage, archive and backup of large numbers of digital content objects is quickly creating demand for multi-petabyte storage systems.
Current storage systems based on RAID arrays were not designed to scale to this type of data growth. As a result, the cost of RAID-based storage systems increases as the total amount of storage increases, while data protection degrades, resulting in permanent digital asset loss. With the capacity of storage devices today, RAID-based systems cannot protect data from loss. Most IT organizations using RAID for big storage incur additional costs to copy their data two or three times to protect it from inevitable data loss.
Information Dispersal, a new approach for the challenges brought on by big data, is cost-effective at the petabyte and beyond levels for digital content storage. Further, it provides extraordinary data protection, meaning digital assets are preserved essentially forever. Executives who make a strategic shift from RAID to Information Dispersal can realize cost savings in the millions of dollars for their enterprises with at least one petabyte of digital content.
Big data is a term that is generally over-hyped and largely misunderstood in today’s IT marketplace. IDC has put forth the following definition: big data technologies describe a new generation of technologies and architectures designed to economically extract value from large volumes of a wide variety of data by enabling high-velocity capture, discovery and/or analysis.
Unfortunately, this definition does not apply to traditional storage technologies based on RAID, despite the marketing efforts of some large, established storage providers. RAID and replication inherently add 300% to 400% overhead in raw storage requirements, and that overhead only balloons as storage scales into the PB range and beyond.
Faced with the explosive growth of data plus challenging economic conditions and the need to derive value from existing data repositories, executives are required to look at all aspects of their IT infrastructure to optimize for efficiency and cost-effectiveness. Traditional storage based on RAID fails to meet this objective.
Why RAID Fails at Scale
RAID schemes are based on parity, and at root, even the strongest common scheme (RAID-6) cannot recover data if more than two drives in an array fail simultaneously. The statistical likelihood of multiple drive failures has not been an issue in the past. However, as drive capacities grow beyond the terabyte range and storage systems grow to hundreds of terabytes and petabytes, multiple drive failures are now a reality.
Further, drives aren’t perfect: typical SATA drives have a published bit rate error (BRE) of 1 in 10^14, meaning that, on average, one out of every 100,000,000,000,000 bits read will be unrecoverable. Doesn’t seem significant? In today’s big storage systems, it is.
The likelihood of having one drive fail and then encountering a bit rate error while rebuilding from the remaining RAID set is high in real-world scenarios. To put this into perspective: when reading 10TB, the probability of encountering an unreadable bit is better than even (about 55%), and when reading 100TB, it is a near certainty (99.97%). (i)
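The footnoted formula behind those percentages is simple enough to check directly. A minimal sketch, assuming the quoted BRE of one unrecoverable error per 10^14 bits:

```python
# Probability of hitting at least one unrecoverable bit error while
# reading N terabytes, given an error rate of 1 per 10^14 bits:
#   P = 1 - (1 - 10^-14) ^ (bits read)

def p_unrecoverable_read(terabytes, bre=1e-14):
    bits = terabytes * 8e12  # 1TB = 8 * 10^12 bits
    return 1.0 - (1.0 - bre) ** bits

print(f"{p_unrecoverable_read(10):.2%}")   # ~55% for 10TB
print(f"{p_unrecoverable_read(100):.2%}")  # ~99.97% for 100TB
```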
RAID advocates tout its data protection capabilities based on models using vendor-specified MTTF values. In reality, drive failures within a batch of disks are strongly correlated over time: if one disk in a batch has failed, there is a significantly elevated probability that another disk will fail. (ii)
Meet Replication, RAID’s Expensive Friend
As a result, IT organizations address the big data protection shortcomings of RAID by using replication, a technique of making additional copies of data to avoid unrecoverable errors and lost data. However, those copies add additional costs: typically 133% or more additional storage is needed for each additional copy, after including the overhead associated with a typical RAID-6 configuration. (iii)
Organizations also use replication to help with failure scenarios, such as a location failure, power outages, bandwidth unavailability, and so forth. Having seamless access to big data is key to keeping businesses running and driving competitive advantages.
As storage grows from the terabyte to petabyte range, the number of copies required to keep the data protection constant increases. This means the storage system will get more expensive as the amount of data increases.
RAID and Replication Raw Storage Requirements
IT Executives should realize their storage approach has failed once they are replicating data three times, as it is clear that the replication band-aid is no longer solving the underlying problem associated with using RAID for data protection.
Organizations may be able to power through making three copies using RAID-5 for systems under 32TB; however, once they have to make four copies at around 64TB, the approach starts to become unmanageable. RAID-6 hits the same wall around two petabytes, where the three copies required are neither manageable by a typical IT staff nor affordable at such a scale. (iv)
Key Considerations for RAID Successors
In looking for a better approach to make big storage systems reliable, there are key considerations that must be addressed:
Large-scale data protection – It is much easier to design a highly reliable small storage system than to design a highly reliable large storage system. The problem is that given the overall growth rate of data, today’s smaller systems are likely to grow to be large systems in very short order. IT organizations must look for a solution that is as reliable storing terabytes as it will be in the petabyte and exabyte range. Otherwise, the storage system will eventually hit a failure wall.
Handles multiple simultaneous failures – RAID-6, based on parity, cannot recover from more than two simultaneous failures, or from two failures plus a bit rate error encountered before one of the failed drives is rebuilt. IT organizations must look for a solution that can be configured to handle multiple simultaneous failures matching realistic outage scenarios in big data environments, provide seamless availability and automatically recover itself.
Large-scale cost-effectiveness – Keeping reliability constant, RAID gets more expensive as the amount of data increases. IT organizations must find a solution that doesn’t multiply raw storage requirements as it grows, to avoid a storage problem that no amount of money can solve. The right solution must also allow the number of nines of reliability to be set versus having to live with the limitations of what is affordable.
Information Dispersal – A Better Approach
Information Dispersal helps IT organizations address big storage challenges by reducing storage costs, power consumption and the footprint of storage, as well as streamlining IT management processes.
Information Dispersal Basics – Information Dispersal Algorithms (IDAs) separate data into unrecognizable slices of information, which are then distributed – or dispersed – to storage nodes in disparate storage locations. These locations can be situated in the same city, the same region, the same country or around the world.
Each individual slice does not contain enough information to reveal the original data. In the IDA process, the slices are stored with a few extra bits of data, which enables the system to retrieve all of the data from only a pre-defined subset of the slices on the dispersed storage nodes.
Because the data is dispersed across devices, it is resilient against natural disasters or technological failures, like drive failures, system crashes and network failures. Because only a subset of slices is needed to reconstitute the original data, there can be multiple simultaneous failures across a string of hosting devices, servers or networks, and the data can still be accessed in real time.
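To make the k-of-n idea concrete, here is a toy dispersal sketch in the spirit of Rabin's Information Dispersal Algorithm – not Cleversafe's actual implementation – using the prime field GF(257). With n = 5 slices of which any k = 3 suffice, raw storage expands by only 5/3, yet any two slices can be lost:

```python
# Toy Information Dispersal Algorithm (IDA) sketch over GF(257).
# Each group of k data bytes becomes the coefficients of a polynomial;
# slice i stores the polynomial's value at x = i + 1. Any k slices
# determine the polynomial (and thus the data); fewer reveal nothing whole.

P = 257  # prime modulus; byte values 0..255 fit in GF(257)

def disperse(data, k, n):
    """Split data into n slices; any k of them can rebuild it."""
    data = list(data) + [0] * (-len(data) % k)  # pad to a multiple of k
    slices = [[] for _ in range(n)]
    for g in range(0, len(data), k):
        coeffs = data[g:g + k]  # k bytes = one polynomial's coefficients
        for i in range(n):
            x = i + 1
            y = sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            slices[i].append(y)
    return slices

def reconstruct(slice_pairs, k, length):
    """slice_pairs: list of (slice_index, slice_values), at least k of them."""
    xs = [i + 1 for i, _ in slice_pairs[:k]]
    cols = list(zip(*[vals for _, vals in slice_pairs[:k]]))
    out = []
    for ys in cols:
        # Solve the k-by-k Vandermonde system mod P by Gauss-Jordan
        m = [[pow(x, j, P) for j in range(k)] + [y] for x, y in zip(xs, ys)]
        for col in range(k):
            piv = next(r for r in range(col, k) if m[r][col])
            m[col], m[piv] = m[piv], m[col]
            inv = pow(m[col][col], P - 2, P)  # modular inverse of the pivot
            m[col] = [v * inv % P for v in m[col]]
            for r in range(k):
                if r != col and m[r][col]:
                    f = m[r][col]
                    m[r] = [(a - f * b) % P for a, b in zip(m[r], m[col])]
        out.extend(m[j][k] for j in range(k))  # recovered coefficients
    return bytes(out[:length])

msg = b"dispersed storage demo"
slices = disperse(msg, k=3, n=5)
# Lose any two slices (simulating two simultaneous failures):
survivors = [(0, slices[0]), (2, slices[2]), (4, slices[4])]
print(reconstruct(survivors, k=3, length=len(msg)))  # b'dispersed storage demo'
```

Production systems add authentication, padding, and careful slice encoding, but the k-of-n property shown here is the core of the approach described above.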
Suppose an organization needs one petabyte of usable storage, and has the requirement for the system to have six nines of reliability – 99.9999%. Here’s how a system built using Information Dispersal would stack up against one built with RAID and Replication.
To meet the reliability target, the Dispersal system would slice the data into 16 slices, stored with a few extra bits of information, such that only 10 slices are needed to perfectly recreate the data – meaning the system could tolerate six simultaneous outages or failures and still provide seamless access to the data. The raw storage required would be 1.6 times (16/10) the usable storage, totaling 1.6PB.
To meet the reliability target for one petabyte with RAID, the data would be stored using RAID-6 and then replicated, possibly using a combination of disk and tape. The RAID-6 configuration adds 33% overhead per copy, and at this scale two full copies are required, for a raw storage of roughly 2.7 times the usable storage, totaling 2.7 petabytes. (At larger scales, three or even four copies become necessary.)
Comparing these two solutions side by side for one petabyte of usable storage, Information Dispersal requires only 60% of the raw storage of RAID-6 and replication on disk, which translates to 60% of the cost. (v)
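The raw-storage arithmetic in this comparison reduces to two expansion factors – n/k for dispersal, and (data + parity)/data per replica for RAID. A quick sketch, using the configurations quoted in the text:

```python
# Raw-storage expansion: usable capacity times these factors gives raw capacity.

def dispersal_expansion(n, k):
    return n / k  # e.g. a 16-slice, 10-needed configuration

def raid_replication_expansion(data_drives, parity_drives, copies):
    return (data_drives + parity_drives) / data_drives * copies

print(dispersal_expansion(16, 10))                 # 1.6x
print(raid_replication_expansion(6, 2, copies=2))  # ~2.67x (RAID-6, two copies)
print(raid_replication_expansion(6, 2, copies=3))  # 4.0x (RAID-6, three copies)
```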
When comparing the raw storage requirements, it is apparent that both RAID-5 and RAID-6 require more raw storage per terabyte as the amount of data increases. The beauty of Information Dispersal is that as storage increases, the cost per unit of storage doesn’t increase while meeting the same reliability target.
RAID and Replication vs. Dispersal Raw Storage Requirements
(Six nines of reliability – 99.9999% – over 10 years)
To translate into real world costs, here’s an example of storing one petabyte with six nines of reliability (99.9999%). This assumes the cost per terabyte is a commodity price, the same for either a RAID-6-and-replication solution or a Dispersal solution, and set at $1,000. (vi)
It quickly becomes apparent that an organization can save millions of dollars in the petabyte range, and that dispersal is 40% less expensive.
Now another real world scenario: suppose the storage is four petabytes, with six nines of reliability (99.9999%), and assume the cost per terabyte is again set at $1,000.
In this example, it is clear that RAID-6 and replication are more expensive from a capital expenditure perspective, and further, more costly to manage since there are three copies of data. In this example, Dispersal is 60% less expensive.
Clearly 40% to 60% less raw storage results in not only hardware cost efficiencies, but also additional savings. Using Information Dispersal, an organization realizes cost savings in power, space and personnel costs associated with managing a big storage system.
Attainable Data Protection
Executives begin to fear permanent data loss or significant impact to their ability to analyze the big data under their watch as systems swell to petabytes. Just how likely is data loss using RAID versus Information Dispersal?
When looking at the number of years without data loss at a 99.99999% confidence level, RAID schemes on their own (that is, without replication) clearly degrade as the storage amount increases, to the point where data loss is imminent.
Years Without Data Loss
RAID-5 clearly wasn’t designed for protecting terabytes of data, as data loss becomes near certain almost immediately. RAID-6 can prevent data loss for several years at smaller storage amounts; however, at just 512TB, there is a high probability of data loss in less than a year.
Information Dispersal doesn’t even appear on the chart, because even for a storage amount as large as 524K TB, the expected time without data loss is not within anyone’s lifetime. (It is theoretically over 79 million years.) Clearly, Information Dispersal offers superior data protection and availability as systems scale to larger capacities, and doesn’t incur the expense of replication to increase data protection for big data environments.
Avoiding a Resume-Generating Event
Big data is here to stay. If an organization relies on RAID and replication, its storage systems will hit a wall: they will either become cost prohibitive or force the organization to sacrifice data protection to store all of its data, and the organization will inevitably experience data loss. At that point, many IT executives face the resume-generating event of having to admit that the data they are responsible for managing is not available for analysis or processing, and that the enterprise is no longer competitive.
Some IT executives may be in denial: "My storage is only 40TB, and I have it under control, thank you." Consider a storage system growing at the average rate of 10 times in five years per IDC’s estimation. A system of 40TB today will be 400TB in five years, and four petabytes within just 10 years. This illustrates that a system in the terabyte range that is currently using RAID will most likely start to fail within the executive’s tenure.
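IDC’s 10X-in-five-years rate compounds quickly, as a one-liner shows:

```python
# Project storage growth at 10x every five years (IDC's estimate).

def projected_tb(start_tb, years, growth=10, period=5):
    return start_tb * growth ** (years / period)

print(projected_tb(40, 5))   # 400.0 TB in five years
print(projected_tb(40, 10))  # 4000.0 TB (4PB) in ten years
```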
Expected Data Growth Rates
Savvy IT executives will make a shift to Information Dispersal sooner, realizing it will be easier to migrate their data in a smaller range to the new technology. Further, they will realize significant cost savings within their tenure.
Cleversafe’s Dispersed Storage Platform offers advantages over traditional storage systems relying on RAID and replication, including:
- Big data protection – Designed to specifically address big data protection needs, Cleversafe’s Dispersed Storage Platform can meet storage needs at the petabyte level and beyond today.
- Ability to handle multiple simultaneous failures – Interested in planning for multiple simultaneous failures beyond RAID’s limit of two? Information Dispersal can handle as many simultaneous failures as the IT organization chooses to configure. One data center has a power outage, a second location has bandwidth connectivity issues, a drive hits a bit rate error, and another location is unavailable due to routine maintenance? With Information Dispersal, big data is still seamlessly available.
- Large-scale cost effectiveness – Information Dispersal is cost effective because it is not relying on copies of data to increase data protection, so the ratio of raw storage to usable storage remains close to constant even as the data increases. Finally, a solution that doesn’t get more expensive per terabyte as the storage system grows.
(i) Calculated as the complement of the probability of not encountering a bit rate error across all bits read.
- For 10TB: 1 – ((1 – 10^-14)^(10*(8*10^12)))
- For 100TB: 1 – ((1 – 10^-14)^(100*(8*10^12)))
(ii) Based on research report Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you? Bianca Schroeder Garth A. Gibson, Computer Science Department, Carnegie Mellon University, Usenix, 2007.
(iii) For RAID-6 (6+2), assumed array configuration is 6 data drives, and 2 parity drives, resulting in 33% overhead.
(iv) The first three figures rely on the same underlying model and assumptions:
- Disk drives are 1TB.
- An annual failure rate (AFR) for each drive is set to 5% based on Google’s research.*
- For RAID-5, an array is configured with 5 data drives, and 1 parity drive.
- For RAID-6, an array is configured with 6 data drives, and 2 parity drives.
- Calculation for a drive’s reliability: (1-AFR).
- Calculation for the reliability of a single array: (drive reliability)^(number of drives)
- Calculation for the reliability of multiple arrays: (array reliability)^(number of arrays)
- Disk failure rebuild rate: 25MB/s
- Disk service time (hours to replace with new drive): 12 hours
- Calculations do not factor in bit rate errors (BRE), which would increase the storage cost for RAID-5 and 6 due to additional replication required. Dispersal would not increase because in the event of an encountered bit rate error, Dispersal has many more permutations than RAID-5 or 6 in which to reconstruct the missing bit, as well as lower probability of six simultaneous failures.
* Google’s research on data from a 100,000-disk-drive system showed that disk manufacturers’ published annual failure rates (AFR) understate the failure rates experienced in the field; Google’s finding was an AFR of 5%. Failure Trends in a Large Disk Drive Population, Eduardo Pinheiro, Wolf-Dietrich Weber and Luiz André Barroso, Google Inc., Usenix 2007.
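Taken at face value, the footnote’s building-block formulas are straightforward to compute. Note this simplified reading counts an array as failed when any drive fails, ignoring the parity tolerance and the rebuild-rate and service-time parameters the footnote lists, so it is conservative:

```python
# Reliability building blocks from footnote (iv), computed literally.

AFR = 0.05  # annual failure rate per drive, per Google's field data

drive_reliability = 1 - AFR

def array_reliability(num_drives):
    # footnote formula: (drive reliability) ^ (number of drives)
    return drive_reliability ** num_drives

def system_reliability(num_drives, num_arrays):
    # footnote formula: (array reliability) ^ (number of arrays)
    return array_reliability(num_drives) ** num_arrays

print(f"RAID-5 (5+1) array, one year: {array_reliability(6):.3f}")
print(f"RAID-6 (6+2) array, one year: {array_reliability(8):.3f}")
print(f"10 RAID-6 arrays, one year:   {system_reliability(8, 10):.3f}")
```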
(v) Since disk drives are commodities, this model assumes the same cost per gigabyte of raw storage for a RAID-6 system and a dispersed storage system.
(vi) This is a hypothetical price per terabyte. Based on current research, it would be highly competitive in the market. Readers may adjust the math by using a different price per terabyte as desired.