
Pillar Posted SPC-1 Result

On the blog of CEO Mike Workman

Here is what you can read on the blog of Mike Workman, CEO of Pillar Data Systems, in an article dated April 22, 2011:

On January 13, 2009 Pillar posted its first SPC-1 Result. While we were pleased with how our SPC-1 Result stacked up against the competition, I lamented in my blog at the time that it was temporary, as these things ought to be. Technology moves on, as it should.

There are two issues and perhaps a third that are of primary import when deciding to do a benchmark:

  1. They come at a material cost (especially to smaller companies).
  2. You need to post respectable numbers, else why bother?
  3. You have to realize that once you post numbers, you are motivating others to best your results, and therefore you need to keep upping your game.

So the SPC-1 Result we posted Tuesday shows:

  • SPC-1 IOPS: 70,102.27
  • SPC-1 Price-Performance: $7.32/SPC-1 IOPS
  • SPC-1 LRT: 2.53 milliseconds
  • Total ASU Capacity: 32,000.000 GB
  • Data Protection Level: Protected (Mirroring)
  • Total TSC Price (including 3-year maintenance): $513,112
  • Submission Identifier: A00104

(Current as of April 20, 2011)

In our SPC-1 Full Disclosure Report, you can find additional details of the test result:

  • 10.02 millisecond average response (@70,102.27 SPC-1 IOPS)
  • 312 spindles all-in

The priced storage configuration includes an application-available, RAID-10 protected capacity of 42,858.154 GB. Divide the Priced Storage Configuration’s total price of $513,112 by that capacity and you get a price of $11.97 per GB.
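Both derived prices follow directly from the disclosed figures; a quick sketch of the arithmetic (all numbers taken from the result summary above):

```python
# Figures from the SPC-1 result summary and Full Disclosure Report above.
total_price_usd = 513_112           # Total TSC Price, incl. 3-year maintenance
protected_capacity_gb = 42_858.154  # application-available RAID-10 protected capacity
spc1_iops = 70_102.27               # reported SPC-1 IOPS

price_per_gb = total_price_usd / protected_capacity_gb
price_per_iops = total_price_usd / spc1_iops

print(f"${price_per_gb:.2f}/GB")           # ~$11.97 per GB
print(f"${price_per_iops:.2f}/SPC-1 IOPS") # ~$7.32 per SPC-1 IOPS
```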

So how does this stack up? For similarly sized systems our most recent SPC-1 Result is excellent. But this is the CEO of a storage company talking about his SPC-1 Result the day after it’s posted … so this is what I am supposed to say. It is true, but our relative position is also ephemeral.

What Matters
There are a few points that merit discussion, and not all of them are obvious from the usual headline metrics. Specifically, the latency curves contained in the SPC-1 Results are important, but few people look at them:

[Figure: SPC-1 latency curves (pillar_spc1_1)]

The response times listed above come from the SPC-1 Full Disclosure Reports posted by each of the Test Sponsors. I used the reported SPC-1 IOPS and corresponding Average Response Time, as of 4/20/2011, for each of the SPC-1 Results:

[Figure: SPC-1 IOPS vs. average response time by Test Sponsor (pillar_spc1_2)]

Pillar had been criticized in a couple of corners a few months ago for having a sub-competitive latency profile (in a comparison of a 2010 system to a 2008 system). The argument goes that the shape of the latency curve is important, and I buy into that argument. We have dramatically improved our results over the last two years, but we haven’t kept posting SPC-1 Results as we released new software, firmware, and a dozen hardware improvements across 3 Slammer releases in the interim; our bad. So our current SPC-1 Result is far more attractive than the one from 2 years ago.

While we made a >50% improvement in our latency metric at top load (1), I would argue that there’s more to the story. 100% load latency numbers are important, but systems usually should not be operated at that load. As we have discussed before, robust failover and failback in any system require average loads of 50% or less. Our latency number at that point is 3.75 milliseconds, 32% better (lower) than it was two years ago (2).

Which brings us to another point: utilized capacity. Since seek time varies as the square root of the seek length, the more disk surface you cover, the slower your system will measure out. We used far less unused space than the SPC-1 specifications allow in order to make the point that we can run at 80% utilization and still provide best-of-breed performance. Like many other vendors, we configured the ASUs in RAID-10 format to optimize performance per spindle. Clearly, RAID 10 is not universally a great idea.
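As a rough illustration of that square-root relationship (a toy model, not a real drive seek profile), the fraction of the platter surface in use bounds seek distances, and seek time grows with the square root of that fraction:

```python
import math

def relative_seek_time(utilized_fraction):
    """Toy model: seek time relative to using the full surface (1.0),
    assuming seek time scales as the square root of seek distance."""
    return math.sqrt(utilized_fraction)

# Covering less surface (short-stroking) cuts seek time; 80% utilization
# still keeps seek times close to the full-surface worst case.
for util in (0.2, 0.5, 0.8, 1.0):
    print(f"{util:.0%} of surface -> {relative_seek_time(util):.2f}x seek time")
```

The point being made in the text is that posting a result at 80% utilization forgoes most of the short-stroking advantage that a lightly filled configuration would enjoy.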

The best RAID configuration depends heavily on the problem you are solving (the read/write ratio). In discussing the crappy IBM XIV architecture, which forces RAID X (essentially RAID 10, for the less Roman-numerally inclined), I have previously noted that while RAID X is good for write performance, its capacity efficiency is a horrific price to pay for many applications.

Perspective
While we are proud of these numbers, we will continue to work on improving them as time passes, as others will. While the Axiom performs well on the SPC-1 workload, it also excels at a number of other typical customer workloads. So the question is why even post at all, and who gives a damn about these numbers? My pals over at EMC would have echoed that except that after many years they have recently had a change of heart and decided to join the SPC.

For Pillar, posting SPC-1 Results serves our customers in several ways. First, while not everyone’s workload resembles a transaction processing workload like SPC-1, system improvements that come out of the benchmarking effort benefit a wide variety of applications. Second, benchmark results allow buyers to compare similar configurations and be confident that they are not getting a system which is a serious underperformer. Third, if a vendor touts the performance advantages of premium priced systems or options, wouldn’t you like to see independently-verified confirmation? Look out for the systems and companies that have elected not to post anything.

There is one really interesting phenomenon which doesn’t really apply to Pillar but does apply to some other storage providers: multi-platform performance overlap. Essentially, a company like EMC can be averse to benchmarking because the benchmarks show that its high-end systems are either worse than, or only marginally better than, its less costly mid-range systems. If you want to see a horrific sales meeting, watch a sales team try to explain why the pricey system is outperformed by the cheaper one; it’s ugly.

With the single-platform Axiom we don’t have the issues faced by vendors whose multiple platforms don’t scale linearly in price/performance. So, we look great today, and we will look great at whatever size you buy, and as you scale the Axiom out and up.

Of course all storage vendors have stuff in the works to improve their numbers. The essence then cannot be who beats whom at the moment, but rather who is willing to play and stay competitive. This refuse-to-brag attitude will ruin a lot of marketing folks’ day, but it is honest, I think.

QoS, Scale Up, Scale Out
Pillar’s great SPC-1 Result with the Axiom 600 Series 3 showcases the advantages of our cache, distributed hardware RAID, and multiple-Slammer architecture. Beyond our benchmarked performance, the Pillar Axiom offers additional advantages, such as QoS to support multiple concurrent applications, and performance under fault. The good news for customers is that while the full set of unique Axiom features doesn’t shine through in the SPC-1 benchmark, it absolutely will shine through for them in real life.

So, as the saying goes, your mileage may vary. Using Pillar’s QoS on our top-performing platform puts you on far more solid ground than you would be on with a competitor’s machine. In our customer satisfaction surveys (run by independent parties), the Axiom’s performance always ranks among the top 3 reasons why it is the customer’s storage platform of choice.

(1) Our 2009 SPC-1 Result had an average response time of 20.92 milliseconds at the 100% load throughput of 64,992.77 SPC-1 IOPS, while our 2011 SPC-1 Result has an average response time of 10.02 milliseconds at the 100% load throughput of 70,102.27 SPC-1 IOPS.

(2) Our 2009 SPC-1 Result had an average response time of 5.50 milliseconds at the 50% load throughput of 32,490.23 SPC-1 IOPS, while our 2011 SPC-1 Result has an average response time of 3.75 milliseconds at the 50% load throughput of 35,050.15 SPC-1 IOPS.
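The percentage improvements cited in the body, >50% at top load and 32% at 50% load, follow from the response times in footnotes (1) and (2); a quick check:

```python
# Response times (ms) from the 2009 and 2011 SPC-1 Full Disclosure Reports,
# as quoted in footnotes (1) and (2).
rt_2009_full, rt_2011_full = 20.92, 10.02  # at 100% load
rt_2009_half, rt_2011_half = 5.50, 3.75    # at 50% load

improvement_full = 1 - rt_2011_full / rt_2009_full
improvement_half = 1 - rt_2011_half / rt_2009_half

print(f"100% load latency improvement: {improvement_full:.0%}")  # ~52%, i.e. >50%
print(f"50% load latency improvement:  {improvement_half:.0%}")  # ~32%
```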
