
PCIe Gen3 Enables Smaller, Faster Server Storage

Article by Juergen Frick, product marketing manager, PMC-Sierra

This article has been written by Juergen Frick, product marketing manager of PMC-Sierra, Inc.'s channel storage division since June 2010. In June 2011 he was promoted to senior product manager. In this role, he is responsible for Adaptec by PMC board level products and the EMEA market. Prior to this appointment he was the EMEA product marketing manager for Adaptec's RAID controller products for SATA, SAS and SCSI. There he was responsible for the channel business with RAID components and for the OEM business in EMEA. Prior to Adaptec, he held various positions in technical marketing at ICP vortex and Intel.

Taking advantage of PCIe Gen3
requires disk adapter silicon that’s up to the task

Storage vendors are focusing on small form-factor (SFF) solutions that fit into smaller chassis while providing the high performance that data centers require. Storage components, such as HDDs, are getting physically smaller even as they grow in capacity: 2.5-inch drives hold more capacity per space occupied than the 3.5-inch drives they are replacing, and SFF HDDs now boast capacities of 1TB or more. Choosing the right SFF configuration is an important consideration for data centers, because finding the right combination of form, fit and function allows them to deploy one SKU for almost any configuration and simplify everything from the purchase decision to installation to maintenance.

To match systems' smaller footprints, low-profile storage adapters are becoming more common as well, but delivering top I/O performance and low latency in such a compact form factor requires changes in connection methodology. Fortunately, PCIe Gen3 doubles per-lane bandwidth between the adapter and the host; a minimum of 16 native 6Gb/s ports is required, however, to pass that performance through to the SAS and SATA drives. This article examines the performance of PCIe storage adapters and quantifies bandwidth as a function of various adapter and drive parameters.

The New Generation of PCIe
PCIe is a motherboard-mounted expansion bus that, through a connected device such as a RAID adapter, connects the host system processor to add-on peripherals, such as storage systems. Introduced into servers and workstations in 2012, the third generation of PCIe (PCIe Gen3) doubles bandwidth to the host compared to its PCIe Gen2 predecessor, increasing per-lane throughput from 500MB/s to roughly 1GB/s.

With PCIe Gen2, eight 6Gb/s SAS/SATA ports are sufficient to achieve maximum performance. However, PCIe Gen3 requires a minimum of 16 native 6Gb/s SAS/SATA ports to double the bandwidth through the storage connections.
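The arithmetic behind those port counts is straightforward. The short Python sketch below works it out from commonly quoted usable rates (roughly 500MB/s per Gen2 lane, roughly 985MB/s per Gen3 lane, and about 600MB/s per 6Gb/s SAS/SATA port after 8b/10b encoding). These constants are approximations used only for illustration; they are not figures taken from the article's benchmarks.

import math

LANES = 8                  # x8 adapter slot
GEN2_LANE_MBPS = 500       # PCIe Gen2: 5GT/s with 8b/10b encoding, per lane, per direction
GEN3_LANE_MBPS = 985       # PCIe Gen3: 8GT/s with 128b/130b encoding, per lane, per direction
SAS_PORT_MBPS = 600        # one 6Gb/s SAS/SATA port after 8b/10b encoding

for name, lane_mbps in (("PCIe Gen2", GEN2_LANE_MBPS), ("PCIe Gen3", GEN3_LANE_MBPS)):
    host_mbps = LANES * lane_mbps
    ports_needed = math.ceil(host_mbps / SAS_PORT_MBPS)
    print(f"{name} x{LANES}: ~{host_mbps / 1000:.1f}GB/s to the host, "
          f"at least {ports_needed} native 6Gb/s ports to saturate it")

# Prints roughly:
#   PCIe Gen2 x8: ~4.0GB/s to the host, at least 7 native 6Gb/s ports to saturate it
#   PCIe Gen3 x8: ~7.9GB/s to the host, at least 14 native 6Gb/s ports to saturate it

Since real drives and RAID overhead keep individual ports below their line rate, eight ports comfortably saturate a Gen2 x8 link, while Gen3 calls for 16 or more.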

A select group of storage adapters that claim to be designed for PCIe Gen3 has appeared on the market, but most of them max out at eight ports and cannot take full advantage of PCIe Gen3's superior performance. As we will see, some SAS/SATA RAID adapters, available with 16 or 24 native SAS/SATA ports, are designed to fully exploit the high-performance characteristics of PCIe Gen3 (Figure 1).

Figure 1: PCIe Gen3 doubles bandwidth to storage devices; this performance can only be realized with a minimum of 16 6Gb/s adapter ports

Significance of High Native Port Count
In recent years, the storage industry has been transitioning from 3.5-inch storage drives to 2.5-inch Small Form Factor (SFF) drives as advancements in technology allow storage vendors to address the aforementioned physical space challenges faced by data centers. Not only do SFF drives offer the obvious advantage of allowing more drives to fit into the same server rack space, but 2.5-inch drives hold more capacity per space occupied than the 3.5-inch drives they are replacing. Indeed, SFF HDDs now boast storage capacities of 1TB or more.

Additionally, the cost of 2.5-inch flash-based SSDs is finally coming more in line with HDDs in terms of the traditional ‘cost per GB of capacity’ metric. That, combined with a higher read bandwidth, higher IOPs, better mechanical reliability, and higher resistance to shock and vibrations compared to HDDs, is driving an industry-wide transition to SSDs. As the quantity of drives in a server chassis increases, the storage adapter card’s port count requirements also increase.

Expanders Are Not an Option
The traditional method for increasing the storage adapter’s port count has been through the use of an expander – a board that enables the connection of additional attached SAS or SATA devices when the adapter does not have enough ports to accommodate them. However, expanders have a number of limitations: not only do they add complexity, they also occasionally face compatibility issues with other components in the storage solution.

On top of that, expanders are notorious for causing latency and limiting data transfer bandwidth. Both of these issues have long been tolerated by data centers using HDDs, as they did not cause a huge impact on the already slow read and write speeds of HDDs. But as higher-performance SSDs gain traction in storage solutions, the latency and bandwidth issues of expanders have become more noticeable, and therefore less acceptable.

In a RAID-5 configuration using 24 SATA SSDs (Figure 2), the use of expanders causes a roughly 60% performance drop on random read IOPs, and a roughly 20% performance drop on OLTP read/write IOPs, compared to a direct connection through native ports.

Figure 2: RAID-5 Random Performance (24 SATA SSDs)

Similarly, in the same RAID-5 configuration with 24 SATA SSDs (Figure 3), the use of expanders causes a roughly 70% performance drop on sequential read MB/s, and a roughly 40% performance drop on sequential write MB/s, compared to a direct connection through native ports.

Figure 3: RAID-5 Sequential Performance (24 SATA SSDs)
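To put those percentages in perspective, the small sketch below applies the drops quoted for Figures 2 and 3 to a direct-attach baseline normalized to 100. The percentages are the ones stated above; the normalized baseline is only for illustration, not a measurement from the figures.

# Expander penalties quoted above, applied to a normalized direct-attach baseline of 100
DROPS = {
    "random read IOPs":      0.60,  # ~60% drop behind an expander (Figure 2)
    "OLTP read/write IOPs":  0.20,  # ~20% drop (Figure 2)
    "sequential read MB/s":  0.70,  # ~70% drop (Figure 3)
    "sequential write MB/s": 0.40,  # ~40% drop (Figure 3)
}

def behind_expander(direct_value: float, drop: float) -> float:
    """Expected result once the quoted expander penalty is applied."""
    return direct_value * (1.0 - drop)

for metric, drop in DROPS.items():
    print(f"{metric:24s} direct = 100, behind expander = {behind_expander(100.0, drop):.0f}")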

This problem can be partially overcome if SAS devices are used, since they are dual-ported and allow all eight SAS port connections to be leveraged through the expander. However, as illustrated in Figures 4 and 5 below, the performance of eight 6Gb/s SAS ports still flattens out at their peak data rate, and competing products cannot match the performance of adapters with 16 or more native ports, such as the Adaptec Series 7.

Figure 4: RAID-5 Performance (24 SAS SSDs)

Figure 5: RAID-5 Sequential Performance (24 SAS SSDs)

Another drawback of expanders is the additional cost they add to a storage solution: about $200 for the expander itself, plus cables, installation, increased power consumption, and ongoing maintenance. An ideal solution for data centers would be a 6Gb/s storage controller with a high native port count that can take advantage of PCIe Gen3's performance.

However, as mentioned, most 6Gb/s storage adapters max out at only eight ports.

Multiport Silicon Solution
As shown above, higher-port-count adapter silicon takes advantage of the performance offered by PCIe Gen3. The Adaptec Series 7 SAS/SATA RAID adapter family uses PMC's 24-port PM8015 RAID-on-Chip (ROC), which combines an x8 PCIe Gen3 interface with 24 native 6Gb/s SAS ports to enable a new generation of high-performance, high-native-port-count RAID adapters that are unmatched by any other ROC in the industry.

Traditionally, RAID adapter performance has centered around read and write throughput, measured in megabytes per second. Using this metric, our company’s adapters perform up to 83% better than competing RAID adapters – 6.6GB/s on sequential reads and up to 2.6GB/s on sequential writes on parity RAID-5.

Moreover, with the popularity and growth of SSDs, IOPs is emerging as the new ‘lead horse’ in performance metrics, with the most commonly cited figure being the 4K random-read number. The 4K I/O size is used for random scenarios because most operating systems cache data in server DRAM in 4KB blocks, which makes 4K typically the smallest I/O size for random workloads. In a RAID-5 configuration with 16 direct-connected SSDs, these 16-port, PCIe Gen3 adapters benchmark at 450K IOPs – nearly 10x the performance of previous-generation RAID adapters, and more than double that of the competition.
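As a back-of-the-envelope check on what those numbers mean for the bus, the sketch below converts the quoted 4K random-read rate into throughput and compares both it and the sequential figure against the usable bandwidth of an x8 Gen3 link, assuming roughly 985MB/s per lane. This is illustrative arithmetic, not a benchmark.

# 450K IOPs and 6.6GB/s are the figures quoted in the article;
# the ~7.9GB/s ceiling assumes ~985MB/s usable per Gen3 lane on an x8 link.
IO_SIZE_KB = 4
RANDOM_READ_IOPS = 450_000
SEQ_READ_GBPS = 6.6
GEN3_X8_GBPS = 8 * 985 / 1000          # ~7.9GB/s per direction

random_read_gbps = RANDOM_READ_IOPS * IO_SIZE_KB / 1_000_000
print(f"450K x 4KB random reads  = ~{random_read_gbps:.1f}GB/s "
      f"(~{random_read_gbps / GEN3_X8_GBPS:.0%} of an x8 Gen3 link)")
print(f"6.6GB/s sequential reads = ~{SEQ_READ_GBPS / GEN3_X8_GBPS:.0%} of an x8 Gen3 link")

The comparison shows why IOPs rather than raw bandwidth becomes the telling metric for small random I/O: even 450K IOPs at 4KB leaves much of the Gen3 link idle, while sequential reads approach the bus ceiling.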

As noted earlier, RAID adapters with only eight native ports cannot pass PCIe Gen3’s performance gains through from the bus to the storage connections. These 16- and 24-port adapters are the first on the market to take full advantage of PCIe Gen3 performance gains by using HD mini-SAS connectors to offer options with 16 or 24 native SAS/SATA ports (Figure 6).

Figure 6: Configuration Complexities and Costs Expanders vs. Direct Connect

Conclusion
In order to continue meeting customer demand for fast and reliable access to data and content, data centers must employ efficient and physically smaller storage solutions that maximize I/O capability while fitting within budgetary and physical space constraints. A new generation of PCIe Gen3 storage adapters seeks to enhance storage I/O performance by offering the 16 or more native ports required to maximize PCIe Gen3 performance.

Series 7 adapters perform up to 83% better than competing RAID adapters in read and write throughput – 6.6 GB/s on sequential reads and up to 2.6 GB/s on sequential writes on parity RAID-5 – and lead the field with 450K IOPs – nearly 10x the performance of previous-generation RAID adapters, and more than double that of the competition.
