NVMe over Fabrics Explained by Western Digital
Creating a high-performance storage network with latencies that rival DAS
This is a Press Release edited by StorageNewsletter.com on April 24, 2019, at 2:36 pm
By Dave Montgomery, director, flash storage platform marketing, Western Digital Corporation
You may have heard of NVMe over Fabrics, or its acronym NVMe-oF, but you may still not be clear on what exactly it is and how it will affect your IT infrastructure. Here’s NVMe-oF explained.
NVMe – What is it?
Before we dive into NVMe-oF, let’s take a step back to make sure we understand the foundation: NVMe.
Early flash storage devices were connected via SATA or SAS – protocols that were developed decades ago for HDDs and are still widely used for data infrastructure. SATA- and SAS-connected flash storage provided huge performance gains over HDDs. Yet, as speeds increased – on CPUs, backplanes, DRAM, and networks – the SATA and SAS protocols began to limit the performance of flash devices. SATA and SAS account for HDD characteristics, such as rotational delay and head seek times, that add unnecessary complexity for flash-based media. To take full advantage of flash storage performance, the NVMe protocol was created (version 1.0 was released in 2011).
As the name implies, NVMe was designed assuming flash memory, not disk, was the storage target. As a result, it is a much more efficient (faster) protocol, and NVMe devices use the PCIe electrical interface to communicate with CPUs, DRAM, etc. With many more I/O lanes than SAS or SATA, NVMe delivers extremely high performance. As my colleague Erik Ottem said during a recent webinar, NVMe is a divided 12-lane highway versus the two-lane country road of the past (i.e., SAS and SATA).
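To put rough numbers behind that analogy, here is a back-of-the-envelope sketch comparing the nominal line rates of the three interfaces. These are published figures, not measured device throughput, and the PCIe entry assumes a Gen3 x4 link, a common configuration for NVMe SSDs:

```python
# Back-of-the-envelope comparison of nominal interface bandwidth.
# Published line rates only; real device throughput is lower.

GBIT = 1e9 / 8  # bytes per second carried by one Gbit/s

interfaces = {
    "SATA III (6 Gbit/s, 8b/10b encoding)":  6 * GBIT * 8 / 10,
    "SAS-3 (12 Gbit/s, 8b/10b encoding)":   12 * GBIT * 8 / 10,
    # PCIe Gen3 runs 8 GT/s per lane with 128b/130b encoding; the x4
    # link width is an assumption, typical for NVMe SSDs.
    "PCIe Gen3 x4 (typical NVMe link)": 4 * 8 * GBIT * 128 / 130,
}

for name, rate in interfaces.items():
    print(f"{name}: ~{rate / 1e9:.1f} GB/s usable")
```

Even at nominal rates, the PCIe Gen3 x4 link carries roughly three times the bandwidth of SAS-3 and more than six times that of SATA III, before counting NVMe’s protocol efficiencies.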
Today, NVMe is often used in servers to connect a flash drive to the PCIe bus as DAS, giving the server a more efficient way to use flash media. However, the challenge with using NVMe this way is that the flash device is not accessible to any system beyond the server it is attached to – potentially leaving it underutilized when other servers could benefit from additional flash.
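For a minimal, concrete look at that local-only visibility, the sketch below lists the NVMe controllers Linux exposes under /sys/class/nvme. By construction it can only see devices attached to this one server, which is exactly the sharing limitation described above (Linux-only; the paths are standard sysfs locations):

```python
# List NVMe controllers locally attached (DAS) to this server.
# Only devices on this server's PCIe bus are visible here; drives in
# other servers cannot be seen or shared through this interface.
from pathlib import Path

SYSFS_NVME = Path("/sys/class/nvme")

if not SYSFS_NVME.exists():
    print("No NVMe sysfs tree found (non-Linux system or no NVMe driver loaded).")
else:
    for ctrl in sorted(SYSFS_NVME.iterdir()):
        model_file = ctrl / "model"
        model = model_file.read_text().strip() if model_file.exists() else "unknown"
        print(f"{ctrl.name}: {model}")
```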
Enter Fabrics
The NVMe protocol is not limited to simply connecting flash drives; it may also be used as a networking protocol. In this context, a ‘fabric’ enables any-to-any connections among elements, which distinguishes it from a network, which may restrict the connections possible among the attached elements.
NVMe-oF enables organizations to create very high-performance storage networks with latencies that rival DAS. As a result, flash devices can be shared, when needed, among servers.
NVMe-oF – Fibre Channel vs. Ethernet
But, when it comes to networking, you may be wondering whether this protocol is limited to Fibre Channel or Ethernet. The good news is that it can run on both traditional Fibre Channel (FC) switches and IP switches. Most modern FC switches already have updated firmware and can support the NVMe-oF protocol today. Ethernet, which is supported by several standards, offers additional choices for IT infrastructure.
Leveraging NVMe-oF with FC should be straightforward for environments already invested in FC infrastructure. FC is designed for storage and can carry both legacy SCSI traffic and NVMe traffic simultaneously. Since most organizations will integrate NVMe-oF into an existing data center, support for both protocols lets them make the conversion at a pace comfortable for them.
NVMe-oF via Ethernet typically relies on Remote Direct Memory Access (RDMA), which enables two computers on the same network to exchange memory contents without involving either computer’s processor. While RDMA is not specifically a storage protocol, it can be enabled for storage networking with protocols like RoCE (RDMA over Converged Ethernet) and iWARP (Internet Wide Area RDMA Protocol). A newcomer to the Ethernet NVMe-oF implementations is NVMe/TCP, which enables customers to run NVMe-oF over existing Ethernet infrastructure, taking advantage of legacy or non-RDMA equipment while possibly incurring a small performance penalty.
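As a rough sketch of what adopting NVMe/TCP can look like on a Linux host, the snippet below drives the standard nvme-cli tool to discover and connect to a remote target. The address, port, and NQN are placeholders, and the host is assumed to have the nvme-tcp kernel module and the nvme-cli package installed (the commands typically require root):

```python
# Sketch: discover and connect to a remote NVMe/TCP target using nvme-cli.
# The target address and NQN below are placeholders, not real endpoints.
import subprocess

TARGET_ADDR = "192.0.2.10"   # placeholder target IP (documentation range)
TARGET_PORT = "4420"         # conventional NVMe-oF port
TARGET_NQN = "nqn.2019-04.com.example:storage-array"  # placeholder NQN

# Ask the target which NVMe subsystems it exports.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect; the remote namespace then shows up as a local /dev/nvmeXnY device.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT,
     "-n", TARGET_NQN],
    check=True,
)
```

Once connected, the remote flash behaves like a local NVMe drive, which is what lets NVMe-oF deliver DAS-like latencies over ordinary Ethernet.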
Deciding Between NVMe-oF options
Until the release of NVMe/TCP, NVMe/FC had clear advantages for many enterprises. It can run both SCSI and NVMe at the same time, and it has an auto-discovery capability that makes adding new servers or storage to the fabric easier. The challenge for FC is that, outside the enterprise, it isn’t as widely deployed as Ethernet-based alternatives like NFS and iSCSI. RoCE and iWARP are appearing in more implementations, and vendors providing these solutions claim some performance advantages over FC because of RDMA’s direct memory access capabilities. NVMe/TCP, although late to the game, has advantages of its own and seems to be a good match for organizations without a legacy FC infrastructure and without a requirement for RDMA.
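That reasoning can be restated as a small decision helper. This is purely an illustration of the trade-offs discussed above; the function name and inputs are hypothetical, not a formal selection tool:

```python
# Illustrative restatement of the transport trade-offs discussed above.
def suggest_nvme_of_transport(has_fc_fabric: bool,
                              has_rdma_nics: bool,
                              needs_lowest_latency: bool) -> str:
    if has_fc_fabric:
        # FC carries SCSI and NVMe side by side and offers auto-discovery,
        # so an existing FC shop can migrate gradually.
        return "NVMe/FC"
    if has_rdma_nics and needs_lowest_latency:
        # RDMA-capable NICs allow RoCE or iWARP, avoiding host CPU copies.
        return "NVMe/RDMA (RoCE or iWARP)"
    # Plain Ethernet everywhere else, at a small performance penalty.
    return "NVMe/TCP"

print(suggest_nvme_of_transport(has_fc_fabric=False,
                                has_rdma_nics=False,
                                needs_lowest_latency=False))  # -> NVMe/TCP
```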
NVMe-oF: Need and timing
When IT planners aggressively switch to NVMe-oF is largely a function of need and timing. The first step toward NVMe for most organizations is an NVMe flash array with traditional networking connections to the storage network. While these systems can deliver millions of IO/s, the reality is that very few workloads require more performance than they provide.
However, there is an emerging class of workloads that can take advantage of all the performance and low latency of an end-to-end NVMe system.
Additionally, an NVMe-oF storage target can be dynamically shared among workloads – providing an ‘on-demand’ or composable storage resource with additional benefits, including flexibility, agility, and greater resource efficiency.
Customers who operate workloads that demand high performance and low latency should evaluate the advantages and disadvantages of shifting to an end-to-end NVMe-oF implementation. However, IT planners must select every infrastructure element (servers, networking, etc.) carefully so that existing IT equipment does not introduce performance bottlenecks.