Have You Heard About 40Gb FC?
By J Michel Metz, Cisco, FCIA board of directors
This is a press release edited by StorageNewsletter.com on July 22, 2014 at 2:49 pm. It was written by J Michel Metz, FCIA board of directors; strategic product manager, storage and unified fabric, Cisco Systems.
Summary: FC continues to be the gold standard for storage networking, regardless of the underlying transport medium. Now, more than ever before, storage administrators have the flexibility to deploy reliable, deterministic storage networks with unprecedented choice and agility. With up to 10,000MB/s of bidirectional bandwidth to play with, storage networks can use 40G FCoE to take all of their FC applications beyond what was conceivable only a few years ago.
It may sound strange to think of the Fibre Channel Industry Association (FCIA) discussing Ethernet technologies. After all, when people think of FC something more than just the protocol comes to mind – the entire ecosystem, management, and design philosophies are part and parcel of what storage administrators think of when we discuss FC networks.
There is a reason for this. Over the years, FC has proven itself to be the benchmark standard for storage networks – providing well-defined rules, unmatched performance and scalability, as well as rock solid reliability.
In fact, it’s a testament to the foresight and planning of the International Committee for Information Technology Standards (INCITS) T11 technical committee, which is the committee within INCITS responsible for FC Interfaces, that the FC protocol is robust enough to be used in a variety of ways, and over a variety of media.
Did you know, for instance, that the T11 committee has created a number of possible forms for transporting FC frames?
In addition to the FC physical layer, you can also run FC over:
- Data Center Ethernet
- TCP/IP
- Multiprotocol Label Switching
- And others…
Because of this versatility, FC systems can have a broad application for a variety of uses that can take advantage of the benefits of each particular medium upon which the protocol resides.
10G inflection point
While using FC on other protocols is interesting, perhaps no technology has intrigued people like the ability to use FC over Layer 2 lossless Ethernet. In this way, FC can leverage the raw speed and capacity of Ethernet for the deployments that are looking to run multiprotocol traffic over a ubiquitous infrastructure inside their data center.
Realistically, 10GbE was the first technology that allowed administrators to efficiently use increasing capacity for multiprotocol traffic.
It was the first time that we could:
- Have enough bandwidth to accommodate storage requirements alongside traditional Ethernet traffic
- Have lossless and lossy traffic running at the same time on the same wire
- Independently manage design requirements for both non-deterministic LAN and deterministic SAN traffic at the same time on the same wire
- Provide more efficient, dynamic allocation of bandwidth for that LAN and SAN traffic without starving each other
- Reduce or even eliminate potential bandwidth waste
How did this work? 10GbE provided a number of elements to achieve this.
- First, 10GbE allowed us to segment traffic according to Classes of Service (CoS), within which we could independently allocate deterministic and non-deterministic traffic without interference.
- Second, 10GbE gave us the ability to pool the capacity and dynamically allocate bandwidth according to that CoS.
- Third, consolidating traffic on higher-throughput 10GbE media reduces the likelihood of underutilized links. How? Suppose you have 8Gb FC links but are currently using only 4G of throughput. There is plenty of room for growth, but on a regular basis half of the bandwidth is being wasted.
Consolidating that I/O with LAN traffic means that you would still have that FC throughput guaranteed, but also be able to use additional bandwidth for LAN traffic as well. Moreover, if there is bandwidth left over, bursty FC traffic could use all of the remaining additional bandwidth as well.
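The sharing behavior described above can be sketched as a simple allocation rule. The function and the traffic figures below are hypothetical illustrations, not a real switch API: SAN traffic keeps its configured guarantee, and whatever either class leaves idle is available to the other.

```python
# Hypothetical sketch of minimum-guarantee bandwidth sharing on a
# converged 10 Gb/s link: SAN traffic keeps its guarantee, and any
# bandwidth either class leaves idle is available to the other class.

LINK_CAPACITY_GBPS = 10.0
SAN_GUARANTEE_GBPS = 4.0  # e.g. matching the 4G FC workload above


def allocate(san_demand, lan_demand, capacity=LINK_CAPACITY_GBPS,
             san_min=SAN_GUARANTEE_GBPS):
    """Return (san_alloc, lan_alloc) in Gb/s."""
    # The SAN class is served up to its guarantee first.
    san = min(san_demand, san_min)
    # The LAN class may use everything the SAN guarantee leaves over.
    lan = min(lan_demand, capacity - san)
    # Bursty SAN traffic may then claim any still-idle bandwidth.
    san = min(san_demand, capacity - lan)
    return san, lan


# A 4G FC workload plus heavy LAN traffic: both fit, nothing is wasted.
print(allocate(san_demand=4.0, lan_demand=8.0))  # (4.0, 6.0)
# Bursty FC traffic on a quiet LAN can exceed its 4G guarantee.
print(allocate(san_demand=7.0, lan_demand=2.0))  # (7.0, 2.0)
```

Note how the second call shows the burst case from the paragraph above: because the LAN side is quiet, the FC traffic temporarily uses bandwidth beyond its guarantee.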
Because LAN and SAN traffic is neither constant nor static, this dynamic approach to running multiple traffic types becomes even more compelling as bandwidth increases beyond 10G to 40G, and even 100G.
The 40G milestone
There is an old adage: "You can never have too much bandwidth." To understand just how much throughput we're talking about, we need to recognize that it's more complex than just the "apparent" speed. Throughput is based on both the interface clocking (how fast the interface transmits) and how efficient it is (i.e., how much overhead there is).
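As a rough worked example of clocking times efficiency, the sketch below multiplies the standard serial line rates by their line-encoding efficiency (8b/10b for 8GFC, 64b/66b for 16GFC and 40GbE). Framing and protocol overhead are deliberately ignored, so treat the outputs as line-level approximations rather than official throughput figures.

```python
# Effective throughput = line rate (clocking) x encoding efficiency.
# Line rates and encodings are the standard values for each interface;
# framing/protocol overhead is ignored in this simplified sketch.

def effective_gbps(line_rate_gbaud, data_bits, coded_bits, lanes=1):
    """Usable bit rate in Gb/s after line encoding."""
    return lanes * line_rate_gbaud * data_bits / coded_bits


# 8GFC: 8.5 GBaud with 8b/10b encoding (80% efficient)
fc8 = effective_gbps(8.5, 8, 10)                  # 6.8 Gb/s
# 16GFC: 14.025 GBaud with 64b/66b encoding (~97% efficient)
fc16 = effective_gbps(14.025, 64, 66)             # 13.6 Gb/s
# 40GbE: four 10.3125 GBaud lanes, 64b/66b
ge40 = effective_gbps(10.3125, 64, 66, lanes=4)   # 40.0 Gb/s

# 40 Gb/s in each direction works out to 10,000 MB/s bidirectional,
# the figure cited in the summary above.
bidirectional_mbps = 2 * ge40 * 1000 / 8
print(fc8, fc16, ge40, bidirectional_mbps)
```

The 16GFC and 40GbE numbers also illustrate why moving from 8b/10b to 64b/66b encoding mattered: efficiency jumps from 80% to roughly 97%, so doubling the clock more than doubles the usable throughput.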
In this chart, you can see exactly how much the bandwidth threshold is being pushed with technologies that are either available today or just around the corner. The ability to increase throughput in this way has some significant consequences.
What to do with all that bandwidth?
There are more ways to answer that question than there are data centers. Could you dedicate all that bandwidth to one protocol, whether it be FC or something else? Absolutely. Could you segment out the bandwidth to suit your data center needs and share the bandwidth accordingly? Quite likely.
This is where the true magic of 40GbE (and higher) lies. In much the same way that SANs provided the ability for data centers to make pools of storage more efficient than siloed disk arrays, converged networks allow storage networks to eliminate the bandwidth silos as well. The same principles apply to the networks as they did to the storage itself.
There are three key facets that are worth noting:
Flexibility
The resiliency of the FC protocol, exemplified by how easily it carries over from 10G to 40G to 100G Ethernet without further modification, means that there is a contiguous forward-moving path. That is, the protocol doesn't change as we move to faster speeds and higher throughput. The same design principles and configuration parameters remain consistent.
But not only that, you have a great degree of choice in how your data centers are configured. Did you accidentally under-plan for your throughput needs because of an unexpected application requirement? No problem. A simple reconfiguration can tweak the minimum bandwidth requirements for storage traffic.
Have space limitations, or a different cable for each different type of traffic you need? No problem. Run any type of traffic you need – for storage or LAN – using the same equipment and, often, on the same wire. Nothing beats not having to buy extra equipment when you can run any type of traffic, anytime, anywhere in your network, over the same wire.
Growth
Data centers are not stagnant; they expand, and sometimes they even contract. One thing they do not do, however, is remain static over time.
New servers, new ASICs, new software and hardware – all of these affect the growth patterns of the data center. When this happens, the network infrastructure is expected to be able to accommodate these changes. For this reason we often see administrators ‘expect the unexpected’ by over-preparing the data center’s networking capacity, just in case.
Because of this, even the most carefully designed data center can be taken by surprise 3, 5, or more years down the road. Equipment is called upon to work overtime to accommodate increases in capacity requirements.
Meanwhile, equipment that was ‘absolutely necessary’ remains underutilized (or not used at all) because expected use cases didn’t meet planned projections.
Multiprotocol, higher-capacity networks solve both of these problems. No longer do administrators have to play "bandwidth leapfrog", with too much capacity on one network and not enough on the other (and never the twain shall meet!). Nor do they need to regret installing a stub network that winds up becoming a permanent fixture that must be accommodated in future growth, because what was once temporary has become mission critical.
Budget
What happens when these needs cannot be met simply because of the bad timing of budget cycles? How often have data center teams had to hold off (or do without) because the needs of the storage network were inconveniently outside the storage budget cycle?
In a perfect world, storage administrators would be able to add capacity and equipment whenever needed, not just when budgetary timing dictates. When capacity is pooled on a ubiquitous infrastructure, however, there no longer has to be a choice about whether LAN/Ethernet capacity should trump storage capacity. Not every organization has this limitation, of course, but eliminating competition for valuable resources (not either/or but rather and) not only simplifies the procurement process but also maximizes the money spent on total capacity (not to mention the warm fuzzies created between SAN and LAN teams).