Myth: Single FCoE Data Center Network = Fewer Ports, Less Complexity and Lower Costs
According to an analyst at Gartner
This is a Press Release edited by StorageNewsletter.com on March 17, 2010, at 3:30 pm.

The notion that a single converged data center network makes for fewer switches and ports, resulting in a simpler network consuming less power and cooling, is flawed, according to Gartner, Inc.
Gartner research, Myth: A Single FCoE Data Center Network = Fewer Ports, Less Complexity and Lower Costs, shows that a converged data center network requires more switches and ports, is more complex to manage and consumes more power and cooling than two well-designed separate networks.
"The industry is abuzz with the promise of a single converged network infrastructure, this time in the data center core," said Joe Skorupa, research vice president at Gartner. "Alternatively described as Fibre Channel over Ethernet (FCoE), Data Center Ethernet (DCE), or more precisely, Data Center Bridging (DCB), this latest set of developments hopes to succeed where InfiniBand failed in its bid to unify computing, networking and storage networks."
"The promise that a single converged data center network would require fewer switches and ports doesn’t stand up to scrutiny," Mr. Skorupa said. "This is because as networks grow beyond the capacity of a single switch, ports must be dedicated to interconnecting switches. In large mesh networks, entire switches do nothing but connect switches to one another. As a result, a single converged network actually uses more ports than a separate local area network (LAN) and storage area network (SAN). Additionally, since more equipment is required, maintenance and support costs are unlikely to be reduced."
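The port-count argument above can be illustrated with a back-of-envelope model. The sketch below is not from the Gartner research; it assumes 48-port switches, a flat full mesh with one link between every switch pair, and a converged fabric that keeps the same number of edge ports as the two separate fabrics combined (so aggregate edge bandwidth is preserved). Under those assumptions, the quadratic growth of inter-switch links makes one large fabric cost more ports than two smaller ones.

```python
def mesh_switches(host_ports, ports_per_switch=48):
    """Smallest full mesh of identical switches that leaves enough
    free ports to attach `host_ports` hosts.

    In a flat full mesh of n switches, each switch spends (n - 1)
    ports on inter-switch links, leaving ports_per_switch - (n - 1)
    for hosts. Returns (switches, total_ports, interconnect_ports).
    Purely illustrative: real fabrics use multi-link trunks and
    core/edge tiers, not a flat single-link mesh.
    """
    n = 1
    while n * (ports_per_switch - (n - 1)) < host_ports:
        n += 1
        if n > ports_per_switch:
            raise ValueError("exceeds flat-mesh capacity")
    return n, n * ports_per_switch, n * (n - 1)

# Two separate fabrics, 250 host ports each:
lan = mesh_switches(250)   # (6, 288, 30)
san = mesh_switches(250)   # (6, 288, 30)

# One converged fabric carrying all 500 host ports:
conv = mesh_switches(500)  # (15, 720, 210)

print("separate :", lan, san)  # 12 switches, 576 ports, 60 on interconnect
print("converged:", conv)      # 15 switches, 720 ports, 210 on interconnect
```

With these (hypothetical) numbers, two separate 6-switch fabrics need 576 ports in total, of which 60 interconnect switches; the single converged fabric needs 15 switches and 720 ports, with 210 ports doing nothing but connecting switches to one another, matching the pattern Mr. Skorupa describes.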
In addition to the financial barriers to the success of a single converged data center network, Gartner also believes that there are significant design and management issues to be addressed. When two networks are overlaid on a single infrastructure, complexity increases significantly. As traffic shares ports, line cards and inter-switch links, avoiding congestion (hot spots) becomes extremely difficult. Mr. Skorupa said that over time, emerging standards, such as Transparent Interconnection of Lots of Links (TRILL), may make it easier to avoid these hot spots, but mature, standards-compliant implementations are at least two to three years away.
Debugging problems in the converged network is also harder, since interactions between LAN and SAN traffic complicate root cause analysis. Because many problems are transient in nature, events must be correlated across the two virtual networks, increasing complexity. Should an outage be required to resolve a problem, or simply to perform maintenance, a downtime window acceptable to both environments may be needed. This increases complexity and may increase cost as well.
"It’s clear that the barriers to a single network range from a dearth of available products and the price premium charged for those products to the requirement to ‘forklift upgrade’ your entire network to long-standing organizational conflicts," said Mr. Skorupa. "However, while the promise that a unified fabric will require fewer switches and ports, resulting in a simpler network that consumes less power and cooling, may go unfulfilled, that doesn’t mean that enterprises should forgo the benefits of a unified network technology."
Mr. Skorupa said that there is clear benefit in standardizing on a single technology for all data center networking if that technology adequately supports the needs of applications. This will simplify acquisition, training and sparing. However, settling on a single technology does not require that the networks be combined. Design, operations and troubleshooting are much easier with two separate networks, and it may also cost less to build two separate networks.