Technology Brief
Blade Server I/O and Workloads of the Future
Comparing Cisco UCS and HP BladeSystem
November, 2014
Where IT perceptions are reality
New Generation of Blade Servers and Workloads
HP and Cisco are the two most popular blade server brands on the planet. A big reason why is that the networks embedded in
the HP BladeSystem and Cisco UCS products are the most powerful and flexible networks available for virtualized workloads.
On August 28th, HP announced new HP ProLiant Gen9 servers, including several enhancements to the HP BladeSystem
I/O design. Shortly afterwards, on September 4th, Cisco announced long-awaited enhancements to UCS.
The UCS enhancements centered on the UCS Mini blade system, which is targeted at SMBs and the edge of the
enterprise. There were no significant changes to the 5108 chassis used for larger systems, which, after five years, is getting
long in the tooth. With only 1.2Tb/s of mid-plane bandwidth, the 5108 is limited in its ability to support more than 8 servers
or single links greater than 10Gb.
The new HP BladeSystem c7000 Platinum chassis offers 7Tb/s of mid-plane bandwidth, with new support for 20GbE
downlinks as well as 40GbE uplinks. The HP ProLiant Gen9 BladeSystem also takes converged networks to the next level
with hardware offload of important new networking protocols supporting tunneling of L2 traffic over L3 networks, and scale-
out file storage traffic.
The new HP and Cisco blade systems are hitting the market just as hyperscale-driven applications and data center
architectures are reaching the enterprise. Our conclusion? There’s a new generation of blade servers and workloads, but
the same HP advantage.
This Report Compares 3 Facets of Cisco UCS and HP BladeSystem I/O
To set the stage for comparing the capabilities that will matter most in the future, this Technology Brief reviews the trend towards a new mix of applications and server workloads in Webscale private clouds.
I/O Capabilities Which Will Differentiate Blade Servers in Webscale Environments:
1. Performance
2. Consolidation
3. Flexibility
Inflection Point
Intel Xeon E5-2600 v3
In 2014, the server industry reached a major inflection point with the introduction of a new generation of Intel server processors: the v3 generation of the Xeon E5-2600 family. At this inflection point, x86 server product lines are being refreshed, and new technologies are being introduced which complement the capabilities of the Xeon E5-2600.
[Figure: Virtualized servers (hierarchical networks, LAN/SAN convergence with FCoE, 10GbE) evolving into Webscale servers (20GbE and 40GbE, virtual networks, converged cloud, RDMA, FC and Ethernet connectivity)]
Complementary Technologies are what Differentiate Blade Server Offerings
Given that HP and Cisco blade systems will feature the same Xeon E5-2600 processor, it's the complementary technologies which will differentiate the systems. The factors which are expected to separate leaders from followers are 20GbE connectivity to servers, 40GbE uplinks from blade server chassis to network, switchless connectivity to storage, and convergence of Ethernet, FCoE, native Fibre Channel, RDMA, and cloud tunneling protocols on the same port. Servers with the best implementations of these technologies will be better suited to handle traditional workloads, plus a new class of Webscale workloads.
Workload Mix of the Future
Share Everything Applications + Share Nothing Applications
Enterprise IT organizations, which for the most part have become private cloud builders, are blending traditional Enterprise and Hyperscale IT into a Webscale model. Traditional IT encompasses support for workloads such as SQL databases and ERP applications, with "share-everything" infrastructure featuring many VMs sharing physical servers, and many servers sharing networked storage.
Webscale IT must support traditional workloads as well as a new generation of workloads, such as NoSQL databases and predictive analytics. Many of the new applications are designed to run in "share-nothing" distributed computing environments featuring scale-out server and storage clusters.
Workload Mix of the Future
Private cloud builders are also trending towards cloud platforms like OpenStack and vCloud. Cloud operating systems incorporate a software defined data center architecture which allows a single cloud operating system to manage servers, storage and networking systems in different data centers. As a result, new cloud tunneling protocols, such as VXLAN and NVGRE, are being deployed as a software defined data center foundation, along with a new generation of NICs which can offload the tunnel protocol processing.
Traditional IT + Hyperscale IT = Webscale IT
Environment for Workloads of the Future
Webscale Private Cloud
The defining characteristic of a Webscale Private Cloud is data center infrastructure which efficiently supports two distinctly different application environments: a shared infrastructure environment and a distributed infrastructure environment. A Webscale Private Cloud also includes an overlapping environment with software defined (virtualized) servers, networking and storage.
Converged Networks Make it Possible
A key capability of blade servers in a Webscale Private Cloud is a higher level of network convergence. In the next generation of converged networks (Convergence 2.0), the RDMA network protocol for scale-out clusters, and hardware offload of tunneling protocol processing for carrying L2 traffic over L3 networks, are integrated as standard features in Webscale CNAs and/or switches.
Webscale Private Cloud Environment
Shared environments include servers heavily loaded with virtual machines, and
networked storage shared by many servers. Distributed environments support database
and application workloads spread across many servers, and scale-out storage. Cloud
operating platforms such as vCloud and OpenStack are introducing management tools for
a software defined data center, including software defined networks.
Anatomy of Blade Server I/O
Application Performance Depends on a Healthy Network
Every blade server has an entire network embedded to carry east-west traffic between servers, and north-south traffic to top-of-rack, end-of-row, and core switches upstream. The I/O performance of applications running on blade servers can differ significantly depending on the capabilities of their embedded networks.
The Blade Servers
The Products

Blade Server Systems      Cisco UCS in 5108 Chassis    HP BladeSystem in c7000 Chassis
Chassis Size              6U                           10U
Max. Blade Servers        8                            16
Mid-plane Bandwidth       1.2Tb/s                      7.168Tb/s
Server Downlinks          10Gb                         20Gb
Chassis Uplinks           10Gb                         10/40Gb
Interconnect Options      Ethernet/FCoE                Ethernet/FCoE, Fibre Channel, SAS, InfiniBand
I/O Slots                 2                            8
Cisco UCS and HP BladeSystem
In the following pages we will compare the performance, network convergence, flexibility and software defined networking of the Cisco UCS in a 5108 chassis, and the HP BladeSystem in a c7000 Platinum chassis.
Comparing I/O Performance
Why it Matters
Meeting application performance service levels is directly related to the I/O performance of a blade server system. In addition, the new generation of servers with Xeon E5-2600 processors, hosting a generation of demanding new applications, needs higher bandwidth and lower latency I/O than ever before. And in Webscale private cloud environments, performance must be delivered more cost-effectively than ever before, bringing CPU efficiency to the forefront of important performance metrics.
I/O Performance Metrics
In the following pages, we will examine the capabilities of Cisco UCS and HP BladeSystem against the
following I/O performance metrics:
· Bandwidth
· Useable Bandwidth
· Latency
· CPU Efficiency
I/O Bandwidth
80GbE is Specmanship
There has been some discussion in the blogosphere about how UCS achieves 80Gb of bandwidth per blade. Based on the Cisco UCS B200 M4 Blade Server Spec Sheet, that scenario refers to the configuration of a Cisco B200 M4 blade with a VIC1340 adapter and an added mezzanine card (port expander) that allows four 10Gb links to each IO Module (2208 FEX), for a total of 80Gb of bandwidth (2 x 4 x 10Gb).
40GbE is Expensive
From the point of view of pure technology, 40GbE is a perfect solution for delivering the performance needed in a single server link and eliminating the need for teaming. But the cost per port for 40GbE network adapters is typically more than 3x the cost per port of 10GbE adapters. In another case of specmanship, Cisco is promoting the availability of a 40Gb port on the new 6324 Fabric Interconnect (FI) for the UCS Mini. However, as of the writing of this report, the 40Gb port, called a Scalability Port, is not a native 40GbE port and can only be used to break out to four 1GbE or 10GbE SFP+ (4 x 1G or 4 x 10G) connections. In addition, this 40GbE port requires an expensive software license to activate.
20GbE is Juuust Right
A choice that has only recently been made available to server architects is 20GbE. Each 20GbE port offers bandwidth equivalent to twenty 1GbE ports or two 10GbE ports. 20GbE is juuust right because a single 20GbE port provides enough bandwidth for all but the most I/O intensive supercomputing applications, and is available for a fraction of the price of 40GbE technology. According to the Cisco UCS B200 M4 Blade Server Spec Sheet, all Cisco UCS 5108 midplane, FEX and FI network connectivity ports are currently 10GbE, including the 40Gb Scalability Port on the 6324 FI, which must be split into multiple 10GbE ports.
The HP BladeSystem provides 20GbE links between blade server adapters and the chassis interconnects, as well as inter-switch links. With HP Flex-20
technology, Ethernet network adapters deliver twice the bandwidth of 10Gb adapters, while reducing the management overhead associated with multiple
10Gb adapters.
With 20Gb downlinks, HP Virtual Connect FlexFabric-20/40 F8 Modules offer more than twice the throughput of other 10Gb extenders and fabric
interconnects. In addition, ports on the HP Virtual Connect FlexFabric-20/40 F8 Modules can be dynamically configured to support Ethernet, Fibre
Channel, or FCoE.
Almost no Oversubscription with HP BladeSystem
Oversubscription occurs when the I/O capacity of the adapter ports connected to chassis switch ports exceeds the capacity of the switch ports. The oversubscription ratio is the sum of the capacity of the adapter ports divided by the capacity of the switch ports. Below you can see that if you actually configured 80Gb of bandwidth per UCS blade as mentioned above, you would be building a blade server network with 4:1 oversubscription. In contrast, a comparably configured HP BladeSystem would result in 1.1:1 oversubscription, nearly a fourfold reduction compared to Cisco.
Oversubscription
HP BladeSystem: Oversubscription = 1.1:1
· Adapter side: 2 ports x 20Gb from the FLOM + 2 ports x 20Gb from the Mezz. card, x 16 servers = 1,280Gb
· 16 ports x 20Gb from the mid-plane to each of the 4 Virtual Connect Modules = 1,280Gb
· Switch side: 4 Virtual Connect Modules, each with 4 x 40Gb ports + 8 x 10Gb ports + 2 x 20Gb ISL ports = 1,120Gb

Cisco UCS: Oversubscription = 4:1
· Adapter side: 4 ports x 10Gb from VICs and 4 ports x 10Gb from expansion cards (80Gb), x 8 servers = 640Gb
· Switch side: 8 ports x 10Gb from the mid-plane x 2 IO Modules = 160Gb
· 8 ports x 10Gb x 2 IO Modules = 160Gb
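To make the arithmetic behind these two ratios explicit, here is a minimal Python sketch using only the port counts and speeds quoted above; the figures are taken from this report, not from new measurements.

```python
def oversubscription(adapter_gb, switch_gb):
    """Oversubscription ratio: total adapter (downlink) capacity over total switch capacity."""
    return adapter_gb / switch_gb

# Cisco UCS 5108: 4 x 10Gb (VIC) + 4 x 10Gb (port expander) per blade, 8 blades,
# against 8 x 10Gb ports on each of the 2 IO Modules.
cisco_adapters = (4 * 10 + 4 * 10) * 8          # 640 Gb
cisco_switch = 8 * 10 * 2                       # 160 Gb

# HP c7000 Platinum: 2 x 20Gb (FLOM) + 2 x 20Gb (mezzanine) per blade, 16 blades,
# against 4 Virtual Connect Modules with 4 x 40Gb + 8 x 10Gb + 2 x 20Gb ports each.
hp_adapters = (2 * 20 + 2 * 20) * 16            # 1,280 Gb
hp_switch = (4 * 40 + 8 * 10 + 2 * 20) * 4      # 1,120 Gb

print(f"Cisco UCS: {oversubscription(cisco_adapters, cisco_switch):.1f}:1")   # 4.0:1
print(f"HP BladeSystem: {oversubscription(hp_adapters, hp_switch):.1f}:1")    # 1.1:1
```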
What Oversubscription Means
[Chart: aggregate blade server I/O bandwidth (Gb/s) vs. number of blade servers configured, Cisco vs. HP]
Blade Server I/O Hits the Wall
If you configured 80Gb of bandwidth per blade on both a Cisco UCS and HP BladeSystem, the Cisco 5108 chassis switches are oversubscribed with the second server. In contrast, fifteen HP blade servers can be configured before reaching the bandwidth limit of the HP c7000 Platinum chassis switches.
Chart annotations: HP chassis bandwidth 1.12Tb/s; Cisco chassis bandwidth 160Gb/s. X-axis: number of blade servers to hit the limit of chassis bandwidth.
Two fully configured UCS blade servers hit the limits of the 5108 fabric extenders (FEX). It takes fifteen fully configured HP Gen 9 servers to hit the bandwidth limit of the HP FlexFabric Modules.
RDMA over Ethernet (RoCE)
InfiniBand networks were invented to overcome the need to plow through the Ethernet protocol stack to complete an I/O
transaction. InfiniBand boosts performance by eliminating layers of the stack for Remote Direct Memory Access (RDMA). The
Ethernet industry responded by developing an enhanced version of Ethernet called Converged Ethernet (CE), featuring Priority
Flow Control which is necessary to support RDMA over Converged Ethernet (RoCE). Blade systems with switches supporting
CE, and with NICs supporting RDMA, can deliver I/O with lower latency and less CPU usage than previous generations of CNAs.
HP ProLiant Gen9 blade servers incorporate 20Gb FlexibleLOM NICs which are RDMA NICs. Cisco has introduced RDMA LOM
and Mezz NICs called the VIC1340 and VIC1380, respectively.
[Diagram: I/O path without RDMA vs. I/O path with RDMA]
RoCE Blade Environment
Networked Storage Killer Apps for RoCE
A killer app for RoCE is SMB 3.0 file servers, where users accessing shared storage experience the response time of local storage. File servers turbo-charged with RoCE are commercially available via two Windows Server 2012 features called SMB Multichannel and SMB Direct. With SMB Multichannel, SMB 3.0 automatically detects the RDMA capability and creates multiple RDMA connections for a single session. This allows SMB to use the high throughput, low latency and low CPU utilization offered by SMB Direct.
HP FlexFabric 20Gb adapters (RDMA NICs) are certified by Microsoft for use in the killer app described above. As of 11/14/14, the VIC 1340 is not certified by Microsoft for SMB Direct.
RoCE Blade Environment
In this diagram, a single HP BladeSystem with the HP 6125XLG Ethernet Blade Switches required to support RoCE is a high performance environment for 3 app clusters and 1 file server cluster. Hyper-V automatically senses the presence of RDMA NICs, then uses multi-channel communications to evacuate VMs in seconds, and uses direct memory access for higher I/O to shared storage inside the blade server.
IOPS Performance Benefits of RoCE
Sequential Read Performance (IOPS)
The HP FlexFabric 20Gb 2-port 650FLB Adapter (Emulex OCe14102) with RoCE, used with Windows Storage Server and SMB Direct, provided 82% more IOPS than previous generation adapters without RoCE.
Efficiency Benefits of RoCE
Server Power Efficiency (IOPS per Watt)
The HP FlexFabric 20Gb 2-port 650FLB Adapter (Emulex OCe14102) with RoCE, used with Windows Storage Server and SMB Direct, delivered 80% higher server power efficiency than adapters not using RoCE.
Response Time Benefits of RoCE
Read I/O Response Time (Seconds)
The HP FlexFabric 20Gb 2-port 650FLB Adapter with RoCE (Emulex OCe14102), used with Windows Storage Server and SMB Direct, reduced I/O response time by 70% compared to NICs without SMB Direct capabilities.
The Cost Benefits of RoCE Offload
Hardware Offload
A key to achieving efficient use of processing power is adapter offload of networking protocols so that
application server CPU cycles are not wasted on network protocol processing. Using a software
initiator instead of hardware offload requires that every TCP/IP, FCoE, and iSCSI packet be sent over
the PCI bus to the NIC. A constant PCI bus busy state can interfere with traffic to other devices on the
PCI bus.
The lack of offload can have a big impact on CPU utilization. For example, a single adapter running an
iSCSI software initiator can utilize 30% of the server CPU for iSCSI protocol processing. Add more
adapters and VMs, and more CPU is needed for network protocol processing.
The lack of offload is expensive. The cost of 30% CPU utilization for a $20,000 server is $6,000 — a
cost that can be easily avoided by simply deploying a network adapter with iSCSI offload.
Cisco UCS 1300 Series VIC adapters support TCP, FCoE, NVGRE, VXLAN and RoCE offload. HP FlexFabric adapters add iSCSI offload to that list. It is worth noting that at the time this report was written, HP 20Gb adapter VXLAN offload is certified by VMware, while as of 11/14/14 the Cisco VIC 1340/1380 VXLAN offload does not appear on the VMware Compatibility Guide.
The Lack of Offload Can Be Expensive
[Bar chart: cost of network protocol processing ($3,000, $4,500, $6,000, $9,000) stacked on the cost of the server for $10K, $15K, $20K and $25K servers]
There are a variety of different network protocols supported by adapters, and many are used simultaneously. The more protocol processing that is done in the adapter, the more of your server investment can be applied to applications instead of network protocol processing.
Comparing I/O Consolidation
Why it Matters
IT consolidation is hugely important because it represents less hardware and simplified
management. The utilization of storage media leaped when storage was configured in a
SAN and could be shared by many servers. The utilization of physical servers
dramatically increased when multiple virtual servers could be hosted on a single physical
server. Similarly, network utilization increases when more network protocols can run on a
single cable, adapter or switch.
Consolidation Metrics
There are two metrics for I/O consolidation: the convergence of network protocols, and
the consolidation of cables into higher bandwidth links.
· Network Convergence
· Cable Consolidation
Wanted: One Blade Server Network for LAN, SAN, Cluster and SDN Traffic
25
A new best practice for data center managers is to converge traditional shared computing infrastructure
with their growing infrastructure for distributed apps. This is made possible by a new generation of
network adapters and switches with support for the RDMA, VXLAN and NVGRE protocols. Support for
these protocols enables blade servers to converge LAN, SAN, Cluster and SDN traffic on a single
network. It also allows data center managers to use software defined data center tools.
The HP 20Gb FlexibleLOM adapters support stateless hardware offload of TCP, iSCSI and FCoE protocols for LAN/SAN convergence, as well as hardware offload of RDMA, VXLAN and NVGRE for efficient support of cluster and tunnel traffic. The Cisco VIC1340 supports all of the same protocols, with hardware offload for all of the above except iSCSI.
Network Convergence Road Map
· Convergence 1.0 (LAN + SAN): IP, iSCSI and FCoE over Converged Ethernet (CE)
· Convergence 2.0 (LAN + SAN + Clusters + SDN): IP, iSCSI, FCoE, RoCE, VXLAN and NVGRE
At the Xeon E5-2600 inflection point, specialized adapters will no longer be needed to support RDMA. The new class of adapters will also support new tunneling protocols which are essential components of software defined data centers.
A Perfect Fit for Webscale Private Clouds
Network Convergence 2.0
The added support for RDMA over Converged Ethernet, NVGRE and VXLAN allows one adapter port on a blade server to support four network environments. Hardware offload allows the blade server to use precious CPU resources for applications, instead of for network protocol processing.
[Figure: shared, distributed and SDN environments served by one converged adapter port]
Cable Consolidation
A Single 40Gb Link Eliminates Cables for 40 x 1Gb Links or 4 x 10Gb Links
Until recently, 40GbE was used mostly for inter-switch connectivity and in the core of the network. The availability of 40GbE ports on servers sitting at the edge of the network has presented the opportunity for IT pros to consolidate dozens of 1GbE links and handfuls of 10GbE links onto a single cable. This is an area where the HP BladeSystem stands out.
The Cisco UCS architecture makes extensive use of teaming of 10Gb ports to build uplinks with higher bandwidth. That means lots of cables. Even the 40Gb port on the UCS Mini must be split into four cables. In contrast, the Virtual Connect Modules on the HP BladeSystem include four 40GbE ports, which in the apples-to-apples comparison below reduced the number of cables needed from 24 to 2.
Configuring Redundant 40Gb Uplinks for 16 Blade Servers
This diagram shows an apples-to-apples comparison of 16 blade servers configured with redundant connections between servers and switches, and redundant uplinks. Many more cables are needed in the Cisco UCS configuration because the switches are external, and because of the lack of 40Gb ports. Note the Cisco UCS Mini has a 40Gb port, but it can only be used in a 4 x 10GbE configuration.
[Diagram: Cisco UCS uplinks, 24 cables at 4 x 10Gb; HP uplinks, 2 cables at 1 x 40Gb]
Comparing I/O Flexibility
Why it Matters
A new era of agility awaits IT organizations who implement cloud operating systems designed to
manage multiple software defined data centers. Years required for a generation of hardware change
will be replaced by months required to deploy a software update. A foundation for this capability is
overlay networks with tunneling of L2 traffic across data centers using L3 networks. Support for
tunneling protocols is embedded in a new class of network adapters making it easy for private cloud
builders to integrate their servers into a cloud platform. Conversely, IT organizations want to continue
using native Fibre Channel SANs and want the flexibility to choose “if” and “when” they converge LANs
and SANs on Ethernet.
I/O Flexibility Metrics
There are two capabilities which are expected to affect I/O flexibility in Webscale private clouds.
· More efficient delivery of tunnel traffic with hardware offload of tunnel protocol processing
· Support for native Fibre Channel
Tunneling Unlocks the Cloud
Live Migrations a Killer App for VXLAN and NVGRE
One of the most valuable functions of server virtualization is live migration. This function frees system
administrators from the time-consuming and complex process of moving workloads to optimize performance
or mitigate a hardware failure. However, moving VMs on different networks requires extensive network
reconfiguration. IT organizations using data center infrastructure dispersed in public, private or hybrid clouds
simply can’t configure all servers and VMs on one local network, and need a tunneling mechanism to extend
live migrations.
Virtual Extensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE) are protocols for deploying overlay (virtual) networks on top of Layer 3 networks. VXLAN and NVGRE are used to isolate apps and tenants in a cloud and to migrate virtual machines across long distances.
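To ground what tunneling L2 traffic over an L3 network means in practice, here is a minimal, illustrative Python sketch of VXLAN encapsulation as defined in RFC 7348. It is not vendor code and not how the adapters' hardware offload is implemented: the original Ethernet frame is simply prepended with an 8-byte VXLAN header carrying a 24-bit VXLAN Network Identifier (VNI), and the result is carried as an ordinary UDP payload (destination port 4789), which is what lets it cross any routed Layer 3 network.

```python
import struct

VXLAN_UDP_PORT = 4789          # IANA-assigned VXLAN destination port
VXLAN_FLAG_VNI_VALID = 0x08    # "I" flag: the VNI field is valid

def vxlan_encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header (RFC 7348) to an inner L2 frame.

    The result is the UDP payload; the outer IP/UDP headers are added by a
    normal UDP socket, so the L2 frame can ride over an L3 network.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # Header layout: flags (1 byte), reserved (3 bytes), VNI (3 bytes), reserved (1 byte)
    header = struct.pack("!B3s3sB", VXLAN_FLAG_VNI_VALID, b"\x00\x00\x00",
                         vni.to_bytes(3, "big"), 0)
    return header + inner_ethernet_frame

# Example: wrap a dummy Ethernet frame for tenant segment VNI 5000. Sending it to a
# remote VXLAN tunnel endpoint (VTEP) would be an ordinary UDP send -- illustrative only.
payload = vxlan_encapsulate(b"\x00" * 60, vni=5000)
# socket.socket(AF_INET, SOCK_DGRAM).sendto(payload, (remote_vtep_ip, VXLAN_UDP_PORT))
```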
While VXLAN and NVGRE allow live migrations across racks and data centers, RoCE accelerates live migrations. In a Microsoft TechEd demo, migrating a VM on Windows Server 2012 to a like system took just under 1 minute 26 seconds. Windows Server 2012 R2 performed the same migration in just over 32 seconds. Using RoCE combined with SMB Direct during the live migration process, it took just under 11 seconds, without utilizing added CPU resources.
Live Migrations Across the Cloud
[Diagram: overlay network tunnels spanning data centers]
Efficient use of the cloud requires protocols allowing the creation of virtual networks, and allowing Layer 2 network services to traverse Layer 3 networks without network reconfiguration.
Storage Networks
Support for Native Fibre Channel Needed for I/O Flexibility
Based on IT Brand Pulse surveys, 40% of IT organizations are not converging
with FCoE. For the 40% of IT professionals who have been too busy to look
at FCoE, or who say they have no plans to converge their LANs and SANs,
parallel Ethernet and Fibre Channel infrastructure will be deployed.
The modular design of blade servers makes them inherently flexible. But not all blade server platforms are equal when it comes to hosting multiple heterogeneous virtualized workloads and delivering I/O flexibility.
The Cisco UCS blade servers support Ethernet/FCoE connectivity.
The flexible HP BladeSystem supports Ethernet/FCoE, SAS, InfiniBand and
Fibre Channel connectivity.
Wanted: Ethernet & Fibre Channel Networks
In 2014, the prevalent data center network architecture remains a parallel network architecture, including a mix of specialized NIC, iSCSI, and Fibre Channel host adapters, as well as Ethernet and Fibre Channel switched fabrics. Cisco UCS blade servers support only Ethernet connectivity. Adoption of FCoE technology is required to access installed Fibre Channel resources.
Advantage HP
Blade Server Systems                            Cisco UCS in 5108 Chassis                             HP BladeSystem in c7000 Chassis
Chassis Size                                    6U                                                    10U
Max. Blade Servers                              8                                                     16
Mid-plane Bandwidth                             1.2Tb/s                                               7.16Tb/s
Max. Embedded Switches                          2                                                     8
Support for native 20Gb Ethernet                No                                                    Yes
Support for native 40Gb Ethernet
(not including 40Gb port used in 4 x 10 mode)   No                                                    Yes
Support for native Fibre Channel                No                                                    Yes
Support for native InfiniBand                   No                                                    Yes
Oversubscription                                4:1                                                   1.1:1
Hardware offload:
  Fibre Channel over Ethernet (FCoE)            Yes                                                   Yes
  iSCSI                                         No                                                    Yes
  TCP offload engine (TOE)                      Yes                                                   Yes
  RoCE offload engine (ROE)                     Yes                                                   Yes
  VXLAN offload engine (VOE)                    Yes (not yet qualified by VMware)                     Yes
  NVGRE offload engine (NOE)                    Yes (not yet qualified by Microsoft for SMB Direct)   Yes
Designed for Workloads of the Future
The ProLiant Gen9 Blade Server is designed for I/O flexibility with a choice of FlexFabric converged networking or
parallel Ethernet and Fibre Channel networks. The ProLiant Gen9 Blade Server is also fully compliant with Windows
Server 2012 Virtual Fibre Channel—an innovation that will play an important role in the virtualization of Tier-1 workloads
with Microsoft Hyper-V.
• HP Virtual Connect FlexFabric 20/40 F8 Module: supports LAN, SAN, NAS, iSCSI and FCoE connectivity, and supports "FlatSAN" direct connectivity to native Fibre Channel 3PAR storage at a lower cost than using Fibre Channel switches.
• HP LPe1605 16Gb Fibre Channel HBA (718203-B21): native Fibre Channel server adapter, with over 12 million ports shipped on this stack.
• HP FlexFabric 20Gb 2-port 650FLB Adapter: Ethernet LAN on Motherboard (LOM) or Mezz adapter with dual 10/20GbE ports; supports LAN, NAS, iSCSI and FCoE connectivity; FlexFabric ready; supports RoCE for scale-out cluster connectivity and NVGRE and VXLAN for migrating VMs across the cloud.
Summary
Infrastructure of the past is functionally defined and purpose-built. Servers are servers, networking is networking
and storage is storage. These purpose-built devices are deployed with little ability to change the function as
needs change. In the future, infrastructure needs to be more transformative, taking the shape of business
demands.
Potential power and flexibility remain locked inside the aging Cisco UCS 5108 chassis, which severely limits the use of
new high-bandwidth networks and any network other than Ethernet/FCoE.
The new HP BladeSystem answers the call with:
• A new level of convergence which will allow for resources to be allocated at a very granular level, improving
efficiencies and ensuring optimal performance as workload demands change.
• Interfaces to the software-defined data center. HP ProLiant Gen9 blade servers possess the capability to
respond to intelligent orchestration of infrastructure resources in real-time, as applications and user needs
change.
• A cloud-ready architecture that is built to scale out, agile, and always on.
• Workload-optimized for traditional share-everything applications and new share-nothing applications.
Resources
Related Links
OCe14000 Test Report
HP FlexFabric Adapters Provided by Emulex
HP BladeSystem
HP Virtual Connect Technology
HP BladeSystem and Cisco UCS Comparison
Cisco Fabric Extender
Cisco UCS Virtual Interface Card 1340
Cisco UCS 6324 Fabric Interconnect Data Sheet
Cisco UCS Ethernet Switching Modes
IT Brand Pulse
About the Author
Joe Kimpler is a senior analyst responsible for IT Brand Pulse Labs. Joe’s team manages the delivery of technical
services including hands-on testing, product reviews, total cost of ownership studies and product launch collateral.
He has over 30 years of experience in information technology and has held senior engineering and marketing positions at Fujitsu, Rockwell Semiconductors, Quantum and QLogic. Joe holds an engineering degree from the University of Illinois and an MBA in marketing.