HP BladeSystem Networking Reference Architecture: HP Virtual Connect FlexFabric Module and VMware vSphere 4
Technical white paper

Table of contents
Executive Summary
Virtual Connect FlexFabric Module Hardware Overview
Designing an HP FlexFabric Architecture for VMware vSphere
Designing a Highly Available Network Strategy with Virtual Connect FlexFabric modules and Managed VLANs
Designing a Highly Available Network Strategy with FlexFabric modules and Pass-through VLANs
Designing a vSphere Network Architecture with the Virtual Connect FlexFabric module
vNetwork Distributed Switch Design
Hypervisor Load Balancing Algorithms
HP Virtual Connect and DCC
Appendix A: Virtual Connect Bill of Materials
Appendix B: Terminology cross-reference
Appendix C: Glossary of Terms
For more information
Executive Summary

HP has revolutionized the way IT thinks about server networking. With the introduction of the HP ProLiant BladeSystem and Virtual Connect Flex-10 modules, HP provided the first technology to divide and fine-tune server network connections. Combined with Virtual Connect server profiles, this simplified provisioning in the datacenter.

HP has since evolved Virtual Connect further. By combining the power of ProLiant server blades with the Virtual Connect FlexFabric module, Ethernet and storage fabrics are converged into a single module, reducing the need for separate HBA adapters and Fibre Channel interconnect modules.

These servers include increased memory population, room for additional mezzanine adapters, and AMD Opteron processors, and come standard with an embedded FlexFabric Adapter:
BL465c G7
BL685c G7

The following ProLiant server blades are also supported:
BL460c G7
BL490c G7
BL620c/BL680c G7

Additionally, the NC551m FlexFabric Adapter can be added to these servers.

The Virtual Connect FlexFabric module presents each adapter port as a set of Physical Functions, called FlexNICs and FlexHBAs, allowing administrators to fine-tune each connection. Using the Virtual Connect FlexFabric module to uplink outside the enclosure, Service Console, VMkernel, and virtual machine traffic can share a converged infrastructure with fewer components.

When designing a vSphere cluster on this infrastructure, there are two frequent network strategies for a highly available Virtual Connect design; both are discussed in the sections that follow.

1 http://h18004.www1.hp.com
Important: Even though the Virtual Connect FlexFabric module supports stacking, stacking applies only to Ethernet traffic. FC uplinks cannot be consolidated: FC ports cannot be stacked, nor is a multi-hop DCB bridging fabric available today.
Designing an HP FlexFabric Architecture for VMware vSphere
In this section, we discuss two different, viable strategies customers can choose from. Both provide flexible connectivity for hypervisor environments. We outline the pros and cons of each approach and provide the general steps to configure the environment.
Designing a Highly Available Network Strategy with Virtual Connect FlexFabric modules and Managed VLANs
In this design, two HP ProLiant c-Class 7000 Enclosures with Virtual Connect FlexFabric modules are stacked to form a single Virtual Connect management domain2. By stacking Virtual Connect FlexFabric modules, customers can realize the following benefits:
Management control plane consolidated
More efficient use of WWID, MAC and Serial Number Pools
Provide greater uplink port flexibility and bandwidth
Profile management across stacked enclosures
Shared Uplink Sets provide administrators the ability to distribute VLANs into discrete and defined Ethernet Networks (vNets). These vNets can then be mapped logically to a Server Profile Network Connection, allowing only the required VLANs to be associated with the specific server NIC port. This also gives customers the flexibility to present different network connections to different physical Operating System instances (e.g., a VMware ESX host and a physical Windows host).
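The VLAN-to-vNet-to-profile mapping described above can be sketched in a few lines of Python (an illustration of the mapping logic only, not Virtual Connect Manager syntax; the vNet names are hypothetical):

```python
# Illustrative sketch of the Shared Uplink Set mapping described above:
# a SUS carries several VLANs as named vNets, and each server profile
# network connection exposes only the vNets it needs to that NIC port.

shared_uplink_set = {          # VLAN ID -> vNet name (hypothetical names)
    101: "vNet-Prod",
    102: "vNet-VMotion",
    103: "vNet-Mgmt",
}

def profile_connection(vnet_names, sus):
    """Return the VLAN->vNet entries exposed to one server NIC port."""
    return {vlan: name for vlan, name in sus.items() if name in vnet_names}

# An ESX host NIC that only needs the production and VMotion networks:
print(profile_connection({"vNet-Prod", "vNet-VMotion"}, shared_uplink_set))
```

Only the VLANs mapped to the requested vNets reach the server port; the management VLAN stays invisible to this NIC.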
As of Virtual Connect Firmware 2.30 release, the following Shared Uplink Set rules apply per domain:
320 Unique VLANs per Virtual Connect Ethernet module
128 Unique VLANs per Shared Uplink Set
28 Unique Server Mapped VLANs per Server Profile Network Connection
Every VLAN on every uplink counts towards the 320-VLAN limit. If a Shared Uplink Set is comprised of multiple uplinks, each VLAN on that Shared Uplink Set is counted multiple times.
2 Only available with Virtual Connect Manager Firmware 2.10 or greater. Please review the Virtual Connect Manager Release Notes for more information regarding domain stacking requirements: http://h18004.www1.hp.com/products/blades/components/c-class-tech-installing.html
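The counting rule above can be made concrete with a short sketch (a hypothetical capacity-planning helper, not an HP tool):

```python
# Sketch of the VLAN-counting rule described above: every VLAN on every
# uplink counts toward the 320-VLAN-per-module limit, so a Shared Uplink
# Set with multiple uplinks counts each of its VLANs once per uplink.

MODULE_VLAN_LIMIT = 320        # unique VLANs per VC Ethernet module
SUS_VLAN_LIMIT = 128           # unique VLANs per Shared Uplink Set

def vlan_usage(shared_uplink_sets):
    """shared_uplink_sets: list of (uplink_count, vlan_count) tuples."""
    total = 0
    for uplinks, vlans in shared_uplink_sets:
        if vlans > SUS_VLAN_LIMIT:
            raise ValueError(f"{vlans} VLANs exceeds the {SUS_VLAN_LIMIT}-VLAN SUS limit")
        total += uplinks * vlans   # each VLAN counted once per uplink
    return total

# Two SUSes, each with 2 uplinks and 100 VLANs: 2*100 + 2*100 = 400,
# which already exceeds the 320-VLAN module limit.
usage = vlan_usage([(2, 100), (2, 100)])
print(usage, usage <= MODULE_VLAN_LIMIT)
```

The example shows how quickly multi-uplink Shared Uplink Sets consume the per-module budget even though only 100 unique VLANs are defined.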
By providing two stacked Enclosures, this design allows for not only Virtual Connect FlexFabric module failure, but also Enclosure failure. The uplink ports assigned to each Shared Uplink Set (SUS) were split across modules for horizontal redundancy purposes, as shown in Figure 1-2.

IP-based storage (NFS and/or iSCSI) can be dedicated and segregated by a separate vNet and uplink port. This approach provides administrators the ability to dedicate a network (physical, or logical within a Shared Uplink Set) to provide access to IP-based storage. A storage array has certain limitations:

Each array and port will require a unique vNet
Each array will require separate server network connections
The number of IP-based arrays is based on the number of unassigned uplink ports

Virtual Connect also provides the capability to create an internal, private network without uplink ports, by using the backplane and stacking link connections to facilitate communication. This vNet can be used for cluster traffic, or in this case VMotion and/or Fault Tolerance traffic. Traffic will not pass to the external infrastructure, which will eliminate the bandwidth otherwise consumed.

Figure 1-1: vSphere cluster design

The X5 and X6 Ethernet ports of the FlexFabric module are physically cabled to a redundant pair of Top of Rack (ToR) switches, using LACP (802.3ad) for link redundancy. The ToR switches can be placed End of Row to save on infrastructure cost. Ports X7 are used for vertical External Stacking Links, while X8 are used for Internal Stacking Links.

As noted in the previous section, Virtual Connect FlexFabric Stacking Links will only carry Ethernet traffic, and do not provide any Fibre Channel stacking options. Thus, ports X1 and X2 from each module are populated with 8Gb SFP+ transceivers, providing 16Gb net FC bandwidth for storage access. Ports X3 and X4 are available to provide additional bandwidth if FC storage traffic is necessary.

If additional Ethernet bandwidth is necessary, ports Enc0:Bay2:X5, Enc0:Bay2:X6, Enc1:Bay1:X5, and Enc1:Bay1:X6 can be used for additional Ethernet Networks or Shared Uplink Sets.

Figure 1-2: Physical design
Figure 1-3: Logical design
Designing a Highly Available Network Strategy with FlexFabric modules and Pass-through VLANs

In this design, two HP ProLiant c-Class 7000 Enclosures with Virtual Connect FlexFabric modules are stacked to form a single Virtual Connect management domain3. By stacking Virtual Connect FlexFabric modules, customers can realize the following benefits:

Management control plane consolidated
More efficient use of WWID, MAC and Serial Number Pools
Provide greater uplink port flexibility and bandwidth
Profile management across stacked enclosures

This design does not take into account other physical servers. If the design requires support for multiple types of physical OS instances, and those hosts require access to a specific VLAN, additional uplink ports will add cost and administrative overhead to the overall design.

This design also does not take into account where VLANs are required for Virtual Machine networking. If there is such a prerequisite, it will be necessary to tunnel the specific VLAN(s).

By providing two stacked Enclosures, this design allows for not only Virtual Connect module failure, but also Enclosure failure. The uplink ports are split across modules for horizontal redundancy purposes. To reduce transceiver cost, a 1Gb RJ45 SFP transceiver would be used to provide Service Console connectivity.

IP-based storage (NFS and/or iSCSI) can be dedicated to an assigned uplink port, as in the previous design.

3 Only available with Virtual Connect Manager Firmware 2.10 or greater. Please review the Virtual Connect Manager Release Notes for more information regarding domain stacking requirements: http://h18004.www1.hp.com/products/blades/components/c-class-tech-installing.html
Virtual Machine
FT Logging
iSCSI
NFS
Service Console
Management
vMotion
NetIOC can be used to control identified traffic, when multiple types of traffic are sharing the same pNIC. In our design example, FT Logging could share the same vDS as the vmkernel, and NetIOC would be used to control the two types of traffic.
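The principle behind NetIOC is easy to see with a minimal sketch of share-based bandwidth partitioning (illustrative only; the traffic names and share values are made up, and this is not VMware's implementation):

```python
# Minimal sketch of share-based bandwidth partitioning, the principle
# behind NetIOC. Under contention, each traffic type receives pNIC
# bandwidth in proportion to its configured shares.

def allocate_bandwidth(link_gbps, shares):
    """shares: dict of traffic type -> share value. Returns Gbps per type."""
    total = sum(shares.values())
    return {t: link_gbps * s / total for t, s in shares.items()}

# Example: VMotion and FT Logging sharing one 10Gb pNIC with equal
# (hypothetical) share values.
alloc = allocate_bandwidth(10, {"vmotion": 50, "ft_logging": 50})
print(alloc)  # each type gets 5.0 Gbps under full contention
```

When the link is not congested, either traffic type can burst beyond its proportional allocation; the shares only matter under contention.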
With the design example given, there are three options one could choose to incorporate FT Logging:
Table 2-2 VMware Fault Tolerance Options
Share with VMotion network (Rating: ***): The design choice to keep VMotion traffic internal to the Enclosure allows the use of low-latency links for inter-Enclosure communication. By giving enough bandwidth for VMotion and FT traffic, while defining a NetIOC policy, latency should not be an issue.

Non-redundant VMotion and FT networks (Rating: **): Dedicate one pNIC to VMotion traffic, and the other to FT logging traffic. Neither network will provide pNIC redundancy.

Add additional FlexFabric Adapters and Modules (Rating: *): This option increases the overall CapEx of the solution, but will provide more bandwidth options.
Hypervisor Load Balancing Algorithms
VMware provides a number of different NIC teaming algorithms, which are outlined in Table 2-3. As the table shows, any of the available algorithms can be used except IP Hash. IP Hash requires switch-assisted load balancing (802.3ad), and Virtual Connect does not support 802.3ad on server downlink ports. HP and VMware recommend using Originating Virtual Port ID.
Table 2-3 VMware Load Balancing Algorithms
Originating Virtual Port ID: Choose an uplink based on the virtual port where the traffic entered the virtual switch. (Works with VC: Yes)

Source MAC Address: Choose an uplink based on the source MAC address seen on the vNIC port. (Works with VC: Yes)

IP Hash: Hash of the source and destination IPs. Requires switch-assisted load balancing (802.3ad); Virtual Connect does not support 802.3ad on server downlink ports, as 802.3ad is a point-to-point bonding protocol. (Works with VC: No)

Explicit Failover: Use the highest-order uplink from the list of Active pNICs. (Works with VC: Yes)
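The difference between the two hashing behaviors in Table 2-3 can be sketched as follows (an illustration of the selection logic, not ESX source code; the vmnic names are hypothetical):

```python
# Sketch of the two uplink-selection behaviors contrasted in Table 2-3.
# Originating Virtual Port ID pins each virtual port to one uplink, so
# no switch-side aggregation is needed. IP Hash spreads one source
# across uplinks per destination, which is why it requires
# switch-assisted load balancing (802.3ad) that Virtual Connect does
# not provide on server downlinks.
import zlib

def uplink_by_port_id(virtual_port_id, uplinks):
    # Each virtual port maps to exactly one active uplink.
    return uplinks[virtual_port_id % len(uplinks)]

def uplink_by_ip_hash(src_ip, dst_ip, uplinks):
    # Flows from one VM can land on different uplinks per destination.
    key = f"{src_ip}->{dst_ip}".encode()
    return uplinks[zlib.crc32(key) % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
print(uplink_by_port_id(7, uplinks))                       # deterministic per port
print(uplink_by_ip_hash("10.0.0.5", "10.0.1.9", uplinks))  # varies per IP pair
```

With port-ID selection, the upstream switch always sees a given VM's MAC on a single uplink; with IP Hash, the same MAC can appear on multiple uplinks, which only works when the switch treats those ports as one aggregated link.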
HP Virtual Connect and DCC
Virtual Connect firmware v2.30 introduced Device Control Channel (DCC) support to enable Smart Link, Dynamic Bandwidth Allocation, and Network Assignment to FlexNICs without powering off the server. There are three components required for DCC:
Virtual Connect firmware (v2.30 or newer)
NIC Firmware (Bootcode 5.0.11 or newer)
NIC Driver (Windows Server v5.0.32.0 or newer; Linux 5.0.19-1 or newer; VMware ESX 4.0 v1.52.12.v40.3; VMware ESX 4.1 v1.60)
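Verifying the component minimums above can be sketched with a simple dotted-version compare (a generic illustration, not an HP utility; it handles plain dotted versions only, not vendor strings like v1.52.12.v40.3):

```python
# Hedged sketch: checking DCC prerequisites such as "Bootcode 5.0.11
# or newer" with a tuple-wise version compare. Plain dotted versions
# only; this is an illustration, not an HP-supplied check.

def parse_version(version):
    return tuple(int(part) for part in version.split("."))

def meets_minimum(installed, minimum):
    # Tuple comparison is element-wise: (5, 0, 12) >= (5, 0, 11).
    return parse_version(installed) >= parse_version(minimum)

print(meets_minimum("5.0.12", "5.0.11"))  # True: bootcode is new enough
print(meets_minimum("5.0.9", "5.0.11"))   # False: firmware update needed
```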
Appendix A: Virtual Connect Bill of Materials

Part number | Description | Quantity
571956-B21 | HP Virtual Connect FlexFabric Ethernet Module | 4
487649-B21 | 0.5m 10Gb SFP+ DAC Stacking Cable | 2
AJ716A | HP StorageWorks 8Gb B-series SW SFP+ | 8
453154-B21 | 1Gb RJ45 SFP transceiver | 2
455883-B21 | 10Gb SR SFP+ transceiver | 4
Or
487655-B21 | 3m SFP+ 10Gb Copper DAC | 4
Appendix B: Terminology cross-reference
Table B-1 Terminology cross-reference

Customer term | Industry term | IEEE term | Cisco term | Nortel term | HP Virtual Connect term
Port Bonding or Virtual Port | Port Aggregation or Port-trunking, LACP | 802.3ad, LACP | EtherChannel or channeling (PAgP) | MultiLink Trunking (MLT) | 802.3ad LACP
VLAN Tagging | VLAN Trunking | 802.1Q | Trunking | 802.1Q | Shared Uplink Set
Appendix C: Glossary of Terms

Table C-1 Glossary
Term Definition
vNet/Virtual Connect Ethernet Network
A standard Ethernet Network consists of a single broadcast domain. However, when “VLAN Tunnelling” is enabled within the Ethernet Network, VC will treat it as an 802.1Q Trunk port, and all frames will be forwarded to the destined host untouched.
Shared Uplink Set (SUS) An uplink port or a group of uplink ports, where the upstream switch port(s) is configured as an 802.1Q trunk. Each associated Virtual Connect Network within the SUS is mapped to a specific VLAN on the external connection, where VLAN tags are removed or added as Ethernet frames enter or leave the Virtual Connect domain.
Auto Port Speed** Let VC automatically determine best FlexNIC speed
Custom Port Speed** Manually set FlexNIC speed (up to Maximum value defined)
DCC** Device Control Channel: method for VC to change FlexNIC or FlexHBA Adapter port settings on the fly (without a power off/on)
EtherChannel* A Cisco proprietary technology that combines multiple NIC or switch ports for greater bandwidth, load balancing, and redundancy. The technology allows for bi-directional aggregated network traffic flow.
FlexNIC** One of four virtual NIC partitions available per FlexFabric Adapter port. Each capable of being tuned from 100Mb to 10Gb
FlexHBA*** The second Physical Function providing an HBA for either Fibre Channel or iSCSI functions
IEEE 802.1Q An industry standard protocol that enables multiple virtual networks to run on a single link/port in a secure fashion through the use of VLAN tagging.
IEEE 802.3ad An industry standard protocol that allows multiple links/ports to run in parallel, providing a virtual single link/port. The protocol provides greater bandwidth, load balancing, and redundancy.
LACP Link Aggregation Control Protocol (see IEEE802.3ad)
LOM LAN-on-Motherboard. Embedded network adapter on the system board
Maximum Link Connection Speed**
Maximum FlexNIC speed value assigned to vNet by the network administrator. Can NOT be manually overridden on the server profile.
Multiple Networks Link Speed Settings**
Global Preferred and Maximum FlexNIC speed values that override defined vNet values when multiple vNets are assigned to the same FlexNIC
MZ1 or MEZZ1; LOM Mezzanine Slot 1; LAN on Motherboard/system board NIC
Network Teaming Software A software that runs on a host, allowing multiple network interface ports to be combined to act as a single virtual port. The software provides greater bandwidth, load balancing, and redundancy.
pNIC Physical NIC port. A FlexNIC is seen by VMware as a pNIC
Port Aggregation Combining ports to provide one or more of the following benefits: greater bandwidth, load balancing, and redundancy.
Port Aggregation Protocol (PAgP)*
A Cisco proprietary protocol that aids in the automatic creation of Fast EtherChannel links. PAgP packets are sent between Fast EtherChannel-capable ports to negotiate the forming of a channel.
Port Bonding A term typically used in the Unix/Linux world that is synonymous to NIC teaming in the Windows world.
Preferred Link Connection Speed**
Preferred FlexNIC speed value assigned to a vNet by the network administrator.
Trunking (Cisco) 802.1Q VLAN tagging
Trunking (Industry) Combining ports to provide one or more of the following benefits: greater bandwidth, load balancing, and redundancy. See also Port Aggregation.
VLAN A virtual network within a physical network.
VLAN Tagging Tagging/marking an Ethernet frame with an identity number representing a virtual network.
VLAN Trunking Protocol (VTP)* A Cisco proprietary protocol used for configuring and administering VLANs on Cisco network devices.
vNIC Virtual NIC port. A software-based NIC used by VMs
*The feature is not supported by Virtual Connect.
**The feature was added for Virtual Connect Flex-10.
***The feature was added for Virtual Connect FlexFabric modules.