HPE SimpliVity Cluster Interop with
ArubaOS-CX Switch
Published: Jan 2019
Rev: 3
© Copyright 2018 Hewlett Packard Enterprise Development LP
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard
Enterprise products and services are set forth in the express warranty statements accompanying such products and
services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise
shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or
copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's
standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise website.
CONTENTS
Introduction
SimpliVity Federation Elements
OmniCube Server Network Interfaces
SimpliVity LAB with ArubaOS-CX Switch
    Bill of Materials
    ArubaOS-CX Configurations
    Connectivity Verifications from the CX Switch
    Verifying the SimpliVity Federation
SimpliVity LAB with ArubaOS-CX Switches in VSX mode
    VMware DvSwitch configuration
    ArubaCX VSX Cluster Configurations
    Connectivity Verifications from the CX Switch
Table of Figures
Introduction
This document describes how to integrate ArubaOS-CX switches with a SimpliVity cluster infrastructure and steps through
the SimpliVity hyper-convergence capabilities and their verification.
SimpliVity Federation Elements
HPE SimpliVity OmniCube (also referred to as OmniStack) systems are essentially ESXi servers with storage that are added
to vCenter datacenters and clusters. Each OmniCube combines compute, storage, and networking resources to consolidate
IT infrastructure and services below the hypervisor level. The HPE OmniStack accelerator card offloads the compression
and deduplication functions in much the same way that TCP offload devices accelerate network traffic.
An HPE SimpliVity Federation is one or more SimpliVity OmniCubes deployed together into a global environment consisting of
multiple vCenter datacenters. Generally, a Federation consists of a minimum of two OmniCubes in a single datacenter. By
adding at least one OmniCube to a remote datacenter serving as a Disaster Recovery or archive site, the Federation becomes
what is known as a "2+1" configuration; without the Disaster Recovery site it is referred to as a "2+0".
OmniCube systems and their software releases are built on specific versions of ESXi and, as such, are intended for specific
versions of vCenter Server and vSphere.
The Arbiter facilitates communication between OmniStack hosts and must run outside the SimpliVity Federation to properly
arbitrate quorum decisions. Common locations for the Arbiter are the vCenter Server (provided vCenter is not running within
the Federation) or a separate Windows server.
Because all management is done directly through the vSphere Web Client, SimpliVity provides a plug-in (the SimpliVity
Extension) that adds SimpliVity functionality to the Web Client. The HPE SimpliVity Extension, installed on the vCenter
Server, lets you centrally manage the entire SimpliVity Federation and its inventory objects.
OmniCube Server Network Interfaces
As shown in Figure 1 below, each HPE SimpliVity OmniCube connects to three separate networks:
1. The Management Network for managing the infrastructure and providing for VM Networking
2. The Federation Network which handles heartbeat communication between OmniCube systems as well as VM
replication and backup traffic
3. The Storage Network, which provides VMs access to the SimpliVity Datastores.
Note: For the Federation and Storage networks, make sure the nodes in a single cluster are L2-adjacent. It is
recommended to connect all cluster nodes to the same top-of-rack (ToR) switch to keep the nodes L2 visible.
If nodes in the same cluster span multiple ToR switches that are not in the same L2 domain, technologies such as
VXLAN can be used to extend L2 so that all nodes remain L2 visible.
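Where cluster nodes land on ToR switches in different L2 domains, a static VXLAN tunnel can stretch the Storage and Federation VLANs between them. The fragment below is only a sketch: the VNI numbers and the loopback/VTEP peer addresses are assumptions for illustration, and VXLAN support varies by ArubaOS-CX platform and software release, so verify against your switch's documentation before use.

interface vxlan 1
    source ip 10.255.0.1
    no shutdown
    vni 2210
        vlan 21
        vtep-peer 10.255.0.2
    vni 2220
        vlan 22
        vtep-peer 10.255.0.2

Here 10.255.0.1 is the local VTEP loopback and 10.255.0.2 the remote ToR's VTEP; VNIs 2210 and 2220 carry the STORAGE and FEDERATION VLANs respectively.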
The PCIe accelerator card offloads compute to allow the OmniCube to preserve CPU resources for other tasks and enables
inline deduplication, optimization, and compression.
Figure 1: OmniCube Server Network Interfaces
In addition to the three networks, each OmniCube requires six IP Addresses spread across those networks. As shown in
Figure 2 below, the six IP Addresses consist of:
1. One (1) at iLO
2. Two (2) at the ESXi-level (VMKernel)
3. Three (3) that will be associated with the OmniCube Virtual Controller (OVC) as part of the deployment process.
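To keep the per-node addressing straight across three networks, the six-address plan can be generated programmatically. The sketch below is illustrative only: the subnet prefixes are modeled loosely on this lab, and the host-octet scheme (10+n for physical, 100+n for OVC) is an assumption, not a SimpliVity requirement.

```python
def ip_plan(node_id):
    """Return the six per-node addresses: iLO, two ESXi VMkernel, three OVC.

    Subnets and host-octet offsets below are hypothetical examples;
    substitute your own addressing plan.
    """
    host = {
        "iLO":       ("192.168.1.",  10 + node_id),
        "ESXi-Mgmt": ("192.168.0.",  10 + node_id),   # VMkernel, Mgmt
        "ESXi-Stor": ("192.168.21.", 10 + node_id),   # VMkernel, Storage
        "OVC-Mgmt":  ("192.168.0.",  100 + node_id),
        "OVC-Stor":  ("192.168.21.", 100 + node_id),
        "OVC-Fed":   ("192.168.22.", 100 + node_id),
    }
    return {role: prefix + str(octet) for role, (prefix, octet) in host.items()}

plan = ip_plan(1)
```

Printing `plan` for each node gives a quick audit sheet before deployment.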
Figure 2: OmniCube Logical Networks and IP Addresses
SimpliVity LAB with ArubaOS-CX Switch
Bill of Materials
1. Qty 3 - HPE SimpliVity 380 servers [2 in DC, 1 in DR]
2. Qty 1 - External Windows server for the Arbiter
3. Qty 3 - ArubaOS-CX switches [2 in DC, 1 in DR] for the Storage and Federation networks
4. Qty 1 - 1 GbE switch for the iLO and Mgmt networks
5. Qty 1 - VMware vCenter
Topology
Figure 3: OmniCube 2 + 1 Topology
Figure 4: OmniCube 2 + 1 Topology – Mgmt and Data plane Connectivity with ArubaOS-CX Switches
As shown in the topology, three Aruba CX switches are connected with 10 Gb interfaces for the Federation and Storage networks
(which can also carry VM traffic), while the iLO and Mgmt interfaces are connected to a 1 GbE switch.
ArubaOS-CX Configurations
8320-SW01 Configuration
vlan 20
name MGMT
vlan 21
name STORAGE
vlan 22
name FEDERATION
interface 1/1/1
no shutdown
mtu 9000
no routing
vlan trunk native 1
vlan trunk allowed all
exit
interface 1/1/2
no shutdown
mtu 9000
no routing
vlan trunk native 1
vlan trunk allowed all
exit
interface 1/1/48
no shutdown
mtu 9000
no routing
vlan trunk native 1
vlan trunk allowed all
exit
8320-SW02 Configuration
vlan 20
name MGMT
vlan 21
name STORAGE
vlan 22
name FEDERATION
interface 1/1/1
no shutdown
mtu 9000
no routing
vlan trunk native 1
vlan trunk allowed all
exit
interface 1/1/2
no shutdown
mtu 9000
no routing
vlan trunk native 1
vlan trunk allowed all
exit
interface 1/1/48
no shutdown
mtu 9000
no routing
vlan trunk native 1
vlan trunk allowed all
exit
Connectivity Verifications from the CX Switch
SW01- Verification
8320-SW1# show interface brief
1/1/1 1 trunk SFP+DA3 yes up 10000
1/1/2 1 trunk SFP+DA3 yes up 10000
1/1/48 1 trunk SFP+DA3 yes up 10000
8320-SW01# show vlan
-------------------------------------------------------------------------------------------
VLAN Name Status Reason Type Interfaces
-------------------------------------------------------------------------------------------
20 MGMT up ok static 1/1/1-1/1/2,1/1/48
21 STORAGE up ok static 1/1/1-1/1/2,1/1/48
22 FEDERATION up ok static 1/1/1-1/1/2,1/1/48
SW02- Verification
8320-SW02#show int brief
1/1/1 1 trunk SFP+DA3 yes up 10000
1/1/2 1 trunk SFP+DA3 yes up 10000
1/1/48 1 trunk SFP+DA3 yes up 10000
8320-SW02# show vlan
-------------------------------------------------------------------------------------------
VLAN Name Status Reason Type Interfaces
-------------------------------------------------------------------------------------------
20 MGMT up ok static 1/1/1-1/1/2,1/1/48
21 STORAGE up ok static 1/1/1-1/1/2,1/1/48
22 FEDERATION up ok static 1/1/1-1/1/2,1/1/48
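When more than a couple of switches are involved, the `show vlan` checks above can be scripted against captured output. A minimal offline sketch (the sample rows are copied from this lab capture; your VLAN plan may differ):

```python
import re

# 'show vlan' data rows captured from 8320-SW01 above
SHOW_VLAN = """\
20   MGMT        up   ok   static   1/1/1-1/1/2,1/1/48
21   STORAGE     up   ok   static   1/1/1-1/1/2,1/1/48
22   FEDERATION  up   ok   static   1/1/1-1/1/2,1/1/48
"""

def parse_show_vlan(text):
    """Return {vlan_id: (name, interfaces)} for each 'up/ok/static' row."""
    rows = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\d+)\s+(\S+)\s+up\s+ok\s+static\s+(\S+)", line)
        if m:
            rows[int(m.group(1))] = (m.group(2), m.group(3))
    return rows

vlans = parse_show_vlan(SHOW_VLAN)
```

Comparing `vlans` against the expected set {20, 21, 22} on every switch catches a missed trunk before the Federation deployment starts.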
Verifying the SimpliVity Federation
OmniVC-01 Interfaces
Log in to the OmniVC as "svtcli" with the password that was set during deployment
(the default password is simplicity).
svtcli@omnicube-ip0-101:~$ ifconfig
eth0 Link encap:Ethernet HWaddr 00:50:56:97:83:1c
inet addr:192.168.0.101 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:124563 errors:0 dropped:54 overruns:0 frame:0
TX packets:134938 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:201880284 (201.8 MB) TX bytes:40533171 (40.5 MB)
eth1 Link encap:Ethernet HWaddr 00:50:56:97:c3:eb
inet addr:192.168.22.101 Bcast:192.168.22.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:35780 errors:0 dropped:24 overruns:0 frame:0
TX packets:28954 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3158482 (3.1 MB) TX bytes:3209893 (3.2 MB)
eth2 Link encap:Ethernet HWaddr 00:50:56:97:8f:45
inet addr:192.168.21.101 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:853669 errors:0 dropped:24 overruns:0 frame:0
TX packets:854570 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:51220108 (51.2 MB) TX bytes:35891940 (35.8 MB)
OmniVC-02 Interfaces
svtcli@omnicube-ip0-102:~$ ifconfig
eth0 Link encap:Ethernet HWaddr 00:50:56:97:f1:9e
inet addr:192.168.0.102 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:104074 errors:0 dropped:10 overruns:0 frame:0
TX packets:135037 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:186036111 (186.0 MB) TX bytes:36365827 (36.3 MB)
eth1 Link encap:Ethernet HWaddr 00:50:56:97:72:16
inet addr:192.168.22.102 Bcast:192.168.22.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:28563 errors:0 dropped:0 overruns:0 frame:0
TX packets:44952 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3176888 (3.1 MB) TX bytes:22502588 (22.5 MB)
eth2 Link encap:Ethernet HWaddr 00:50:56:97:31:e7
[This IP is visible only when the active VC is down]
inet addr:192.168.21.102 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:834702 errors:0 dropped:0 overruns:0 frame:0
TX packets:833767 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:50082120 (50.0 MB) TX bytes:35018214 (35.0 MB)
Testing connectivity between OmniVCs
OmniVC-01
svtcli@omnicube-ip0-101:~$ ping 192.168.22.102
PING 192.168.22.102 (192.168.22.102) 56(84) bytes of data.
64 bytes from 192.168.22.102: icmp_seq=1 ttl=64 time=0.111 ms
64 bytes from 192.168.22.102: icmp_seq=2 ttl=64 time=0.098 ms
svtcli@omnicube-ip0-101:~$ ping 192.168.21.102
PING 192.168.21.102 (192.168.21.102) 56(84) bytes of data.
64 bytes from 192.168.21.102: icmp_seq=1 ttl=64 time=0.016 ms
64 bytes from 192.168.21.102: icmp_seq=2 ttl=64 time=0.028 ms
OmniVC-02
svtcli@omnicube-ip0-102:~$ ping 192.168.22.101
PING 192.168.22.101 (192.168.22.101) 56(84) bytes of data.
64 bytes from 192.168.22.101: icmp_seq=1 ttl=64 time=0.097 ms
64 bytes from 192.168.22.101: icmp_seq=2 ttl=64 time=0.095 ms
svtcli@omnicube-ip0-102:~$ ping 192.168.21.101
PING 192.168.21.101 (192.168.21.101) 56(84) bytes of data.
64 bytes from 192.168.21.101: icmp_seq=1 ttl=64 time=0.097 ms
64 bytes from 192.168.21.101: icmp_seq=2 ttl=64 time=0.095 ms
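Rather than eyeballing the reply lines for every OVC pair, the ping output can be checked programmatically. A minimal sketch (the sample output is copied from the test above; the 1 ms budget is an arbitrary assumption for a same-rack L2 path):

```python
import re

# Reply lines copied from the OmniVC-02 -> 192.168.21.101 test above
PING_OUTPUT = """\
64 bytes from 192.168.21.101: icmp_seq=1 ttl=64 time=0.097 ms
64 bytes from 192.168.21.101: icmp_seq=2 ttl=64 time=0.095 ms
"""

def reply_times_ms(output):
    """Extract round-trip times in milliseconds from ping reply lines."""
    return [float(m.group(1)) for m in re.finditer(r"time=([\d.]+) ms", output)]

times = reply_times_ms(PING_OUTPUT)
ok = bool(times) and max(times) < 1.0  # assumed 1 ms budget, adjust to taste
```

Feeding each captured ping transcript through `reply_times_ms` gives a pass/fail sweep of Storage and Federation reachability in one run.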
SimpliVity Operations from CLI
login as: svtcli
SimpliVity OmniCube
[email protected]'s password:
Last login: Tue Jan 15 03:54:47 UTC 2019 from 15.234.147.20 on pts/12
Welcome to SimpliVity OmniCube 3.7.6.160
svtcli@omnicube-ip0-101:~$ svt-session-start
vCenter server: 10.10.100.240
Enter username: [email protected]
Enter password for [email protected]:
Successful login of [email protected] to 10.10.100.240
Testing SimpliVity Federation Status
[Screenshot: Federation status output]
Get a list of all VMs:
svtcli@omnicube-ip0-101:~$ svt-vm-show
Show all datastores:
svtcli@omnicube-ip0-101:~$ svt-datastore-show
Show all backup policies:
svtcli@omnicube-ip0-101:~$ svt-policy-show
Clone an existing VM:
svtcli@omnicube-ip0-102:~$ svt-vm-clone --vm LNX-VM-01 --datastore DS02-PROD
Move a VM to a new datastore:
svtcli@omnicube-ip0-102:~$ svt-vm-move -vm LNX-VM-01 -source DS01-PROD -destination DS02-PROD --force
SimpliVity Operations from GUI
As shown in the screenshot above, SimpliVity actions let you back up,
clone, and move virtual machines, and set or update a backup policy
on a virtual machine.
At the vSphere cluster level, SimpliVity actions let you search backups and
create a common datastore across all nodes in the cluster.
SimpliVity LAB with ArubaOS-CX Switches in VSX mode
VSX is a high-availability technology purpose-built for the campus core. Designed using the best features of existing
HA technologies such as Multi-Chassis Link Aggregation (MC-LAG) and Virtual Switching Framework (VSF), VSX enables a
distributed, redundant architecture that remains highly available during upgrades by design. High availability is
delivered through redundancy: two chassis are deployed in the core, each maintaining independent control planes while
synchronizing state via the unique ArubaOS-CX database architecture.
VSX’s benefits include the flexibility to support network designs offered by other virtualization approaches. Supported designs
include:
Dual control plane architecture: Allows for better redundancy and independently upgradable firmware. With the
enhanced configuration synchronization features and unified troubleshooting capabilities, VSX management is highly simplified.
Active-Active L2: There is no need for a spanning-tree protocol, there are no blocked links, and the network quickly
re-converges in the event of link or device failures.
Active-Active L3: VSX switches can run OSPF, BGP, and PIM over MC-LAG links for communication between
aggregation and core. While the control plane is split, the data path is unified: the switch that receives a packet forwards it directly to its downstream neighbor, without the traffic having to traverse the VSX peer first.
DHCP Relay redundancy: Both aggregation switches can be configured as DHCP forwarders but only one of the devices
plays an active role in relaying DHCP requests between the clients and the DHCP server.
No First Hop Redundancy Protocol (FHRP) configuration required: No VRRP configuration is needed; if
one device fails, the other simply takes over and forwards all traffic.
As shown in the topology below, when a SimpliVity cluster is connected to a VSX topology, it is recommended to configure
link aggregation. VMware supports LACP link aggregation using Distributed Switches.
Figure 5: OmniCube 2 + 1 Topology – ArubaOS-CX Switches in VSX Cluster
VMware DvSwitch configuration
Here are the settings on the VMware DvSwitch with LACP enabled:
And here is how the LAG sub-interfaces appear as uplinks to the DvSwitch:
ArubaCX VSX Cluster Configurations
8320-SW01 Configuration
vrf KeepAlive
interface 1/1/16
no shutdown
vrf attach KeepAlive
ip address 192.168.10.1/29
!
vlan 20
name MGMT
vsx-sync
vlan 21
name STORAGE
vsx-sync
vlan 22
name FEDERATION
vsx-sync
!
interface lag 1
vsx-sync vlans
description ISL LAG
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 20,21,22
lacp mode active
!
vsx
inter-switch-link lag 1
role primary
keepalive peer 192.168.10.2 source 192.168.10.1 vrf KeepAlive
!
interface lag 10 multi-chassis
vsx-sync vlans
description DL380-1 VSX LAG
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 20,21,22
lacp mode active
loop-protect
interface lag 20 multi-chassis
vsx-sync vlans
description DL380-2 VSX LAG
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 20,21,22
lacp mode active
loop-protect
interface 1/1/1
no shutdown
mtu 9000
lag 10
exit
interface 1/1/2
no shutdown
mtu 9000
lag 20
exit
interface 1/1/15,1/1/48
no shutdown
mtu 9000
lag 1
exit
!
8320-SW02 Configuration
vrf KeepAlive
interface 1/1/16
no shutdown
vrf attach KeepAlive
ip address 192.168.10.2/29
!
vlan 20
name MGMT
vsx-sync
vlan 21
name STORAGE
vsx-sync
vlan 22
name FEDERATION
vsx-sync
!
interface lag 1
vsx-sync vlans
description ISL LAG
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 20,21,22
lacp mode active
!
vsx
inter-switch-link lag 1
role secondary
keepalive peer 192.168.10.1 source 192.168.10.2 vrf KeepAlive
!
interface lag 10 multi-chassis
vsx-sync vlans
description DL380-1 VSX LAG
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 20,21,22
lacp mode active
loop-protect
interface lag 20 multi-chassis
vsx-sync vlans
description DL380-2 VSX LAG
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 20,21,22
lacp mode active
loop-protect
interface 1/1/1
no shutdown
mtu 9000
lag 10
exit
interface 1/1/2
no shutdown
mtu 9000
lag 20
exit
interface 1/1/15,1/1/48
no shutdown
mtu 9000
lag 1
exit
Connectivity Verifications from the CX Switch
SW02- Verification
8320-SW2# show vsx status
VSX Operational State
---------------------
ISL channel : In-Sync
ISL mgmt channel : operational
Config Sync Status : in-sync
NAE : peer_reachable
HTTPS Server : peer_reachable
Attribute Local Peer
------------ -------- --------
ISL link lag1 lag1
ISL version 2 2
System MAC d0:67:26:49:6b:fa d0:67:26:49:cc:f2
Platform 8320 8320
Software Version TL.10.02.0001 TL.10.02.0001
Device Role secondary primary
8320-SW2# sh run vsx-sync
Current vsx-sync configuration:
!
!Version ArubaOS-CX TL.10.02.0001
interface lag 1
vsx-sync vlans
description ISL LAG
no shutdown
no routing
vlan trunk native 1 tag
vlan trunk allowed 20-22
lacp mode active
interface lag 10 multi-chassis
vsx-sync vlans
description DL380-1 VSX LAG
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 20-21
lacp mode active
interface lag 20 multi-chassis
vsx-sync vlans
description DL380-2 VSX LAG
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 20-21
lacp mode active
vlan 20
name MGMT
vsx-sync
vlan 21
name STORAGE
vsx-sync
vlan 22
name FEDERATION
vsx-sync
8320-SW2# show run vsx-sync peer-diff
--- /tmp/running-config-vsx.dbb 2018-10-31 02:43:17.247284541 +0000
+++ /tmp/peer-running-config-vsx.dbb 2018-10-31 02:43:17.241284542 +0000
@@ -18,6 +18,7 @@
vlan trunk native 1
vlan trunk allowed 20-21
lacp mode active
+ loop-protect
interface lag 20 multi-chassis
vsx-sync vlans
description DL380-2 VSX LAG
@@ -26,6 +27,7 @@
vlan trunk native 1
vlan trunk allowed 20-21
lacp mode active
+ loop-protect
vlan 20
name MGMT
vsx-sync
RU35-8320# show lacp aggregates
Aggregate name : lag1 <<ISL link>>
Interfaces : 1/1/48 1/1/15
Heartbeat rate : Slow
Hash : l3-src-dst
Aggregate mode : Active
Aggregate name : lag10 (multi-chassis)
Interfaces : 1/1/1
Peer interfaces : 1/1/1
Heartbeat rate : Slow
Hash : l3-src-dst
Aggregate mode : Active
Aggregate name : lag20 (multi-chassis)
Interfaces : 1/1/2
Peer interfaces : 1/1/2
Heartbeat rate : Slow
Hash : l3-src-dst
Aggregate mode : Active
8320-SW1# show interface brief
1/1/1 1 trunk SFP+DA3 yes up 10000
1/1/2 1 trunk SFP+DA3 yes up 10000
1/1/48 1 trunk SFP+DA3 yes up 10000
8320-SW01# show vlan
-------------------------------------------------------------------------------------------
VLAN Name Status Reason Type Interfaces
-------------------------------------------------------------------------------------------
20 MGMT up ok static 1/1/1-1/1/2,1/1/48
21 STORAGE up ok static 1/1/1-1/1/2,1/1/48
22 FEDERATION up ok static 1/1/1-1/1/2,1/1/48
8320-SW02# show vlan
-------------------------------------------------------------------------------------------
VLAN Name Status Reason Type Interfaces
-------------------------------------------------------------------------------------------
1 DEFAULT_VLAN_1 up ok default 1/1/3,1/1/47
20 MGMT up ok static 1/1/3,1/1/47,lag1,lag10,lag20
21 STORAGE up ok static 1/1/3,1/1/47,lag1,lag10,lag20
22 FEDERATION up ok static 1/1/3,1/1/47,lag1,lag10,lag20
8320-SW02# sh int lag10
Aggregate-name lag10
Aggregate lag10 is up
Admin state is up
Description : DL380-1 VSX LAG
MAC Address : d0:67:26:49:cc:f2
Aggregated-interfaces : 1/1/1
Aggregation-key : 10
Aggregate mode : active
Speed 10000 Mb/s
L3 Counters: Rx Disabled, Tx Disabled
qos trust none
VLAN Mode: native-untagged
Native VLAN: 1
Allowed VLAN List: 20-22
Rx
61445 input packets 16154436 bytes
0 input error 623 dropped
0 CRC/FCS
Tx
61723 output packets 11147807 bytes
0 input error 0 dropped
0 collision
8320-SW02# sh int lag20
Aggregate-name lag20
Aggregate lag20 is up
Admin state is up
Description : DL380-2 VSX LAG
MAC Address : d0:67:26:49:cc:f2
Aggregated-interfaces : 1/1/2
Aggregation-key : 20
Aggregate mode : active
Speed 10000 Mb/s
L3 Counters: Rx Disabled, Tx Disabled
qos trust none
VLAN Mode: native-untagged
Native VLAN: 1
Allowed VLAN List: 20-22
Rx
60705 input packets 10937083 bytes
0 input error 682 dropped
0 CRC/FCS
Tx
62681 output packets 16305841 bytes
0 input error 6 dropped
0 collision
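To put the dropped counters above in context, it helps to express drops as a percentage of total packets rather than reading the raw counter. A minimal sketch using the lag20 Rx figures from this capture:

```python
def drop_pct(dropped, total):
    """Dropped packets as a percentage of total (0.0 when there is no traffic)."""
    return 100.0 * dropped / total if total else 0.0

# Rx figures from the 'show int lag20' output above
lag20_rx = drop_pct(682, 60705)
```

A small residual drop count on a VLAN-filtered trunk may simply reflect frames arriving on VLANs not allowed on the port; it is the trend of the percentage over time, rather than the absolute counter, that usually warrants investigation.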
Table of Figures
Figure 1: OmniCube Server Network Interfaces
Figure 2: OmniCube Logical Networks and IP Addresses
Figure 3: OmniCube 2 + 1 Topology
Figure 4: OmniCube 2 + 1 Topology – Mgmt and Data Plane Connectivity with ArubaOS-CX Switches
Figure 5: OmniCube 2 + 1 Topology – ArubaOS-CX Switches in VSX Cluster