BRKCOM-2003 UCS Networking Deep Dive www.ciscolivevirtual.com
Transcript
Page 1: BRKCOM-2003

BRKCOM-2003

UCS Networking – Deep Dive

www.ciscolivevirtual.com

Page 2: BRKCOM-2003

© 2012 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKCOM-2003 2

Agenda

• Overview / System Architecture
  – Physical Architecture
  – Logical Architecture
• Switching Modes of the Fabric Interconnect
• Fabric Failover
• Ethernet Switching Modes Recommendations
• SAN / LAN Northbound Connections
  – FC/FCoE storage direct attach, NAS appliance port
  – Port channel and vPC uplinks, traffic flows and failures
• Adapter Offerings
• UCS Generation-2 Hardware

Page 3: BRKCOM-2003

Overview

Page 4: BRKCOM-2003


Unified Computing System (UCS)

Single Point of Management

Unified Fabric

Stateless Servers with Virtualised Adapters

Page 5: BRKCOM-2003

UCS Building Blocks

• UCS Manager – embedded, manages the entire system
• UCS Fabric Interconnect – 20-port or 40-port 10Gb FCoE
• UCS Fabric Extender – remote line card
• UCS Blade Server Chassis – flexible bay configurations
• UCS Blade or Rack Server – industry-standard architecture
• UCS Virtual Adapters – choice of multiple adapters

Page 6: BRKCOM-2003

Cisco UCS Networking: Physical Architecture

[Figure: a Fabric Interconnect cluster – 6100 Fabric A and 6100 Fabric B – with uplink ports to SAN A, SAN B, ETH 1 and ETH 2, out-of-band MGMT, and server ports down to Chassis 1 … Chassis 20. Each chassis holds IOM A and IOM B fabric extenders connecting to CNAs on half/full-width B200 and B250 compute blades with virtualised adapters.]

Page 7: BRKCOM-2003

Cisco UCS Networking: Physical Architecture – Management Plane

[Figure: the same topology with the management plane highlighted – the 6100 Fabric A/B cluster, uplink ports (SAN A, SAN B, ETH 1, ETH 2), OOB MGMT, and server ports to Chassis 1 (IOM A/B fabric extenders, CNA, half/full-width B200 compute blades with virtualised adapters), plus a rack-mount server attached through a VIC and a FEX pair.]

Page 8: BRKCOM-2003

Network Interface Virtualisation (NIV)

[Figure: a service profile (server) on a blade exposes vNIC 1 and vHBA 1 on the adapter; a physical cable runs from the adapter through the IOM to the Fabric Interconnect, carrying a virtual cable (VNTag) that terminates on vEth 1 and vFC 1.]

• vNIC (LIF): a host-presented PCI device managed by UCSM.
• VIF: the policy application point where a vNIC connects to the UCS fabric.
• VNTag: a tag appended to the packet containing the source and destination IDs used for switching within the UCS fabric.

Page 9: BRKCOM-2003

Abstracting the Logical Architecture

[Figure: "What you get" – the blade's vNIC 1 and vHBA 1 on the adapter, a physical 10GE cable to IOM A, and a virtual cable (VN-Tag) to vEth 1 and vFC 1 on 6100-A. "What you see" – the service profile (server) with vNIC 1 and vHBA 1 cabled directly to vEth 1 and vFC 1 on 6100-A.]

• Dynamic, rapid provisioning
• State abstraction
• Location independence
• Blade or rack

Page 10: BRKCOM-2003

Hardware Components

Page 11: BRKCOM-2003


UCS 6100 Hardware Architecture

Page 12: BRKCOM-2003

2104-IOM Architecture

Components:
• Woodside ASIC – aggregates traffic to/from 32 host-facing 10G Ethernet ports from/to 8 network-facing 10G Ethernet ports
• CPU (also referred to as the CMC) – controls Redwood and performs other chassis-management functions
• L2 switch – aggregates traffic from the BMCs on the server blades

No local switching – all traffic from HIFs goes upstream for switching.

[Figure: the Chassis Management Controller (with FLASH, EEPROM, DRAM, control IO and chassis signals) alongside the Redwood ASIC, which has fabric ports 1–4 up to the Interconnect and 8 backplane ports down to the blades.]

Woodside interfaces: HIF (backplane ports), NIF (fabric ports), BIF, CIF.

Page 13: BRKCOM-2003

UCS IOM – Slot to Uplink Pinning

Number of active fabric links → blades pinned to each fabric link:
• 1 link: all HIF ports pinned to the active link
• 2 links: blades 1, 3, 5, 7 to link 1; blades 2, 4, 6, 8 to link 2
• 4 links: blades 1, 5 to link 1; 2, 6 to link 2; 3, 7 to link 3; 4, 8 to link 4

HIFs are statically pinned by the system to individual fabric ports. Only 1, 2 or 4 links are supported for pinning – 3 is not a valid pinning configuration. On a link failure, only the blades pinned to that NIF are brought down.
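The slot-to-link mapping above is a fixed function of the active link count. As a minimal sketch (the mapping comes from the table above; the function name is my own, not a UCS API):

```python
def pinned_link(blade_slot: int, active_links: int) -> int:
    """Return the fabric link (1-based) a blade slot is pinned to,
    following the UCS 2104 static pinning table. Only 1, 2 or 4
    active links are valid pinning configurations (3 is not)."""
    if active_links not in (1, 2, 4):
        raise ValueError("only 1, 2 or 4 links are valid for pinning")
    # Slot N pins to link ((N-1) mod active_links) + 1, which
    # reproduces the table: 2 links -> odd slots on link 1, even on 2;
    # 4 links -> slots 1,5 on link 1; 2,6 on 2; 3,7 on 3; 4,8 on 4.
    return (blade_slot - 1) % active_links + 1
```

This also makes the failure behaviour on the next slides easy to reason about: the set of blades affected by one NIF failure is exactly the preimage of that link under this function.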

Page 14: BRKCOM-2003

Static Pinning (IOM–FI) – Individual Links

[Figure: Blades 1–8 attach to the IOM's server ports; the IOM's fabric ports attach to the Fabric Interconnect as individual links.]

• Static pinning is done by the system, dependent on the number of fabric ports
• 1, 2 and 4 (2^x) are the valid link counts for initial pinning

Page 15: BRKCOM-2003

Fabric Port Failure – Individual Links

[Figure: the same Blade 1–8 topology with one fabric port failed between the IOM and the Fabric Interconnect.]

• The HIFs pinned to the failed link are brought down
• Other blades are unaffected

Page 16: BRKCOM-2003

Re-ack of Chassis – Individual Links

[Figure: the same Blade 1–8 topology after re-acknowledgement, with one fabric link left unused.]

• Blades are re-pinned to a valid number of links – 1, 2 or 4
• HIFs are brought down and back up for re-pinning
• May result in unused links
• Adding links requires re-acknowledgement of the chassis

Page 17: BRKCOM-2003

Ethernet Switching Modes

Page 18: BRKCOM-2003

Switching Modes: End Host

[Figure: 6100 A (Fabric A) in End Host mode – MAC learning on the server ports (vEth 1 and vEth 3 for vNIC 0 of Servers 1 and 2, VLAN 10), local L2 switching, and no Spanning Tree toward the LAN.]

• Server vNIC pinned to an uplink port
• No Spanning Tree Protocol – reduces the CPU load on upstream switches and the control-plane load on the 6100
• Simplified upstream connectivity – UCS connects to the LAN like a server, not like a switch
• Maintains a MAC table for servers only – eases MAC-table sizing in the access layer
• Allows multiple active uplinks per VLAN – doubles effective bandwidth vs. STP
• Prevents loops by preventing uplink-to-uplink switching
• Completely transparent to the upstream LAN
• Traffic on the same VLAN is switched locally

Page 19: BRKCOM-2003

End Host Mode – Unicast Forwarding

• Server-to-server traffic on the same VLAN is locally switched
• Uplink-to-uplink traffic is not switched
• Each server link is pinned to an uplink port or port channel
• Network-to-server unicast traffic is forwarded to the server only if it arrives on the pinned uplink port – the Reverse Path Forwarding (RPF) check
• A packet received on an uplink port with a source MAC belonging to a server is dropped – the déjà-vu check

[Figure: the 6100's uplink ports applying the RPF and déjà-vu checks to traffic from the LAN, with vEth 1 and vEth 3 serving Server 1 and Server 2 (vNIC 0).]
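The two checks can be sketched as a single filter on frames arriving from the LAN. This is a simplified software model with hypothetical names; in UCS the checks run in the Fabric Interconnect hardware:

```python
def accept_from_uplink(dst_mac, src_mac, arrival_uplink,
                       server_macs, pinning):
    """Decide whether an End Host Mode FI delivers a frame received
    on an uplink to a server. `server_macs` maps server MAC -> vEth;
    `pinning` maps vEth -> its pinned uplink."""
    # Deja-vu check: a frame sourced by one of our own servers must
    # never come back in from the LAN; drop it to prevent loops.
    if src_mac in server_macs:
        return False
    # RPF check: deliver unicast only if it arrives on the uplink
    # the destination server's vEth is pinned to.
    veth = server_macs.get(dst_mac)
    if veth is None:
        return False  # unknown unicast is not flooded in EHM
    return pinning[veth] == arrival_uplink
```

Together the two checks are what let every uplink forward on every VLAN without STP: a frame can never be switched back out of (or re-accepted from) the LAN side.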

Page 20: BRKCOM-2003

End Host Mode – Multicast Forwarding

• Broadcast traffic is pinned to exactly one uplink port (or port channel); it is dropped when received on any other uplink
• All multicast groups are pinned to that same uplink port (or port channel)
• Server-to-server multicast traffic is locally switched
• The RPF and déjà-vu checks also apply to multicast traffic

[Figure: the 6100 with one uplink acting as the broadcast listener for all VLANs; broadcast frames arriving on the other uplinks are dropped.]
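The designated-receiver behaviour can be modelled in a few lines. The election criterion here is illustrative only (UCS chooses one uplink internally); the function names are mine:

```python
def elect_designated_receiver(uplinks):
    """Pick one operational uplink to receive broadcast/multicast
    for all VLANs. `uplinks` maps uplink name -> operational flag.
    The lowest-named link is chosen purely for illustration."""
    up = sorted(name for name, ok in uplinks.items() if ok)
    if not up:
        raise RuntimeError("no operational uplinks")
    return up[0]

def accept_broadcast(arrival_uplink, uplinks):
    # A broadcast is accepted only on the designated receiver;
    # copies arriving on any other uplink are dropped.
    return arrival_uplink == elect_designated_receiver(uplinks)
```

Note how a failure of the designated link simply re-runs the election, which matches the slide's point that the choice is one uplink at a time, not per VLAN (until the 2.0 disjoint-L2 feature covered later).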

Page 21: BRKCOM-2003

Switching Modes: Switch

• The Fabric Interconnect behaves like a normal Layer 2 switch
• Server vNIC traffic follows VLAN forwarding
• Spanning Tree Protocol runs on the uplink ports per VLAN – Rapid PVST+
• Configuration of STP parameters (bridge priority, hello timers, etc.) is not supported
• VTP is not currently supported
• MAC learning and ageing happen on both the server and uplink ports, as in a typical Layer 2 switch
• Upstream links are blocked per VLAN by Spanning Tree logic

[Figure: 6100 A (Fabric A) switching VLAN 10 with MAC learning on all ports; one uplink toward the LAN's STP root is blocked.]

Page 22: BRKCOM-2003

Fabric Failover

Page 23: BRKCOM-2003

Fabric Failover

• The fabric provides NIC failover capability, chosen when defining a service profile
• Traditionally this was done with a NIC bonding driver in the OS
• Provides failover for both unicast and multicast traffic
• Works with any OS

[Figure: vNIC 1 in the OS/hypervisor/VM, backed by a Cisco VIC (M81KR) or Menlo (M71KR) adapter; physical 10GE cables run through each IOM, and a virtual cable presents vEth 1 on both 6100-A and 6100-B.]

Page 24: BRKCOM-2003

Fabric Failover with a Bare-Metal OS

• No OS NIC-teaming configuration required – a simple single-NIC design
• Fabric failures are hidden from the OS – the NIC stays up
• The 6100 sends a gratuitous ARP on failover
• Everything to gain, nothing to lose

[Figure: a Windows/Linux server with a Cisco VIC or Menlo adapter; vNIC MAC A is implicitly known on 6100-A and learned on 6100-B after failover.]

Slam dunk: Cisco UCS simplifies the redundancy.
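The sequence the slide describes – move the vEth to the surviving fabric and notify the LAN – can be modelled as a toy state machine. Names and structure are my own; the real mechanism lives in the Fabric Interconnect and is invisible to the OS:

```python
class FabricFailover:
    """Toy model of UCS fabric failover for one vNIC: the OS sees a
    single NIC that stays up, while the fabric moves the vEth from
    fabric A to B and issues a gratuitous ARP so upstream switches
    re-learn the MAC on the new fabric's uplink."""

    def __init__(self, mac):
        self.mac = mac
        self.active_fabric = "A"
        self.garps_sent = []

    def fabric_failed(self, fabric):
        if fabric != self.active_fabric:
            return  # standby fabric lost: nothing for this vNIC to do
        self.active_fabric = "B" if fabric == "A" else "A"
        # The 6100 sends a gratuitous ARP on the new fabric's uplink
        # so the LAN re-points MAC A at fabric B; the server NIC
        # never flaps.
        self.garps_sent.append((self.mac, self.active_fabric))
```

The key design point is that recovery is driven entirely from the fabric side: no bonding driver, no OS event, just a GARP to converge the upstream MAC tables.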

Page 25: BRKCOM-2003

Fabric Failover with Hyper-V

• The Hyper-V soft switch uses only one NIC
• Fabric Failover provides the missing redundancy
• Everything to gain, nothing to lose

[Figure: a server with a Cisco VIC or Menlo adapter running the Hyper-V soft switch; VMs with MAC A and MAC B sit behind vNIC 1 (MAC C), with their MACs implicitly known and learned on 6100-A/6100-B.]

Slam dunk: Cisco UCS provides the missing redundancy.

Page 26: BRKCOM-2003

Fabric Failover with Hypervisor Pass-Through

• Default setting – cannot be disabled
• A single adapter per VM, with redundancy
• Load sharing – dynamic vNICs alternate fabrics round-robin

[Figure: a server with a Cisco VIC running VM-FEX; VM 1 (MAC A) on dynamic vNIC 1 and VM 2 (MAC B) on dynamic vNIC 2, with implicit and learned MACs split across 6100-A and 6100-B.]

Page 27: BRKCOM-2003

Ethernet Switching Modes Recommendations

Page 28: BRKCOM-2003

Scalability

• Spanning Tree Protocol is not run in EHM, so the control plane is not burdened by it
• EHM is least disruptive to the upstream network – BPDU Filter/Guard and PortFast can be enabled upstream
• MAC learning does not happen on uplink ports in EHM; the current MAC address limit on the 6100 is ~14.5K

Recommendation: End Host Mode

Page 29: BRKCOM-2003

Pinning for Deterministic Traffic Flows

• Dynamic pinning – server ports are pinned to an uplink port or port channel automatically
• Static pinning – specific pin groups are created and associated with adapters
• Static pinning allows traffic management where required for certain applications or servers

[Figure: 6100 A (Fabric A) with the "Oracle" PinGroup defined on one uplink and applied to the Oracle server's vNIC 0 (vEth 3), while Server X's vNIC 0 (vEth 1) is pinned dynamically.]

Page 30: BRKCOM-2003

Fabric Failover

• Fabric Failover is only applicable in EHM
• NIC teaming software is required to provide failover in Switch mode

[Figure: the fabric-failover topology again – vNIC 1 backed by the VIC/Menlo adapter, physical cables through both IOMs, and a virtual cable to vEth 1 on 6100-A and 6100-B.]

Page 31: BRKCOM-2003

Active/Active Use of Uplinks

[Figure: End Host Mode – FI-A and FI-B forward on all border ports (active/active) toward the LAN's primary and secondary STP roots. Switch Mode – Spanning Tree blocks part of the border ports, leaving active/blocking uplinks.]

Recommendation: End Host Mode

Page 32: BRKCOM-2003

Disjoint L2 Upstream

• EHM is built on the premise that the upstream L2 network is NOT disjoint
• Incoming broadcast/multicast traffic is received on only one uplink for ALL VLANs

Recommendation: Switch Mode

[Figure: a Fabric Interconnect with border ports into separate Backup and Production external LANs, and a single designated broadcast receiver.]

Page 33: BRKCOM-2003

Application-Specific Scenarios

• Certain applications, such as MS NLB in unicast mode, require unknown-unicast flooding, which is not done in EHM
• Certain network topologies provide a better network path out of the Fabric Interconnect because of STP root placement and the HSRP L3 hop
• Switch Mode is the "catch-all" for these scenarios

Recommendation: Switch Mode

Page 34: BRKCOM-2003

Upstream Connectivity - Storage

Page 35: BRKCOM-2003

SAN "End Host" NPV Mode – N-Port Virtualisation Forwarding

• The Fabric Interconnect operates in N_Port proxy mode (not FC switch mode), simplifying management
• The SAN switch sees the Fabric Interconnect as an FC end host with many N_Ports and many FC IDs assigned
• Server-facing ports function as F-proxy ports
• Each server vHBA is pinned to an FC uplink in the same VSAN, selected round-robin
• Provides multiple FC end nodes through one F_Port on an FC switch
• Eliminates the FC domain on the UCS Fabric Interconnect
• One VSAN per F_Port (multi-vendor); F_Port trunking and channelling with MDS and Nexus 5K

[Figure: Servers 1 and 2 (VSAN 1) with vHBA 0/1 mapped to vFC 1 and vFC 2 on 6100-A and 6100-B; the N_Proxy uplinks FLOGI/FDISC into NPIV-enabled F_Ports on SAN A and SAN B.]

Page 36: BRKCOM-2003

SAN "End Host" NPV Mode – F_Port Channelling and Trunking with MDS / Nexus 5000

• F_Port channelling and trunking from an MDS or Nexus 5000 to UCS
• The FC port channel behaves as one logical uplink and can carry all VSANs (trunk)
• The UCS Fabric Interconnect remains in NPV end-host mode
• Each server vHBA is pinned to an FC port channel and has access to the bandwidth of any member link
• Load balancing is per flow, based on the FC Exchange ID

[Figure: Server 1 (VSAN 1) and Server 2 (VSAN 2) with vHBAs mapped to vFCs on 6100-A and 6100-B; NPIV-enabled F_Port channels trunk VSANs 1–2 up to SAN A and SAN B.]
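Per-flow load balancing keyed on the FC exchange can be sketched as a hash over the port-channel members. The hash function below is illustrative, not the hardware's actual algorithm, and the function name is my own:

```python
def member_for_exchange(s_id: int, d_id: int, ox_id: int,
                        members: list) -> str:
    """Pick a port-channel member link for one FC exchange. Every
    frame of the same exchange (same S_ID/D_ID/OX_ID) hashes to the
    same member, so a flow stays in order while different exchanges
    spread across the channel's bandwidth."""
    if not members:
        raise ValueError("empty port channel")
    return members[hash((s_id, d_id, ox_id)) % len(members)]
```

This is why the slide calls the load balancing "per flow": ordering is preserved within an exchange, while a busy vHBA with many concurrent exchanges can still use every member link.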

Page 37: BRKCOM-2003

SAN FC Switch Mode – Direct-Attach FC & FCoE Storage to UCS

• The UCS Fabric Interconnect behaves like an FC fabric switch
• Storage ports can be FC or FCoE
• A light subset of FC switching features: select the storage ports, set the VSAN on the storage ports, default zoning per VSAN
• There are no zoning configuration inputs in UCSM; a connection to an external FC switch is required, with zoning configured on an MDS and pushed to UCS
• The Fabric Interconnect consumes an FC domain ID

[Figure: 6100-A and 6100-B in FC switch mode with direct-attached FC and FCoE storage on F_Ports, TE_Port uplinks to MDS switches in the SAN, and servers in VSANs 1 and 2.]

Page 38: BRKCOM-2003

Direct-Attach IP Storage to UCS – NAS on an "Appliance Port" (Active/Standby)

• NAS IP storage (NFS, iSCSI) attaches to an "appliance port"
• With a single-controller NAS head, each NAS controller port owns the I/O for a given volume while the other port provides failover
• Controller interfaces are active/standby for a given volume when attached to separate 6100s, and active/active when each handles its own volumes
• Sub-optimal forwarding is possible if you are not careful – ensure vNICs access volumes local to their fabric

[Figure: Servers 1 and 2 accessing Volume A through vEth 1/vEth 2 on 6100-A and 6100-B; an appliance port connects to the NAS controller owning Volumes A and B, and uplink ports connect to the LAN.]

Page 39: BRKCOM-2003

Upstream Connectivity - Ethernet

Page 40: BRKCOM-2003

End Host Mode – Individual Uplinks

Dynamic re-pinning of failed uplinks:

[Figure: 6100 A (Fabric A) pins vEth 1 and vEth 3 to individual uplinks; when an uplink fails, vEth 1 is re-pinned to a surviving uplink in under a second and the server vNIC stays up. The ESX host runs a vSwitch/N1K with VMs (MAC B, MAC C) behind vNIC 0 (MAC A).]

• All uplinks forwarding for all VLANs
• GARP-aided upstream convergence
• No STP
• Sub-second re-pinning
• No server NIC disruption

Page 41: BRKCOM-2003

End Host Mode – Port Channel Uplinks (Recommended)

[Figure: 6100 A (Fabric A) pins vEth 1 and vEth 3 to a port channel; a member-link failure causes no re-pinning and no GARPs – the NIC stays up with sub-second convergence.]

• More bandwidth per uplink, with per-flow uplink diversity
• No server NIC disruption
• Fewer GARPs needed
• Faster bi-directional convergence
• Fewer moving parts

Page 42: BRKCOM-2003

End Host Mode – vPC Uplinks (Recommended)

vPC uplinks hide uplink and switch failures from server vNICs.

[Figure: 6100 A pins vEth 1 and vEth 3 to a vPC port channel spanning a vPC domain of two upstream switches; a link or switch failure causes no disruption and no GARPs are needed.]

• More bandwidth per uplink, with per-flow uplink diversity
• No server NIC disruption
• Switch and link resiliency
• No GARPs
• Faster bi-directional convergence
• Fewer moving parts

Page 43: BRKCOM-2003

Inter-Fabric Traffic Example (1) – Cisco UCS VM-FEX (PTS)

• Dynamic vNIC 1: primary fabric A, backup fabric B
• Dynamic vNIC 2: primary fabric B, backup fabric A
• VM1 and VM2 are both on VLAN 10

VM1-to-VM2 traffic: (1) leaves Fabric A, (2) gets L2-switched upstream, (3) enters Fabric B.

[Figure: an ESX host using VM-FEX; VM1's dynamic vNIC goes to 6100 A (EHM) and VM2's to 6100 B (EHM), with the L2 switching between fabrics happening upstream.]

Page 44: BRKCOM-2003

Inter-Fabric Traffic Example (2) – vSwitch / N1K with MAC Pinning

• vNIC 0 is on Fabric A; vNIC 1 is on Fabric B
• VM1 is pinned to vNIC 0 and VM4 to vNIC 1; both are on VLAN 10

VM1-to-VM4 traffic: (1) leaves Fabric A, (2) gets L2-switched upstream, (3) enters Fabric B.

[Figure: ESX hosts 1 and 2, each running a vSwitch/N1K with MAC pinning; VMs 1–2 and 3–4 are spread across vNIC 0 (to 6100 A, EHM) and vNIC 1 (to 6100 B, EHM).]

Page 45: BRKCOM-2003

Singly Attached Uplinks

[Figure: 6100 A and 6100 B (both EHM), each attached by a single uplink to one of 7K1 and 7K2.]

1. Traffic destined for a vNIC on the red uplink enters 7K1
2. The same applies vice-versa for the green uplink
3. All inter-fabric traffic traverses the Nexus 7000 peer link

Page 46: BRKCOM-2003

Recommended Topology

[Figure: 6100 A and 6100 B (both EHM) with vPC uplinks into a vPC domain formed by 7K1 and 7K2 (vPC peer link plus keepalive) at the L3 aggregation layer.]

Page 47: BRKCOM-2003

Recommended Topology without vPC

• Connect the 6100s, in End Host Mode, to the L3 aggregation switches
• Use port channels of 4 x 10G (or more) uplinks per 6100
• All UCS uplinks forward, with no STP influence on the topology

[Figure: 6100 A and 6100 B (both EHM) with port-channel uplinks to the aggregation switches.]

Page 48: BRKCOM-2003

Adapter Offerings

Page 49: BRKCOM-2003

Gen1 "Compatibility" Adapters

• NIC and HBA ASICs from QLogic, Emulex and Intel
• Dual 10GbE/FCoE ports
• Cisco "Menlo" ASIC: IEEE DCB, VN-Tag, Fabric Failover
• Support for native drivers and utilities; customer-certified stacks
• 21 W; 4 Gbps FC; vNIC Fabric Failover

[Figure: adapter block diagram – the PCIe bus feeding FC and 10GbE ASICs behind the Menlo ASIC, with dual 10GbE/FCoE ports.]

Page 50: BRKCOM-2003

Gen2 "Compatibility" Adapters

Emulex M72KR-E (CNA):
• Single Emulex ASIC design
• Low power – 13 W
• 8 Gbps FC
• Emulex drivers for Ethernet & FC
• No vNIC Fabric Failover

QLogic M72KR-Q (CNA):
• Single QLogic ASIC design
• Lowest power – 4.5 W
• 8 Gbps FC
• QLogic drivers for Ethernet & FC
• No vNIC Fabric Failover

Page 51: BRKCOM-2003

Gen2 "Cost" Adapters

Broadcom BCM57711:
• iSCSI acceleration – iSCSI offload (HBA), iSCSI boot (future)
• TCP offload engine (TOE)
• Low-cost 10GE
• No vNIC Fabric Failover

Intel M61KR-I:
• SR-IOV compatible
• iSCSI acceleration
• PXE boot
• VMDq
• IEEE DCB
• FCoE in software (future)
• No vNIC Fabric Failover

Page 52: BRKCOM-2003

Cisco Virtual Interface Card (VIC) ("Palo")

• Converged network adapter with FCoE in hardware
• For single-OS and VM deployments
• Virtualises in hardware and is PCIe compliant
• Up to 58 distinct PCIe devices (hardware capable of 128) – user-definable Ethernet vNICs and FC vHBAs
• Acts as a second-tier fabric extender
• PCIe x16; dual 10GbE/FCoE; 18 W

[Figure: the VIC presenting user-definable vNICs (Eth 0, FC 1, FC 2, … Eth 58) over PCIe x16 and dual 10GbE/FCoE uplinks.]

Page 53: BRKCOM-2003

Cisco Virtual Interface Card (VIC) ("Palo") – Virtualisation

• For virtualisation environments: bypasses the vSwitch to deliver VN-Link in hardware
• Tight integration with VMware vCenter – each vNIC appears as a hardware DVS port
• QoS: eight CoS-based queues and vNIC bandwidth guarantees
• PCIe x16; dual 10GbE/FCoE; 18 W

[Figure: the same VIC diagram with user-definable vNICs Eth 0 … Eth 58.]

Page 54: BRKCOM-2003

Cisco VIC: VM-FEX Logical View

[Figure: VMs 1 through 50 map through the Cisco VIC (vhba0/vhba1 plus per-VM vNICs) and IOM A / IOM B to vEth1–vEth8 and vfc 1 / vfc 2 on Fabric Interconnects A and B.]

Page 55: BRKCOM-2003

UCS Generation - 2 Hardware

Page 56: BRKCOM-2003

Additions to the UCS Fabric Portfolio – Next-Gen UCS Components and Capabilities

• 6248UP Fabric Interconnect – 2x fabric capacity, 40% latency reduction, Unified Ports
• 2208XP IO Module – 2x blade-chassis bandwidth, 160 Gb/s per chassis
• VIC 1280 – 4x blade-server bandwidth, dual 40 Gb/s per card, 256 virtual interfaces

UCS version 2.0 platform features:
• L2 disjoint networks (e.g. Production VLANs 10,20, Public VLANs 31,32 and Backup VLANs 40,41 on separate uplinks, both FIs in End Host mode) – more flexible designs and reduced networking hardware
• iSCSI boot support in UCSM – increased customer choice
• VM-FEX for Red Hat KVM – additional hypervisor support

Page 57: BRKCOM-2003

UCS 6248UP Fabric Interconnect

Feature details:
• PID: UCS-FI-6248UP
• Double the port density in 1RU
• The 6248UP chassis has 32 fixed Unified Ports plus one expansion-module slot for 16 more Unified Ports
• Dual power supplies are standard, for both AC (at FCS) and -48V DC (July)
• Redundant front-to-back airflow

Customer benefit: the highest density and performance for the Unified Computing fabric.

Page 58: BRKCOM-2003

UCS 6248: Unified Ports – Dynamic Port Allocation: Lossless Ethernet or Fibre Channel

Use cases:
• Each port runs either native Fibre Channel or lossless Ethernet (1/10GbE, FCoE, iSCSI, NAS)
• Flexible LAN & storage convergence based on business needs
• Service can be adjusted based on the demand for specific traffic

Benefits:
• Simplifies the switch purchase – removes the port-ratio guesswork
• Increases design flexibility
• Removes protocol-specific bandwidth bottlenecks

Page 59: BRKCOM-2003

UCS 6248: Unified Ports (continued)

Base card – 32 Unified Ports; GEM – 16 Unified Ports.

• Ports on the base card or the Unified Port GEM module can be either Ethernet or FC
• Only a contiguous set of ports can be configured as Ethernet or FC, and the Ethernet ports must come first
• Port-type changes take effect after the next reboot of the switch (for base-board ports) or a power-off/on of the GEM (for GEM unified ports)
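The "contiguous, Ethernet-first" rule can be expressed as a small validity check. This is a sketch; the port-type strings and function name are my own, not UCSM's:

```python
def valid_unified_port_layout(ports: list) -> bool:
    """Check a proposed port-type layout for a 6248UP base card or
    GEM. `ports` is a list of "eth"/"fc", one entry per port in
    order. The only legal shape is a contiguous run of Ethernet
    ports followed by a contiguous run of FC ports (either run may
    be empty)."""
    first_fc = next((i for i, p in enumerate(ports) if p == "fc"),
                    len(ports))
    # Once the first FC port appears, every later port must be FC:
    # that forbids eth/fc interleaving and FC-before-Ethernet.
    return all(p == "fc" for p in ports[first_fc:])
```

In practice this is why the split is usually described as a single "slider" across the module: you pick one boundary, and a change of boundary takes effect only after the reboot/power-cycle noted above.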

Page 60: BRKCOM-2003

UCS 2208XP I/O Module

Feature details:
• PID: UCS-IOM-2208XP
• Double the uplinks: 8x 10GE uplinks from each IOM/FEX, for 160 Gbps total per chassis
• Quadruple the downlinks: 32x 10GE in total, 4x 10GE from each IOM/FEX to each blade slot*, for 80 Gbps per server slot* (*requires the VIC 1280 for full server bandwidth)
• Support for 8 egress CoS queues
• Lower latency

Customer benefits: double the uplink bandwidth to the fabric, quadruple the downlink bandwidth to the server slots, lower latency and better QoS.

Page 61: BRKCOM-2003

IOM to Fabric Interconnect Port Pinning – Server-to-Fabric Port Pinning Configurations

• 160 Gb, discrete mode (6100 to 2208, or 6200 to 2208): each of the UCS 5108's blade slots 1–8 is pinned to an individual fabric link
• 160 Gb, port-channel mode (6200 to 2208 only): the fabric links form one port channel shared by all eight slots

[Figure: two UCS 5108 chassis diagrams contrasting discrete per-slot pinning with a single shared port channel.]

Page 62: BRKCOM-2003

UCS 1280 VIC

Feature details:
• Dual 4x 10 GE port channels to a single server slot (80 Gb per host)
• Host connectivity: PCIe Gen2 x16
• Hardware capable of 256 PCIe devices (OS restrictions apply)
• PCIe virtualisation is OS-independent (same as the M81KR); a single OS driver image covers both the M81KR and the 1280 VIC
• Fabric Failover supported
• Ethernet hash inputs: source MAC address, destination MAC address, source port, destination port, source IP address, destination IP address and VLAN
• FC hash inputs: source MAC address, destination MAC address, FC S_ID, D_ID and OX_ID

[Figure: the UCS 1280 VIC connected by dual 4x 10 GE port channels to the UCS 2208 IOMs on side A and side B, presenting up to 256 PCIe devices.]
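The Ethernet hash inputs listed above form a seven-field flow key. A toy model (the combining hash is illustrative only, not the VIC's actual algorithm):

```python
def eth_uplink_index(src_mac, dst_mac, src_port, dst_port,
                     src_ip, dst_ip, vlan, n_links):
    """Map one Ethernet flow onto one of n_links port-channel member
    links using the seven hash inputs the 1280 VIC considers.
    Frames of the same flow always take the same member link, so
    per-flow ordering is preserved while flows spread across the
    4x 10 GE channel."""
    key = (src_mac, dst_mac, src_port, dst_port, src_ip, dst_ip, vlan)
    return hash(key) % n_links
```

The FC hash works the same way but over the FC fields (S_ID, D_ID, OX_ID and the MACs), which is what gives a single host access to the full channel bandwidth without reordering any one flow.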

Page 63: BRKCOM-2003

UCS Fabric Component Interoperability

Complete hardware interoperability between Gen 1 and Gen 2:

Fabric Interconnect | IOM  | Adapter      | Minimum software version
6100                | 2104 | UCS M81KR    | UCSM 1.4(1) or earlier
6100                | 2208 | UCS M81KR    | UCSM 2.0
6100                | 2104 | UCS 1280 VIC | UCSM 2.0
6100                | 2208 | UCS 1280 VIC | UCSM 2.0
6200                | 2104 | UCS M81KR    | UCSM 2.0
6200                | 2208 | UCS M81KR    | UCSM 2.0
6200                | 2104 | UCS 1280 VIC | UCSM 2.0
6200                | 2208 | UCS 1280 VIC | UCSM 2.0

Page 64: BRKCOM-2003

End Host Mode – Disjoint L2 Domains

• UCS version 2.0 and beyond; hardware/FI independent
• The ability to selectively assign VLANs to uplinks (no overlapping VLANs)
• Pinning decisions are based on border-port and vNIC VLAN membership
• A designated broadcast/multicast receiver is allocated per VLAN rather than globally
• A maximum of 31 disjoint Layer 2 domains is supported

[Figure: both FIs in End Host mode with separate uplinks for Production VLANs 10,20, Public VLANs 31,32 and Backup VLANs 40,41.]

Recommendation: End Host Mode
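The core of the VLAN-aware pinning decision – match a vNIC's VLANs against each uplink's allowed set – can be sketched as (function and names are hypothetical, not UCSM API):

```python
def eligible_uplinks(vnic_vlans, uplink_vlans):
    """Return the uplinks whose allowed-VLAN set covers every VLAN
    of the vNIC -- the candidates a pinning decision may choose from
    in a disjoint-L2 design. `uplink_vlans` maps uplink name -> set
    of VLANs assigned to it (VLANs must not overlap across uplinks)."""
    return [name for name, vlans in uplink_vlans.items()
            if set(vnic_vlans) <= vlans]
```

A vNIC spanning VLANs from two disjoint domains gets an empty candidate list, which is exactly why the feature forbids overlapping VLAN assignments across uplinks.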

Page 65: BRKCOM-2003

Q & A

Page 66: BRKCOM-2003

Complete Your Online Session Evaluation

Complete your session evaluation:
• Directly from your mobile device, by visiting www.ciscoliveaustralia.com/mobile and logging in with your username and password
• At one of the Cisco Live internet stations located throughout the venue
• By opening a browser on your own computer to access the Cisco Live onsite portal

Don't forget to activate your Cisco Live Virtual account for access to all session materials, communities, and on-demand and live activities throughout the year. Activate your account at any internet station or visit www.ciscolivevirtual.com.

Page 67: BRKCOM-2003
