Next Generation Data Center
BRKDCT-2002
René Raeber, Datacenter Architect & IEEE 802
[email protected]
May 11-13, 2010, São Paulo, Brazil
Housekeeping
• We value your feedback: don't forget to complete your session evaluations after each session and to complete the Overall Conference Evaluation
• Visit the World of Solutions
• Please remember this is a non-smoking venue!
• Please switch off your mobile phones
• Please make use of the recycling bins provided
• Please remember to wear your badge at all times
Agenda
• Unified Fabrics
• VN-Link and Network Interface Virtualization
• Unified Computing System
• Foundation for Private Cloud Infrastructure
• Innovations for Cloud and Virtualization
• Summary
Historical Adoption of High Speed Ethernet
Standard Update
• IEEE 802.3ba D3.0 released (it's a done deal)
We are here.
High Speed Ethernet Adoption on Servers
Server Virtualization is Changing the Game
• Virtual networks are growing faster and larger than physical ones, and network admins are getting involved in virtual interface deployments. The network access layer needs to evolve to support consolidation and mobility.
• Multi-core computing is driving virtualization and new networking needs: driving SAN attach rates higher (10% → 40% → growing) and driving users to plan now for 10GE server interfaces.
• Virtualization enables the promise of blades: 10GE and FC are the highest-growth technologies within blades, and virtualization plus consolidated I/O removes blade limitations.
• Network virtualization enables CPU- and I/O-intensive workloads to be virtualized, enabling broader adoption of x86-class servers.
10GbE Drivers in the Datacenter
• Multi-core CPU architectures allow bigger and multiple workloads on the same machine
• Server virtualization drives the need for more bandwidth per server due to server consolidation
• The growing need for network storage drives demand for higher network bandwidth to the server
In short: multi-core CPUs and server virtualization are driving the demand for higher-bandwidth network connections.
Unified Fabric Overview: Wire Once, Access to All Storage / Services
[Diagram: end nodes (MAC A, MAC B, MAC C) attach over VM-optimized networking to a Nexus 5000, a 10GE L2 non-blocking, lossless, low-latency switch, which connects the Ethernet LAN and SAN fabrics A and B; ecosystem partners surround the solution]
Standards:
• Priority Flow Control, IEEE 802.1Qbb (PFC)
• Bandwidth Management, IEEE 802.1Qaz (ETS)
• Congestion Management, IEEE 802.1Qau (QCN*)
• Data Center Bridging Exchange Protocol (DCBX)
• L2 Multipath (L2MP)
• Lossless Service
A unified fabric for LAN, SAN and HPC/IPC: virtualization, wire-speed 10GE, Data Center Ethernet, FCoE.
Why Converged Network Adapters (CNA) with FCoE?
• Fewer adapters, instead of separate NICs, HBAs and HCAs
• Enables lossless operation for Ethernet
• Enables all services on one adapter type
• Blade servers have a limited number of interfaces
[Diagram: separate FC HBAs (FC traffic) and NICs (LAN, management and backup traffic) collapse onto two CNAs; all traffic goes over 10GE]
FCoE Benefits
FC over Ethernet (FCoE):
• Mapping of FC frames over Ethernet
• Enables FC to run on a lossless Data Center Ethernet network
• Wire the server once
• Fewer cables and adapters
• Software provisioning of I/O
• Interoperates with existing SANs
• No gateway: stateless
• Standardized June 3, 2009
[Diagram: Fibre Channel carried over Ethernet]
Fibre Channel over Ethernet: How It Works
• Direct mapping of Fibre Channel over Ethernet
• Leverages standards-based extensions to Ethernet (DCE) to provide reliable I/O delivery: Priority Flow Control (PFC) and the Data Center Bridging Capability eXchange Protocol (DCBX)
[Diagram (a), protocol layers: the FC-4/FC-3/FC-2 layers are preserved and ride over an FCoE mapping onto the Ethernet MAC and PHY, which replace FC-1/FC-0]
[Diagram (b), frame encapsulation: Ethernet header | SOF | FC frame | EOF | Ethernet FCS]
FCoE traffic shares the 10GE lossless Ethernet link (DCE) with other networking traffic.
FCoE Enablers
• 10Gbps Ethernet
• Lossless Ethernet: matches the lossless behavior guaranteed in FC by buffer-to-buffer (B2B) credits
• Ethernet jumbo frames: the maximum FC frame payload is 2112 bytes
[Frame layout: Ethernet header | FCoE header | FC header | FC payload | CRC | EOF | FCS. The encapsulated FC frame is the same as a physical FC frame; the FCoE header carries control information (version, ordered sets SOF and EOF); the whole is a normal Ethernet frame with Ethertype = FCoE]
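As a rough illustration of the encapsulation above, here is a minimal Python sketch that wraps a raw FC frame in an Ethernet frame with the FCoE Ethertype (0x8906). The 14-byte FCoE header layout and the default SOF/EOF code points are assumptions drawn from the FC-BB-5 frame format, not values shown on this slide:

```python
# Minimal sketch, not a production encoder: wrap a raw FC frame in an
# Ethernet frame carrying the FCoE Ethertype. Header/trailer layout and
# SOF/EOF defaults are assumptions based on the FC-BB-5 frame format.
import struct

FCOE_ETHERTYPE = 0x8906

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Ethernet header | FCoE header (version/reserved + SOF) | FC frame | EOF."""
    eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_hdr = bytes(13) + bytes([sof])  # version + reserved bytes, then SOF
    fcoe_trl = bytes([eof]) + bytes(3)   # EOF + reserved; FCS is added by the NIC
    return eth_hdr + fcoe_hdr + fc_frame + fcoe_trl
```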
Encapsulation Technologies
[Diagram: beneath the operating system / applications sits the SCSI layer, which can ride several stacks: iSCSI over TCP/IP over Ethernet; FCP over FCIP or iFCP over TCP/IP; FCP over FCoE over Ethernet; native FCP over FC; and SRP over InfiniBand. Typical link speeds: FC at 1, 2, 4, 8 or 10 Gbps; Ethernet at 1, 10 or more Gbps; InfiniBand at 10, 20 or 40 Gbps]
Encapsulation Technologies: FCoE
[Diagram: OS / applications over the SCSI layer, over FCP, over FCoE, over Enhanced Ethernet at 1, 10 or more Gbps]
• The FCP layer is untouched
• Allows the same management tools as Fibre Channel
• Allows the same Fibre Channel drivers
• Allows the same multipathing software
• Simplifies certification with OSMs
• Evolution rather than revolution
FCoE & Cisco UCS
From ad hoc and inconsistent… to structured, but siloed, complicated and costly… to simple, optimized and automated.
Data Center Bridging Standards and Features: Overview

Feature | Benefit
Priority-based Flow Control (PFC), IEEE 802.1Qbb | Provides class-of-service flow control and the ability to support storage traffic
Enhanced Transmission Selection (ETS), IEEE 802.1Qaz | Groups classes of traffic into "service lanes" (CoS-based enhanced transmission selection)
Congestion Notification (BCN/QCN), IEEE 802.1Qau | End-to-end congestion management for the L2 network
Data Center Bridging Capability Exchange Protocol (DCBX), IEEE 802.1AB (LLDP) | Auto-negotiation of Enhanced Ethernet capabilities
L2 Multipath (L2MP) for unicast and multicast | Eliminates Spanning Tree for L2 topologies; utilizes full bisectional bandwidth with ECMP
Lossless Service | Provides the ability to transport various traffic types (e.g. storage, RDMA)
Link Level Flow Control
[Diagram: with Fibre Channel buffer-to-buffer credits, the receiver returns an R_RDY for each buffer freed and the transmitter stops when credits run out; with Ethernet PAUSE, the receiver sends a PAUSE frame to stop the transmitter from sending further frames]
Data Center Bridging Features: PFC
Priority-Based Flow Control (PFC)
• Enables lossless fabrics for each class of service
• PAUSE is sent per virtual lane when the buffer limit is exceeded
• Network resources are partitioned between VLs (e.g. input buffer and output queue)
• The switch behavior is negotiable per VL
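To make the per-priority PAUSE concrete, the sketch below builds an 802.1Qbb-style PFC frame in Python: a MAC control frame (Ethertype 0x8808, opcode 0x0101) with a class-enable vector and one timer per priority. The constants follow the published PFC frame format; treat the helper itself as illustrative only:

```python
# Sketch of an 802.1Qbb PFC frame: MAC control Ethertype 0x8808, opcode
# 0x0101, an 8-bit class-enable vector, then one pause timer per priority.
import struct

PFC_DMAC = bytes.fromhex("0180C2000001")  # MAC-control multicast address

def build_pfc_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
    """pause_quanta maps priority (0-7) -> pause time in 512-bit-time quanta."""
    enable_vector = 0
    timers = [0] * 8
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio
        timers[prio] = quanta
    body = struct.pack("!HH8H", 0x0101, enable_vector, *timers)
    return PFC_DMAC + src_mac + struct.pack("!H", 0x8808) + body

# Pause only the storage class (e.g. CoS 3) for the maximum time:
frame = build_pfc_frame(bytes(6), {3: 0xFFFF})
```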
Data Center Bridging Features: ETS
Enhanced Transmission Selection (ETS)
• Enables intelligent sharing of bandwidth between traffic classes, with control of bandwidth
• Being standardized in IEEE 802.1Qaz
• Also known as Priority Grouping
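A toy model can show what "intelligent sharing" means in practice: each class is guaranteed its configured share of the link, and bandwidth left idle by one class can be borrowed by the others. This is only a conceptual sketch of the ETS behavior described above, not the scheduler a switch actually implements:

```python
# Toy model of ETS: every class gets its configured share of a 10GE link,
# and bandwidth a class leaves idle is lent to the busier classes.
def ets_allocate(link_gbps, shares, demand):
    alloc = {c: min(demand[c], link_gbps * pct / 100) for c, pct in shares.items()}
    spare = link_gbps - sum(alloc.values())
    for c in sorted(shares, key=shares.get, reverse=True):
        extra = min(spare, demand[c] - alloc[c])
        alloc[c] += extra
        spare -= extra
    return alloc

# 50% LAN / 30% SAN / 20% IPC; the SAN class is idle, so LAN borrows it:
print(ets_allocate(10, {"LAN": 50, "SAN": 30, "IPC": 20},
                   {"LAN": 9.0, "SAN": 0.0, "IPC": 2.0}))
# -> {'LAN': 8.0, 'SAN': 0.0, 'IPC': 2.0}
```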
Data Center Bridging Features: Congestion Management
• Moves congestion out of the core to avoid congestion spreading
• Allows end-to-end congestion management
• On the standards track in 802.1Qau
Data Center Bridging Exchange
Devices need to discover the edge of the Enhanced Ethernet cloud:
• Each edge switch needs to learn that it is connected to a legacy switch.
• Servers need to learn whether or not they are connected to an Enhanced Ethernet device.
• Within the Enhanced Ethernet cloud, devices need to discover the capabilities of their peers.
DCBX utilizes the Link Layer Discovery Protocol (LLDP) and handles local operational configuration for each feature, as sketched below.
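Since DCBX rides on LLDP, its announcements travel as organizationally specific TLVs. The sketch below packs such a TLV (type 127) in Python; the IEEE 802.1 OUI is real, but the subtype and payload here are placeholders rather than a full DCBX implementation:

```python
# Sketch of the LLDP organizationally specific TLV (type 127) that DCBX
# announcements ride on: 9-bit length, 3-byte OUI, subtype, payload.
import struct

def lldp_org_tlv(oui: bytes, subtype: int, info: bytes) -> bytes:
    length = 4 + len(info)                    # OUI(3) + subtype(1) + info
    header = struct.pack("!H", (127 << 9) | length)
    return header + oui + bytes([subtype]) + info

# IEEE 802.1 OUI 00-80-C2; subtype/payload below are placeholder values.
tlv = lldp_org_tlv(bytes.fromhex("0080C2"), 0x09, b"\x00")
```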
Data Center Bridging Features: L2MP
Layer 2 Multi-Pathing
Phase 1, active-active uplinks:
• Eliminates STP on uplink bridge ports
• Allows multiple active uplinks from the switch to the network
• Prevents loops by pinning a MAC address to only one port
• Completely transparent to the next-hop switch
Phase 2, vPC (virtual port channel):
• The virtual switch retains the physical switches' independent control and data planes
• The virtual port channel mechanism is transparent to hosts or switches connected to the virtual switch
• STP remains as a fail-safe mechanism to prevent loops even in the case of a control-plane failure
Phase 3, L2 ECMP:
• Uses an ISIS-based topology
• Eliminates STP from the L2 domain
• Preferred path selection
• TRILL is the work-in-progress standard
We are here…
Let's Analyze a Tree Structure
Is this what we have in a data center?
[Diagram: a tree with the root at the top, then the branches, then the leaves; branch size decreases toward the leaves]
Spanning Tree and Over-subscription
• Branches of a tree never interconnect (no loops!)
• STP (Spanning Tree Protocol) uses the same approach to build a loop-free L2 logical topology
• The over-subscription ratio is exacerbated by the STP algorithm
[Diagram: 11 physical links are reduced to 5 logical links]
Traffic Pattern Determined by Place in the Network
Campus network:
• Mostly north-south traffic flows
• A high over-subscription ratio is typically acceptable for client-server applications
Data center:
• East-west traffic flows in addition to the north-south ones
• Demand for higher bandwidth and a lower over-subscription ratio is common, especially for server-to-server communication
New Requirements Driven By
• HPC clusters: potentially a large number of nodes in a single L2 domain; a closed environment with limited external connectivity
• Low-latency computing: based on the HPC cluster above, but with specific latency requirements; deterministic latency and relatively low jitter; a high-density switch is desired to reduce the latency introduced by network layers or "hops"
• General-purpose DCs: eliminate the L2 dependency on STP; provide a wider L2 domain and shorter cross-connects; improve bisectional bandwidth and network scalability
• Layer 2 Internet exchange points
• LAN extension across data centers
Modern DC: Rich Mesh
Modern DC: After Spanning Tree is Done
We need to go beyond this model
What About Crossbars, Fat Trees and Clos Networks?
A 4 x 4 Crossbar
The next few slides are derived from a presentation by Prof. Nick McKeown at Stanford University (EE384Y: Packet Switch Architectures, Part II: Scaling Crossbar Switches).
[Diagram: a 4 x 4 crossbar connecting 4 inputs to 4 outputs]
With 40 nm silicon technology the largest crossbar has 100 ports; crossbars are difficult to scale since their complexity is quadratic in the number of ports.
Scaling the Number of Outputs: Trying to Build a Crossbar from Multiple Chips
Building block: a 16x16 crossbar switch
[Diagram: multiple crossbar chips arranged to serve groups of 4 inputs and 4 outputs each]
Clos Networks, from Charles Clos
"In the field of telecommunications, a Clos network is a kind of multistage switching network, first formalized by Charles Clos in 1953, which represents a theoretical idealization of practical multi-stage telephone switching systems." (definition courtesy of Wikipedia)
The goals were:
1. Reduced complexity when compared to a crossbar
2. Studying its properties
3-Stage Clos Network
[Diagram: N = r x n unidirectional inputs enter r ingress crossbars of size n x m; these connect to m middle-stage crossbars of size r x r, which in turn connect to r egress crossbars of size m x n, giving N outputs]
Clos Network Properties (remember, Clos was studying telephone networks!)
• If m ≥ 2n − 1, the Clos network is strict-sense nonblocking, meaning that an unused input on an ingress switch can always be connected to an unused output on an egress switch without having to rearrange existing calls.
• If m ≥ n, the Clos network is rearrangeably nonblocking, meaning that an unused input on an ingress switch can always be connected to an unused output on an egress switch, but existing calls may have to be rearranged by assigning them to different centre-stage switches in the Clos network.
These two conditions are checked in the sketch below.
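A small Python helper (illustrative only) makes the distinction between the two conditions explicit:

```python
# Check of the two Clos conditions above (illustrative helper).
def clos_property(n: int, m: int) -> str:
    if m >= 2 * n - 1:
        return "strict-sense nonblocking"
    if m >= n:
        return "rearrangeably nonblocking"
    return "blocking"

print(clos_property(n=24, m=24))  # rearrangeably nonblocking
print(clos_property(n=24, m=47))  # strict-sense nonblocking (m = 2n - 1)
```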
Additional Material
Manipulating the Clos Network
[Diagram: the unidirectional three-stage network is folded and rotated into a bidirectional network; the middle-stage crossbars become the spine and the outer stages become the access layer]
L2MP vs. TRILL Summary
L2MP is a superset of TRILL.
TRILL standard status as of April 2010:
• The base protocol specification is now a proposed IETF standard (March 2010)
• The control-plane specification will become a proposed standard within months
L2MP: Multi-Topology
• Cisco L2MP provides a mechanism to create multiple L2MP topologies within a single ISIS instance.
• Each topology can be used to forward different traffic types based on a unique classifier.
• Up to 64 topologies are possible.
[Diagram: one ISIS process, with Instance 1 carrying Topologies A and B and Instance 2 carrying Topologies C and D]
L2 Multi-Pathing: Problem Statements

Problem/Challenge | L2MP Solution
Desire to deliver a "workload anywhere" model | L2 flexibility: allowing VLANs anywhere helps to reduce physical constraints on server location
Too much unused bandwidth at Layer 2 | Up to 16 active paths at L2, each path a 16-member port channel, for unicast and multicast
Undesirable failure handling at Layer 2 | An alternative to Spanning Tree drawbacks, leveraging L3 routing concepts
Scaling MAC address tables in larger L2 domains | Hierarchical addressing plus conversational learning allow more efficient use of the available MAC table space (see the sketch below)
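For the last row of the table, a toy model (Python, purely conceptual) shows the idea behind conversational learning: a switch installs a remote MAC only when a locally attached host is actually talking to it, so the table tracks active conversations rather than every MAC in the domain:

```python
# Toy model of conversational MAC learning: install a remote source MAC
# only when it is talking to a host attached to this switch, so the table
# holds active conversations instead of every MAC in the L2 domain.
local_hosts = {"00:01", "00:02"}              # MACs attached locally
mac_table = {}

def learn(src_mac, ingress_port, dst_mac):
    if dst_mac in local_hosts:                # a conversation through us
        mac_table[src_mac] = ingress_port

learn("aa:01", "fabric-1", "00:01")  # installed: destination is local
learn("bb:02", "fabric-2", "cc:03")  # ignored: transit conversation
print(mac_table)                     # {'aa:01': 'fabric-1'}
```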
Virtual Links: An Example
• VL1: LAN service (LAN/IP)
• VL2: no-drop service (storage)
• VL3: delayed-drop service (IPC)
Up to 8 VLs per physical link, with the ability to support QoS queues within the lanes.
[Diagram: DCE CNAs connect over virtual links through the fabric to a LAN/IP gateway toward the campus core/Internet and a storage gateway toward the storage area network]
Agenda
• Unified Fabrics
• VN-Link and Network Interface Virtualization
• Unified Computing System
• Foundation for Private Cloud Infrastructure
• Innovations for Cloud and Virtualization
• Summary
Virtualization Is Inevitable
• As CPU core density per physical server grows, virtualization becomes inevitable, since applications don't yet need all the speed or all the cores
• As VM density per hypervisor instance grows, it introduces I/O management issues: VMs are abstracted and invisible on the network, with no control over traffic utilization
• With the introduction of virtual appliances there is a clear need to apply SLAs to VMs
Processors are scaling horizontally.
VN-Link Brings VM-Level Granularity
Problems:
• VMotion may move VMs across physical ports; policy must follow
• It is impossible to view or apply policy to locally switched traffic
• Traffic on physical links cannot be correlated with the multiple VMs behind it
VN-Link:
• Extends the network to the VM
• Consistent services
• Coordinated, coherent management
[Diagram: VMs on VLAN 101 attach to a Cisco VN-Link switch]
Network Interface Virtualization (NIV): VNTAG Technology
• VNTAG: a special tag added to the L2 frame
• Enables an external switch to forward frames that "belong" to the same physical port
• Carries source and destination interface IDs
• Used in both virtualized and non-virtualized environments (an example is Nexus 5000 + Nexus 2000 as a virtual modular switch)
VNTAG
[Frame layout: DA (6B) | SA (6B) | VNTAG (6B) | 802.1Q (4B) | frame payload | CRC (4B). The VNTAG itself carries the VNTAG Ethertype, the d and p flags, the destination virtual interface, the l (looped) flag and the source virtual interface]
• Cisco and VMware jointly submitted a proposal to the IEEE to enable switching with Network Interface Virtualization.
• The VNTAG is a special tag added to a Layer 2 frame to make it possible for an external switch to forward frames that "belong" to the same physical port.
• Proposed by Joe Pelissier (Cisco Systems) and Andrew Lambeth (VMware)
• Coexists with the VLAN (802.1Q) tag; the 802.1Q tag is mandatory to signal data-path priority
http://www.ieee802.org/1/files/public/docs2008/new-dcb-pelissier-NIV-Proposal-1108.pdf
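To make the tag concrete, here is a Python sketch that packs the 6-byte VNTAG. The field widths (d, p, a 14-bit destination VIF, l, a 12-bit source VIF) and the 0x8926 Ethertype are assumptions drawn from the public proposal linked above, not values given on this slide:

```python
# Sketch of packing the 6-byte VNTAG: d, p, 14-bit destination VIF, l,
# 12-bit source VIF. Field widths and the 0x8926 Ethertype are assumptions
# drawn from the public proposal, not normative values.
import struct

def pack_vntag(dst_vif: int, src_vif: int,
               d: int = 0, p: int = 0, looped: int = 0) -> bytes:
    word = ((d << 31) | (p << 30) | ((dst_vif & 0x3FFF) << 16)
            | (looped << 15) | (src_vif & 0xFFF))
    return struct.pack("!HI", 0x8926, word)

# Frame from vNIC 7, sent unconditionally toward the switch (d = 0,
# p/l/destination left at 0), as in processing step (1) below:
tag = pack_vntag(dst_vif=0, src_vif=7)
```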
VNTAG Processing (1)
The Interface Virtualizer adds the VNTAG:
• A unique source virtual interface for each vNIC
• d (direction) = 0
• p (pointer), l (looped) and the destination virtual interface are undefined (0)
• The frame is unconditionally sent to the switch
[Diagram: OS instances with vNICs sit behind the Interface Virtualizer, which prepends the VNTAG to the frame (application payload / TCP / IP / Ethernet / VNTAG) and sends it up to the Virtual Interface Switch attached to the LAN and SAN]
VNTAG Processing (2)
Virtual Interface Switch ingress processing:
• Extract the VNTAG
• Ingress policy based on the port and the source virtual interface
• Access control and forwarding based on frame fields and virtual interface policy
• Forwarding selects the destination port(s) and destination virtual interface(s)
• The VIS adds a new VNTAG
[Diagram: the Virtual Interface Switch applies policy, access control and forwarding to the frame received from the Interface Virtualizer]
VNTAG Processing (3)
Virtual Interface Switch egress processing:
• Features are applied from the port and destination virtual interface
• VNTAG(2) is inserted: direction is set to 1; the destination virtual interface and pointer select a single vNIC or a list; the source virtual interface and l (looped) flag filter out a single vNIC if the frame is sent back to the source adapter
[Diagram: the re-tagged frame leaves the Virtual Interface Switch toward the Interface Virtualizer]
VNTAG Processing (4)
The Interface Virtualizer (IV) forwards based on the VNTAG:
• Extract the VNTAG
• Upper-layer protocol features are applied from the frame fields
• The destination virtual interface and pointer select the vNIC(s): a single vNIC for unicast, a vNIC list for multicast
• The source virtual interface and looped flag filter out a single vNIC if the source and destination are on the same IV
[Diagram: the Interface Virtualizer delivers the frame to the selected vNIC(s), excluding the originating vNIC in the looped case]
VNTAG Processing (5)
End to end:
• The OS stack formulates frames traditionally
• The Interface Virtualizer adds the VNTAG
• Virtual Interface Switch ingress processing
• Virtual Interface Switch egress processing
• The Interface Virtualizer forwards based on the VNTAG
• The OS stack receives the frame as if directly connected to the switch
VNTAG can be extended to Blade Switches as well as generic LAN switches
VN-Link Architectural Components
Connectivity
Mobility
Operations
VN-Link Solution with VMware: Policy-Based VM Connectivity
Port profiles (for example WEB Apps, HR, DB, DMZ) define policy-based VM connectivity.
VM connection policy:
• Defined in the network
• Applied in Virtual Center
• Linked to the VM UUID
[Diagram: VMs on two vSphere hosts, each running a Nexus 1000V VEM, form a single Nexus 1000V switch managed alongside VMware vCenter]
The three pillars: policy-based VM connectivity; mobility of network and security properties; a non-disruptive operational model.
VN-Link Solution with VMware: Mobility of Network and Security Properties
VMs need to move:
• VMotion
• DRS
• Software upgrades/patches
• Hardware failure
Property mobility:
• VMotion for the network
• Ensures VM security
• Maintains connection state
[Diagram: a VM moves between two vSphere hosts running Nexus 1000V VEMs, and its network and security properties move with it]
VN-Link Solution with VMware: Non-Disruptive Operational Model
VI admin benefits:
• Maintains existing VM management
• Reduces deployment time
• Improves scalability
• Reduces operational workload
• Enables VM-level visibility
Network admin benefits:
• Unifies network management and operations
• Improves operational security
• Enhances VM network features
• Ensures policy persistence
• Enables VM-level visibility
NIC & Switch: Available Options
VN-Link
Why Different Models?
• The software approach (VN-Link with the Nexus 1000V): greater flexibility/scalability, a rich feature set and faster time to market
• The hardware approach (VIC + UCS 6100, VMDirectPath): higher performance and better I/O management
VN-Link Offerings
• Generic adapter + Nexus 1000V (today)
• Cisco VIC + Nexus 1000V (today)
• Cisco VIC + UCS 6100 (UCS: today; Nexus: 2H 2010)
• Cisco VIC + UCS 6100 with VMDirectPath (2H 2010)
VN-Link: Deployment Models Comparison

| VIC + Nexus 1000V | VIC + UCS 6100
Switching & policy enforcement | Distributed in the host | Centralized (performed by the upstream switch in hardware)
Latency | Changes based on the features applied and the packet destination | Deterministic regardless of the features applied and the packet destination
Access-layer features | Physical and virtual access feature sets are uncoupled | Physical and virtual access feature sets are tied to the hardware switch ASIC
VM I/O | Consumes a variable number of host CPU cycles | Offloaded from the host CPU to the upstream switch hardware
vNIC/vEth | The vNIC and vEth live in the host | The vNIC lives in the host; the vEth lives in the upstream switch
Platform extension | Requires hypervisor-specific vswitch development | Simple platform extension via NIC driver development
Preferred use | Advanced features/scalability | Performance

NOTE: VN-Link in hardware with VMDirectPath requires a hardware-specific guest OS NIC driver.
Virtual Networking Standards Components
A continuum of capability with different options, from the low end to full capability.
Virtual Embedded Bridge (VEB)
• There is no standard for a VEB
• A VEB is a standard bridge that "takes shortcuts" thanks to the knowledge it shares with the hypervisor (e.g. it gets the MAC addresses from the hypervisor)
• No frame format modification
• Existing implementations mostly have limited visibility and limited feature sets
• The Nexus 1000V is the industry's most advanced VEB: it offers advanced features such as QoS, ACLs, ERSPAN and NetFlow, and delivers VN-Link
Virtual Ethernet Port Aggregation (VEPA)
• A modification of a VEB: it forwards all frames to the controlling bridge (CB), which applies various policies to those frames and then forwards them back to the VEPA
• What is specified: reflective relay, which allows a frame received on one port to be forwarded back out on the same port
• Pros: can be implemented in most bridges without hardware modification
• Cons: strong limitations when applying policies to a subset of VMs; complicates the management problem
Multichannel
• Allows the CB to explicitly specify the function (a VEB, a VEPA, or an individual VM) within the server to which a frame is to be delivered
• Multichannel without VEPA: management is simplified, but there is no remote replication capability (multicast); equivalent to adding a linecard
• Multichannel with VEPA: equivalent to adding a new switch
• Specified by 802.1Qbc; its application to virtualized environments is specified by 802.1Qbg
Port Extension
• Adds two critical functions to multichannel: frame replication (multicast) and cascading
• Replication: with multichannel the CB must transmit the same frame multiple times; port extenders instead perform the frame replication
• Cascading: PEs may be installed up the hierarchy until a switch with the desired capability is reached
• A PE may be connected to VEPAs, VEBs and/or individual VMs
• PE + CB = an Extended VLAN Bridge, managed as a single entity
802.1Qbh Introduction
P802.1Qbh specifies three major items:
• A Port Extender
• An M-Component, which is used to make a Port Extender
• An EVB Controlling Bridge, a bridge that is capable of being extended using Port Extenders
The combination of the EVB Controlling Bridge and the Port Extenders is referred to as an Extended VLAN Bridge (E-VLAN Bridge).
Cisco Nexus 1000V Architecture
Virtual Supervisor Module (VSM):
• A virtual or physical appliance running Cisco NX-OS (supports HA)
• Performs management, monitoring and configuration
• Tight integration with VMware vCenter
Virtual Ethernet Module (VEM):
• Enables advanced networking capability on the hypervisor
• Provides each VM with a dedicated "switch port"
• A collection of VEMs = one vNetwork Distributed Switch
Cisco Nexus 1000V installation:
• ESX and ESXi
• VUM and manual installation
• The VEM is installed and upgraded like an ESX patch
[Diagram: redundant Nexus 1000V VSMs, integrated with vCenter, manage the VEMs embedded in each vSphere host carrying the VMs]
Agenda
• Unified Fabrics
• VN-Link and Network Interface Virtualization
• Unified Computing System
• Innovations for Cloud and Virtualization
• Key Data Center Innovations
• Summary
Server Deployment Today
• Over the past 10 years:
  An evolution of size, not thinking
  More servers and switches than ever
  More switches per server
  Management applied, not integrated
• An accidental architecture: still a 1980s PC model
• The result: complexity
  More points of management
  More difficult to maintain policy coherence
  More difficult to secure
  More difficult to scale
[Diagram: every group of servers needs its own management server]
Server Deployment Today
• Embed management
• Unify fabrics
• Optimize virtualization
• Remove unnecessary switches, adapters and management modules
• Less than 1/3rd the support infrastructure
Our Solution: Cisco UCS
• A single system that encompasses:
  Network: unified fabric
  Compute: industry-standard x86
  Virtualization optimized
• A unified management model with dynamic resource provisioning
• Efficient scale: Cisco network scale and services; fewer servers with more memory
• Lower cost: fewer servers, switches, adapters and cables; lower power consumption; fewer points of management
Cisco Systems IT - Key Benefits of Unified Computing System
Unified Data Center: A Case Study

Metric | Traditional Design | Unified Fabric | Comparison
Rack count | 135 | 72 | 63 fewer
Fiber cables (48-port) | 4320 | 1008 |
Copper cables (24-port) | 2160 | 300 |
Cables in total | 6480 | 1308 | 5172 fewer

Power | Traditional (kW / % of facility) | Unified Fabric (% of facility)
Facility | 1000 kW |
Storage | 247 kW (25%) | 21%
DC network | 186 kW (19%) | 8%
Other network | 79 kW (8%) | 8%
Available for servers | 488 kW (49%) | 63% (129% of traditional)

Savings of ~5000 cables; ~30% more power available for servers.
Unified Data Center & UCS: A Case Study (10,000 sq ft, 1 MW)

Metric | Traditional | Unified Fabric | UCS
1. DC efficiency | 100% | 130–150% | 170–200%
2. Cabling | $2.7 million | $1.6 million | $1.6 million
3. Physical server count | 720 | 930–1080 | 1200–1400
4. VM count | 7,200 | 9,300–10,800 | 12,000–28,000

Power optimization and ~40% savings from cabling: 12,000 to 28,000 VMs in the same size DC!
UCS Optimizes Data Center Design
Traditional blade server vs. Cisco Unified Computing.
UCS Physical Components
• Fabric Interconnects (6120, 6140): 20- and 40-port (downlink) versions; upstream FC and DCE connectivity to the existing SAN and LAN
• Stateless I/O module (2104): 4 x 10Gig uplinks (SFP+); transport for FC and Ethernet traffic using FCoE
• Stateless blade enclosure (5108): 8 blade slots; no management modules
• Compute blades (B250, B200): Intel Nehalem 2-socket EP x5500; the full-width blade scales to 384 GB
• Adapter / mezzanine cards (82598KR, M71KR, M81KR): Emulex, QLogic, Intel Oplin and Cisco options
Overall System (Front)
[Photo: the top-of-rack switch above the blade chassis]
Physical Scalability
Uplink count trades chassis scale against bandwidth per chassis:
• 40 chassis with 8 blades each: one 10G connection from each FEX module, i.e. 20Gb of bandwidth shared across 8 blades
• 10 chassis with 8 blades each: four 10G connections from each FEX module, i.e. 80Gb of bandwidth shared across 8 blades
• Flexible bandwidth allocation
IOM Connections: Chassis Backplane View
• Half-width servers: 1 mezzanine card (one A and one B path)
• Full-width servers: 2 mezzanine cards (two A and B paths)
[Diagram: blades 1 through 8 in the chassis each connect to IOM1 (Fabric A) over path A and to IOM2 (Fabric B) over path B, giving HA]
UCS Fabric Extender - VNtag
Ethernet End Host Mode
[Diagram: the fabric interconnect's border ports face the LAN and its server ports face the servers; End Host Mode on the left is contrasted with Switching Mode on the right]
End Host Mode
• A UCS Fabric Interconnect operating in End Host Mode is called an EH-node
• An EH-node appears to the external LAN as an end station with many adapters
• An EH-node has two types of ports (by configuration):
  Border ports (which can be port channels) connect to the upstream L2 network
  Server ports connect to servers
• The EH-node does not participate in STP on the border ports:
  Reduces the scale of the STP control plane
  Active-active use of redundant links toward the upstream L2 network
  Traffic CANNOT be forwarded from one border port to another border port
• End Host Mode is the default Ethernet switching mode, and should be used if either of the following is used upstream:
  Layer 2 switching for L2 aggregation
  A Virtual Switching System (VSS) aggregation layer
UCS Innovations Provide Multiple Benefits (Applies to Both B-Series and C-Series)
Configuration Points
• Server: identity (UUID); adapters (number; type: FC, Ethernet; identity; characteristics); firmware/BIOS (revisions, configuration settings)
• Network: uplinks; LAN settings (VLAN, QoS, etc.); firmware (revisions)
• Storage: optional disk usage; SAN settings (LUNs, persistent binding, VSAN); firmware (revisions)
Service Profile
A service profile captures all of these configuration points in one object:
• Server: identity (UUID); adapters (number; type: FC, Ethernet; identity; characteristics); firmware (revisions, configuration settings)
• Network: uplinks; LAN settings (VLAN, QoS, etc.); firmware (revisions)
• Storage: optional disk usage; SAN settings (LUNs, persistent binding, VSAN); firmware (revisions)
Stateless Computing: Integrated Physical Mobility
• Attributes are decoupled from the hardware components; not just identity: firmware, boot device, BIOS, etc.
• Dynamic provisioning: deploy in minutes, not days; simplified infrastructure repurposing; seamless server mobility; integrated with 3rd-party tools
Example service profiles:
• Service Profile: DataBase. Network1: DB_vlan1; Network1 QoS: Platinum; MAC: 08:00:69:02:01:FC; WWN: 5080020000075740; Boot order: SAN, LAN; FW: DataBaseSanBundle
• Service Profile: ESX-Host. Network1: esx_prod; Network1 QoS: Gold; MAC: 08:00:69:11:19:EQ; WWN: 5080020000074312; Boot order: SAN, LAN; FW: ESXHostBundle
• Service Profile: WebServer. Network1: www_prod; Network1 QoS: Gold; MAC: 08:00:69:10:78:ED; Boot order: LOCAL; FW: WebServerBundle
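The essence of the model is that a profile is data, not hardware. The following Python sketch (purely conceptual; it does not use the UCS Manager API) mirrors the DataBase profile above and "associates" it with an arbitrary blade slot:

```python
# Conceptual sketch only (no UCS Manager API): a service profile is pure
# data, and "associating" it binds that identity to whichever blade is free.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ServiceProfile:
    name: str
    vlan: str
    qos: str
    mac: str
    wwn: Optional[str]
    boot_order: Tuple[str, ...]
    firmware_bundle: str

db = ServiceProfile("DataBase", "DB_vlan1", "Platinum", "08:00:69:02:01:FC",
                    "5080020000075740", ("SAN", "LAN"), "DataBaseSanBundle")

def associate(profile: ServiceProfile, blade_slot: int) -> None:
    # The real system would program identity, firmware and boot policy
    # into the blade; the blade itself holds no fixed state.
    print(f"slot {blade_slot} now boots as {profile.name} ({profile.mac})")

associate(db, blade_slot=3)  # repurpose any free slot in minutes
```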
Stateless Computing: Integrated Firmware Management
Firmware lives throughout the Unified Computing System: software for the Ethernet interconnect; software for the FC interconnect; system/device management software; power, fan and temperature monitoring; BIOS, BMC and embedded hypervisors; NICs and HBAs (Fibre Channel and Ethernet); other components (HDD controller, etc.).
Multiple firmware images:
• Create dependencies
• Require integration testing (application certification, HW vendor certification)
• Mitigation is labor-intensive
Integrated bundles:
• Defined in the service profile
• Applied automatically
• Remove human error
UCS C250 M1 Extended Memory Blade
Cost savings with Cisco Extended Memory:
• 70%-80% lower memory costs
• Unmatched high-end capacity
• Industry-standard DDR3
NOTE: DDR3 10600 memory pricing as of 9/29/09.
Changing the economics
The Cisco Virtual Interface Card
• A converged network adapter designed for both single-OS and VM-based deployments: virtualized in hardware, PCIe compliant
• High performance: 2x 10Gb, low latency, high-bandwidth support
• Flexible configuration and unified management: supports up to 128 vNICs (Fibre Channel or Ethernet), managed via UCS Manager
• VMDirectPath with VMotion: bypasses the vSwitch and hypervisor for maximum performance; available in 2010 as a software upgrade
UCS Certification & Integration Status (January 13, 2010)
Status ranges from complete to in process or planning.

Business apps:
• SAP: Linux (RHEL 4.8, 5.3 & SLES 10 SP3), Windows 2008 EE, SAP BoE
• Oracle: Apps/CRM
• Microsoft: Exchange 2003, 2007, 2010

System management:
• IBM: Tivoli Monitoring 6.2.1, Omnibus 7.2.1, Provisioning Manager 7.1, Network Manager 3.8
• BMC: BladeLogic 7.7 SP, BladeLogic 8.0 BCAN
• HP: SA 7.8, NA 7.5
• CA: Spectrum v1.0, eHealth
• Microsoft: System Center 2008, SCVMM ProPack

Database:
• Oracle: DB EE 10gR2/11gR1 (SI & RAC) for Linux (OEL 5.3 & RHEL 5.3), TimesTen
• Microsoft: SQL Server 2008, 2008 R2
• IBM: DB2

Virtualization software:
• VMware: vSphere 4, 4i (incl. vCenter), Infrastructure 3.5 Update 4
• Microsoft: Hyper-V on Windows 2008
• Oracle: OVM 2.1.5, OVM 3.0
• Citrix: XenServer 5.5
• Other: Red Hat RHEL 5.4 with KVM, Novell PlateSpin

Operating systems:
• Microsoft: Windows Server 2003 R2, 2008, 2008 R2
• Red Hat: RHEL 4.8, 5.x, 6.0
• Oracle: OEL 5.3 (B-Series)
• Sun: Solaris 10 05/09 (UCS B200 M1 only), Solaris x86
• Novell: SUSE Linux Enterprise Server 11, SLES 10 SP3, SLES 11 SP1

Disk storage:
• EMC: Ionix v1.0, Compliance Management, Avamar
• Other: NEC, Altiris 6.9, Hitachi, Novell, Oracle, NetApp
Cisco Unified Computing System
The Cisco Unified Computing System is designed to dramatically reduce data center total cost of ownership while simultaneously increasing IT agility and responsiveness.
Reduces total cost of ownership:
• CAPEX: up to 20% or greater reduction
• OPEX: up to 30% or greater reduction
• Cooling- and power-efficient
Increases business agility:
• Provision applications in minutes instead of days
• Automation reduces service outages
• Just-in-time resource provisioning
Investment protection:
• Industry standards-based
• Co-exists with existing data center infrastructure
• Leverages existing management applications via the API
Agenda
• Unified Fabrics
• VN-Link and Network Interface Virtualization
• Unified Computing System
• Foundation for Private Cloud Infrastructure
• Innovations for Cloud and Virtualization
• Summary
Transition to Unified Data Center: Foundation for Cloud Services
[Timeline: consolidation → virtualization → utility → automation → market. The journey runs from network and data center consolidation, through data center virtualization and a Unified Fabric architecture, to Unified Computing, and on to private and SP cloud services and the Inter-Cloud]
How To Deal With ESX Instant Bursting?
Best practice before UCS: more than 1 hour to add capacity. Terremark wanted minutes!
Vblock Infrastructure Packages: A New Way of Delivering IT
• A rapid-deployment model for virtualized infrastructure
• Pre-integrated and validated solutions reduce total cost of ownership
• Service-level driven, through predictable performance and operational characteristics
• Improved compliance/security and reduced risk
[Diagram: a Vblock Infrastructure Package combines compute, network, virtualization and storage; solution packages layer operating systems, applications and information on top]
Accelerate time to results; reduce TCO.
IT Transformation Has Begun
Consolidation → virtualization → automation → private cloud → public/hybrid clouds
Open questions along the way: security & compliance? Standardization? Integration? SLAs?
IT Transformation Has Begun
Consolidation → virtualization → automation → private cloud → public/hybrid clouds
…and Vblock Infrastructure Packages accelerate infrastructure virtualization and private cloud adoption.
Vblock: A New Way of Delivering IT to Business
• Production-ready: integrated and tested units of virtualized infrastructure; best-of-breed virtualization, network, compute, storage, security and management products
• SLA-driven: predictable performance and operational characteristics
• Reduced risk and compliance: a tested and validated solution with unified support and end-to-end vendor accountability
Accelerate time to results; reduce TCO.
Vblock Infrastructure Packages: A New Way of Delivering IT
Benefits:
• Accelerate the journey to pervasive virtualization and private cloud computing while lowering risk and operating expenses
• Ensure security and minimize risk with certification paths
• Support and manage Service Level Agreements: resource metering and reporting; configuration and provisioning; resource utilization
• Vblock is a validated platform that enables seamless extension of the environment
Secure, extensible, SLA-driven infrastructure.
Vblock Infrastructure Packages: Scalable Platform for Building Solutions
• Vblock 2: a high-end configuration, extensible to meet the most demanding IT needs. Typical use case: business-critical ERP and CRM systems
• Vblock 1: a mid-sized configuration offering a broad range of IT capabilities for organizations of all sizes. Typical use case: shared services (email, file & print, virtual desktops, etc.)
• Vblock 0: an entry-level configuration that addresses small datacenters or organizations
Vblock Infrastructure Packages serve both development and production deployments.
Vblock Scaling
• The modular architecture enables graceful scaling of a Vblock environment
• Consistent policy enforcement and IT operational processes
• Add capacity to an existing Vblock, or add more Vblocks
• Mix and match Vblocks to meet specific application needs
[Diagram: several Vblock 1s and a Vblock 2 attached to a unified fabric]
Graceful scaling, consistent operations.
Vblock Architectural Solution: Modular, Scalable, Repeatable, Predictable
• Simplifies expansion and scaling
• Add storage or compute capacity as required
• Can connect to the existing LAN switching infrastructure
• Graceful, non-disruptive expansion
• A self-contained SAN environment with a known, standardized platform and processes
• Enables the later introduction of FCIP, SME, etc. for multi-pod designs
[Diagram: a Vblock base plus Vblock expansion, Vblock storage expansion and Vblock compute expansion]
Vblock 1 Components
• Compute: Cisco UCS B-Series
• Network: Cisco Nexus 1000V, Cisco MDS 9506
• Storage: EMC CLARiiON CX4
• Hypervisor: VMware vSphere 4
• Management: EMC Ionix Unified Infrastructure Manager, VMware vCenter, EMC NaviSphere, EMC PowerPath, Cisco UCS Manager, Cisco Fabric Manager
Vblock 1 Configuration Details

Recommended minimum configuration:
• Unified Computing System: 2 UCS 5100 chassis + 2 Fabric Extenders; 4 power supplies; 8 B200-series blades (6 x 48GB and 2 x 96GB RAM) for a total of 64 cores and 480 GB RAM; no local disk drives; 2 Fabric Interconnect 6120s (20 10GE/Unified Fabric ports each); 4 x 10GE uplinks per 6120; 8 x 4Gb Fibre Channel to the MDS 9506; vSphere 4 Enterprise Plus including the Nexus 1000V
• SAN: 2 MDS 9506; 24 x 2/4/8G Fibre Channel ports, expandable to 98 x 2/4/8G FC ports; option for the SSM-16
• CLARiiON CX4-480, 45 TB: 2 service processors (16 Fibre Channel ports); 8 disk array enclosures (120 disk drives); a mixture of drive types optimized for the best price/performance ratio (20 SATA drives @ 1 TB, 10 Flash drives @ 200 GB, 90 Fibre Channel drives @ 300 GB)
• NS-G2 NAS gateway (optional): CIFS/NFS access

Recommended maximum configuration:
• Unified Computing System: 4 UCS 5100 chassis + 4 Fabric Extenders; 8 power supplies; 32 B200-series blades (24 x 48GB and 8 x 96GB RAM) for a total of 256 cores and 1,920 GB RAM; no local disk drives; 2 Fabric Interconnect 6120s (20 x 10GE/Unified Fabric ports each); 4 x 10GE uplinks per 6120; 8 x 4Gb Fibre Channel to the MDS 9506; vSphere 4 Enterprise Plus including the Nexus 1000V
• SAN: 2 MDS 9506; 24 x 2/4/8G Fibre Channel ports, expandable to 98 x 2/4/8G FC ports; option for the SSM-16
• CLARiiON CX4-480, 90 TB: 2 service processors (16 Fibre Channel ports); 16 disk array enclosures (240 disk drives); a mixture of drive types optimized for the best price/performance ratio (40 SATA drives @ 1 TB, 20 Flash drives @ 200 GB, 180 Fibre Channel drives @ 300 GB)
• NS-G2 NAS gateway (optional): CIFS/NFS access

Balanced system performance, capability and capacity.
Vblock 1: Storage Performance & Capacity
Note: 5,000 users can be supported at an IOPS utilization of 107%.
Balanced system performance, capability and capacity.
Vblock 2 Components
• Compute: Cisco UCS B-Series
• Network: Cisco Nexus 1000V, Cisco MDS 9506
• Storage: EMC Symmetrix V-Max
• Hypervisor: VMware vSphere 4
• Management: EMC Ionix Unified Infrastructure Manager, VMware vCenter, EMC Symmetrix Management Console, EMC PowerPath, Cisco UCS Manager, Cisco Fabric Manager
Vblock 2: Storage Performance & Capacity
Balanced system performance, capability and capacity.
Use Case: Application Consolidation. Accelerate IT Standardization & Simplification
[Diagram: each workload (email, web, custom, database and virtual desktops) is deployed from the same stack of templates: an application template, a fabric template, a compute template and a storage template]
Enable virtualization at scale; simplify IT.
Vblock: O/S & Application Support
• Vblock accelerates the virtualization of applications by standardizing IT infrastructure and IT processes
• Broad range of O/S support
• Over 300 enterprise applications explicitly supported
• Vblock-validated applications: SAP; VMware View 3.5 (View 4 in test); Oracle RAC; Exchange; SharePoint
Use Case: Acquisition of a 500-Person Sales Force. Consolidation and Rapid Provisioning via Templates
• Create 500 virtual desktops to enable the new team to access corporate information and applications
• Increase database capacity to support the sales consolidation effort
• Create 500 new mailboxes
[Diagram: the virtual desktop, database and email workloads are each provisioned from application, fabric, compute and storage templates]
Vblock Use Case: VMware View
• Accelerates VDI adoption: simplifies desktop support; improves security, data-leakage protection and compliance; reduces TCO
• An enterprise-class tiered storage environment: supports VMware View today, with graceful expansion to support application workloads later
Accelerate virtual desktop adoption.
Agenda
• Unified Fabrics
• VN-Link and Network Interface Virtualization
• Unified Computing System
• Foundation for Private Cloud Infrastructure
• Innovations for Cloud and Virtualization
• Summary
Network HA & Application HA: Implications of the Network Technology Used
[Chart: application resilience plotted against network resilience (stability, convergence time) over time. As the network technology evolves from L2 STP, to L2 STP with best practices, to VSS or vPC, to OTV/VPLS, to OTV + TRILL, and to L3 routing, high availability evolves from WARM, to HOT with DC coupling, to HOT with DC control-plane independence, to HOT with total DC independence, and finally to HOT with total DC independence plus internal DC resilience. The dividing line runs between isolated L2 and L2-over-L3 designs]
DCI VLAN Extension: Key Technical Challenges
• L2 control plane: STP domain scalability; STP domain isolation; L2 gateway redundancy
• Inter-site transport: long-distance link protection with fast convergence; point-to-point and multipoint bridging; path diversity; L2-based load repartition; optimized egress and ingress routing; extension over an IP cloud; multicast optimization
• L2 data plane: control of bridging data-plane flooding and broadcast storms; outbound MAC learning
The technology challenge: L2 is weak, and IP is not mobile.
Innovations for Cloud & Virtualization
Compute resources become part of the cloud, with their location transparent to the user:
• L2 domain elasticity: vPC, L2MP/TRILL, OTV LAN extensions
• IP localization: HSRP anycast, LISP mobility
• VM awareness: VN-Link, port profiles, VN-Link notifications during VMotion
• Storage elasticity: FCIP, write acceleration, FCoE, inter-VSAN routing
• Device virtualization: VDCs, VRF enhancements
[Diagram: OTV interconnects the data centers, carrying VMotion traffic between them]
Overlay Transport Virtualization: Technology Pillars
Protocol learning:
• Built-in loop prevention
• Preserves failure boundaries
• Seamless site addition and removal
• Automated multi-homing
Packet switching:
• No pseudo-wire state maintenance
• Optimal multicast replication
• Multipoint connectivity
• A point-to-cloud model
OTV is a "MAC in IP" technique for supporting Layer 2 VPNs over any transport.
Overlay Transport Virtualization: MAC Learning Protocol
• OTV uses a protocol to proactively advertise MAC reachability (control-plane learning). We will refer to this protocol as the "overlay Routing Protocol" (oRP).
• oRP runs in the background once OTV has been configured.
• No configuration is required by the user for oRP to operate.
[Diagram: the West (IP A), East (IP B) and South (IP C) sites build their MAC tables across the core via oRP]
Neighbor Discovery
From the OTV control-plane perspective, each edge device is adjacent to all the other edge devices.
Multicast-enabled core:
• Edge devices join a common multicast group
• All signaling takes place over the multicast group
• Multipoint-optimized traffic replication
Non-multicast core:
• Edge devices register with an adjacency server
• The adjacency list is distributed to all participating devices
• Point-to-point unicast peering for signaling
[Diagram: OTV edge devices at the West (IP A), East (IP B) and South (IP C) sites exchange control-plane traffic across the core]
Overlay Transport Virtualization: OTV Data Plane, Unicast
[Diagram: MAC 1 and MAC 2 sit behind the West edge device (IP A), MAC 3 and MAC 4 behind the East edge device (IP B). West's MAC table maps VLAN 100 MAC 1 and MAC 2 to local Ethernet ports (Eth 1, Eth 2) and MAC 3 and MAC 4 to IP B; East's table is the mirror image. A frame from MAC 1 to MAC 3 is (1) looked up at Layer 2 in the West site, (2) encapsulated as IP A → IP B, (3) carried as OTV inter-site traffic across the core, (4) decapsulated at East, and (5, 6) delivered to MAC 3 after a second Layer 2 lookup. The MAC table thus contains MAC addresses reachable through IP addresses]
• No pseudo-wire state is maintained.
• The encapsulation is done based on a destination lookup rather than on a circuit lookup.
Overlay Transport Virtualization: OTV Data Plane Encapsulation
• OTV uses Ethernet-over-GRE encapsulation and adds an OTV shim to the header to encode VLAN information.
• The VLAN field of the 802.1Q header is copied over into the OTV header.
• The overhead must be taken into account with respect to the MTU within the core. This is nothing new: VPLS has its own overhead.
[Frame layout: outer DMAC (6B) | SMAC (6B) | 802.1Q (4B) | Ethertype (2B) | IP header (20B) | GRE header (4B) | OTV header (4B) | original frame | CRC (4B). The IP, GRE and OTV headers add 28 bytes of overhead, on top of 14 (18 with 802.1Q) bytes of outer L2 header]
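The overhead arithmetic above translates directly into an MTU sizing rule for the core; a small helper (illustrative only) computes it:

```python
# The 28-byte IP/GRE/OTV overhead plus the outer L2 header sets the core
# MTU needed to carry full-size site frames without fragmentation.
def otv_core_mtu_needed(site_mtu: int = 1500, dot1q: bool = True) -> int:
    ip_gre_otv = 20 + 4 + 4            # the 28 bytes called out above
    outer_l2 = 18 if dot1q else 14     # outer Ethernet header
    return site_mtu + ip_gre_otv + outer_l2

print(otv_core_mtu_needed())  # 1546
```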
Elements of the Evolving Data Center Access: Assembling the New Data Center Edge
[Diagram: at the core/aggregation layer sit the Nexus 7000 and MDS; at the access layer, Nexus 5000/7000 switches and Nexus 2000 fabric extenders connect 1G and 10GE rack-mount servers over 10 GigE/FCoE; 1G and 10GE blade servers attach via pass-through modules (HP/IBM/Dell), 10GE blade switches (HP) or the N4K DCB blade switch (IBM); UCS compute pods of blade chassis attach directly to the fabric; at the virtual access layer, a Nexus 1000V serves the VMs on each virtualized host]
Questions?
BRKDCT-2002 Recommended Reading
Complete Your Session Evaluation
• Please give us your feedback! Complete the evaluation form you were given when you entered the room.
• This is session BRKDCT-2002. Don't forget to complete the overall event evaluation form included in your registration kit.
YOUR FEEDBACK IS VERY IMPORTANT FOR US! THANKS