Cisco Confidential
Next-Gen Data Center Solutions: Convergence, Scale, Open
Nexus 2K–9K, MDS 9K, UCS Invicta
Hawaii Technology Day -- February 2014
John Lawrence, Data Center Consulting Systems Engineer
Jul 14, 2015
Agenda
• Data Center Trends – State of the Network
• Portfolio – Nexus 2K–9K, MDS
• Unified Fabric Differentiated Values: DCI, Core, ITD, RISE, Programmability/Automation, Campus Core?
• UCS Invicta – All-Flash-Based Storage
• Summary
Data Center Trends
These Trends are Changing the Role of IT
NETWORK AT THE CENTER
TECHNOLOGY TRANSITIONS: MOBILE | CLOUD | NEW BREED OF APPS | DATA & ANALYTICS | INTERNET OF THINGS
BUSINESS IMPLICATIONS: GROWTH & PRODUCTIVITY | NEW BUSINESS MODELS | EXPERIENCE EXPECTATIONS | GLOBALIZATION | SECURITY & COMPLIANCE
FASTER SERVER REFRESH CYCLE: ~3 YEARS
FASTER NETWORK REFRESH CYCLE: ~5 YEARS
Up to 18 cores per socket and trending up
10G LOM/FlexLOM shipping
New server platforms enabling higher I/O throughput
Big data increasing east-west traffic
DATA CENTER IP TRAFFIC GROWTH
25% CAGR (2012-2017)
HYPERVISOR
Virtual machine density driving I/O performance
Avg. 11 VMs per Linux server
WORKLOADS DRIVING SERVER PORT BANDWIDTH, VM DENSITY, BIG DATA
Cisco – Leading The Data Center Transformation
VM-Fabric Integration
Open Networking
2008 → 2009 → 2014
Unified Fabric
LAN SAN
Fabric Computing
Network
Compute Storage Access
Application Centric Infrastructure
Network
Apps Policy
The Next Wave
#1 in DC Networking #1 in Unified Computing #1 IT Infrastructure
InterCloud
IoE / IoT Process
People
Things Data
2015+
UNIFIED FABRIC PORTFOLIO: Nexus 2K–9K, MDS
Cisco Nexus 5000/5600
Cisco Nexus 7000/7700
Cisco Nexus 3000/3100
Cisco Nexus 2000/2300
DC and Cloud Networking Portfolio – Nexus Family Ready to Lead the 10G/40G and 100G Transition and Beyond
Cisco Nexus 9000
APIC AVS ACI
Cisco Nexus 1000V
OPEN APIs/ Open Source/ Application Policy Model
HIGH PERFORMANCE FABRIC 1/10/40/100 GE
SCALABLE SECURE SEGMENTATION VXLAN
Ecosystem
DELIVERING TO YOUR DATA CENTER NEEDS
Resilient, Scalable
Fabric
Workload Mobility
Within/ Across DCs
LAN/SAN
Convergence
Operational
Efficiency—P-V-C
Architectural
Flexibility
Cisco Nexus 6000
What If You Could…
Simplify your data center operations and manageability through an open, efficient architecture?
Scale applications without sacrificing performance?
Have centralized, application-driven policy for automation, management, and visibility?
You Can!!!!!
What ACI Brings You
1. Operationally Simple • Lowest TCO • Zero-Touch Provisioning
2. Performance and Scale • Health Metrics • Visibility/Telemetry
3. Open APIs/Open Source • Secure Multi-Tenancy • Extensive Ecosystem
APPLICATION-CENTRIC POLICY MODEL
PHYSICAL + VIRTUAL
OPEN AND SECURE
Nexus 9000 Foundational Switching Platforms for the Next Decade
Industry-leading price/performance and port density: fastest 10G/40G/100G platform with Merchant+ silicon
Programmability/Open APIs: Linux containers, Python, PowerShell, Puppet, Chef… ideal for DevOps!
15% better power & cooling, 2.8x better reliability
Innovation: object model, no backplane, no midplane, health scores
$ Multi-million savings: 40/100G on existing cables using BiDi optics; non-disruptive migration to 40G
Nexus 9000 1/10/40/100G
Standalone / ACI Ready
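The programmability claims above can be made concrete. As a hedged sketch: Nexus switches expose NX-API, which accepts CLI commands wrapped in JSON-RPC over HTTP. The command list below is illustrative, and endpoint/authentication details vary by NX-OS release.

```python
import json

def nxapi_cli_request(commands):
    """Build an NX-API JSON-RPC payload: one 'cli' method call per
    command. (The JSON-RPC 'cli' method is NX-API's documented format;
    auth and transport handling vary by NX-OS release.)"""
    return [
        {"jsonrpc": "2.0", "method": "cli",
         "params": {"cmd": cmd, "version": 1}, "id": i + 1}
        for i, cmd in enumerate(commands)
    ]

payload = json.dumps(nxapi_cli_request(["show version", "show interface brief"]))
# POSTing `payload` to http://<switch>/ins (content type
# application/json-rpc) would run both commands; NX-API must first be
# enabled on the switch with `feature nxapi`.
```

The same payload builder works for configuration commands, which is what tools like Puppet and Chef modules drive under the hood.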
Cisco Nexus 9000 Series Switches High-Performance 10 Gbps/40 Gbps/100 Gbps Switch Family
FLEXIBLE FORM FACTORS CAN ENABLE VARIABLE DATA CENTER DESIGN AND SCALING
Nexus® 9300 Nexus® 9500
48 1/10G SFP+ & 12 QSFP+
SCALABLE 1GE/10Gbps/40Gbps/100GE PERFORMANCE
PERFORMANCE | PORTS | PRICE | PROGRAMMABILITY | POWER
96 1/10G-T & 8 QSFP+
12-port QSFP+ GEM
ACI Ready Leaf Line Card 48 1/10G-T & 4 QSFP+
ACI-ready Leaf line card 48 1/10G SFP+ & 4 QSFP+
Aggregation line card 36 40G QSFP+
C9500 8-Slot
Nexus 9300 Platform Architecture
Nexus® 9396PQ • 960G • 48-port 1/10 Gb SFP+ and 12-port 40 Gb QSFP+ • 2 RU
Nexus 9396TX (future) • 960G • 48-port 1/10GBase-T and 12-port 40 Gb QSFP+ • 2 RU
Nexus 93128TX • 1,280G • 96-port 1/10G-T and 8-port 40 Gb QSFP+ • 3 RU
Nexus 9300 common • Redundant fans and power supplies • Front-to-back and back-to-front airflow • Dual-core CPU with default 64 GB SSD
Uplink Module • 12-port 40 Gb QSFP+ • Additional 40 MB buffer • Full VXLAN gateway, bridging and routing capability
Nexus 9500 Platform Architecture 8-Slot Modular Chassis
Nexus® 9508 Front View Nexus 9508 Rear View
8 line card slots; max 3.84 Tbps per slot (duplex)
Redundant supervisor engines
3000 W AC power supplies 2+0, 2+1, 2+2 redundancy
Supports up to 8 power supplies
Redundant system controller cards
3 or 6 fabric modules (behind fan trays)
3 fan trays, front-to-back airflow
No mid-plane for LC-to-FM connectivity
Designed for Power and Cooling Efficiency Designed for Reliability
Designed for Future Scale
Chassis Dimensions: 13 RU (H) x 17.5 in. (W) x 30 in. (D)
Nexus 9500 Platform Architecture First Modular Switch With No Backplane
Nexus 9508 Front View Nexus 9508 Rear View
Nexus 9508 Backplane-free Modular Design
High Density Connectors
1. You can leverage existing Nexus/IP networks
2. You can leverage existing classical Ethernet/IP networks
3. Deploy ACI: new PoDs for cloud build-outs
4. Extend the ACI model; preserve IP networks, L4-7 services, hypervisors
Existing Nexus PoDs
(2k-7k)
ACI POLICY
ACI Fabric
Nexus 9500 / 9300
Nexus 9300
Nexus 7000 DCI
Investment Protection: Extend ACI to Installed Base
PROFILE
Nexus 9300
ESX Hyper-V OVS Bare Metal
Bare Metal
ESX Hyper-V OVS
AVS Nexus 9300
Nexus 7000: Industry’s most Comprehensive Data Center Feature Set
High-performance connectivity for EoR and core LAN and SAN deployments: 1/10/40/100GE, FCoE
Feature- and service-rich for diverse deployments: OTV, FabricPath, MPLS, VXLAN, DFA, NAM, ITD, RISE, LISP, VDC
Rich programmability features for operational simplicity: OpenFlow, Puppet and Chef support, JSON, REST APIs, Python
Proven high availability for mission-critical deployments: hitless ISSU, stateful process restart
$ Multi-million savings for 40G using existing cables + BiDi optics; non-disruptive migration to 40G
Nexus 7000 1/10/40/100G
Extending The Cisco Nexus 7000 Series Building On Cisco Nexus 7000 Series Proven Technology
Cisco Nexus F3-Series Modules
Cisco Nexus 7700 Platform Switches
Cisco Nexus® 7000 Series
Industry’s Most Proven Data Center Switching Platform
Cisco Nexus 7000 Series Switches
Common
Common
Common
Cisco Nexus 7700 Platform Switches Value Proposition Of The Cisco Nexus 7700 Platform
Cisco Nexus® 7700 18-Slot – 26 RU
Cisco Nexus 7700 10-Slot – 14 RU
Cisco Nexus 7700 6-Slot – 9 RU
Environmental: true front-to-back airflow
Smaller: 33% more compact
Fabric: 1.32 Tbps
                 Nexus 7718        Nexus 7710          Nexus 7706
Application      Large Spine/Core  Spine/Core/Agg/DCI  Small Core/Agg/DCI
1/10G density    768               384                 192
40/100G density  384/192           192/96              96/48
Cisco Nexus 7702 Compact Form Factor for Remote/Small Deployments
Deployment Flexibility
• Small-to-medium DCI solution: most comprehensive DCI feature set in the industry (OTV, LISP, MPLS, VPLS; VXLAN/EVPN hardware-ready)
• Compact service chassis: ideal for high-performance RISE and ITD services
• Comprehensive Layer 2/3 feature set: ideal for small data center aggregation and campus cores

Operation and Feature Consistency
• Supports all current and future Nexus 7700 line cards, supervisors and power supplies
• Same proven common architecture, ASICs and Cisco NX-OS software
• Same software train across Nexus 7700 and 7000 Series, ensuring consistency

• 3RU form factor based on N7700 architecture: one supervisor and one N7700 I/O module, two power supplies, no fabric modules, front-to-back airflow
• Up to 48 x 1/10GE or 24* x 40GE or 12 x 100GE non-blocking ports
* With breakout cables this line card can offer up to 76p 10GE + 5p 40GE
Nexus 7000 F3-Series Module Leadership, Features, and Continued Investment
MOST COMPREHENSIVE
Core/Agg, Spine/Leaf DCI and SAN Deployments
READY FOR
Multi-Tenancy, Programmable
Energy Efficient
ENVIRONMENTAL
SPINE AND BORDER LEAF AGGREGATION AND CORE DATA CENTER EDGE
Nexus 7000 F3 10GE
F3 48-port 10G Module N7K-F348XP-25
Comprehensive F3 Module Portfolio
Nexus 7000/7700 Series - 10, 40 and 100GE
Over 13,000 F3 Modules Shipped!
Nexus 5000 Series Innovation
High-performance connectivity for EoR and ToR, LAN and SAN deployments
High 40G/10G density, 100G uplinks; Unified Ports, deep buffers, VXLAN
Non-disruptive migration to 40G with BiDi optics: high cost savings
Nexus 5000 1/10/40/100G
Advanced analytics toolkit with buffer and latency monitoring
Single point of management with FEX architecture
Nexus 56128P • 2 RU • Up to 96 ports 10G Ethernet/FCoE (Unified Ports on modules) • 8 ports 40G Ethernet/FCoE
Nexus 5672UP • 1 RU • 48 ports 10G Ethernet (16 ports unified) • 6 ports 40G Ethernet/FCoE
NEXUS 5600 SERIES
Cisco Nexus 5648Q High Density 40GE in a Compact Form Factor
Deployment Flexibility
• EoR or MoR FEX aggregation: supports Nexus 2200/2300 FEX switches
• HPC/HFT: low-latency 40GE (~1 µs)
• LAN and SAN convergence: FCoE enables LAN and SAN network convergence

Rich Data Analytics
• Microburst monitoring for congestion mediation
• SPAN-on-Latency to identify congested flows
• SPAN-on-Drop for identifying congestion points
• Network latency measurements using the IEEE 1588 standard

2 RU, 24 ports 40G QSFP Ethernet/FCoE • 2 GEMs for an additional 24 x 40G ports • Larger buffers • Four 1100W PSUs (N+N) + 3 fans (N+1) • Port-side intake and exhaust airflow
• Up to 192 x 1/10G or 48 x 40G Ethernet/FCoE (with GEMs)
Nexus 5648Q
12 Port 40G Ethernet/FCoE GEM
✓ Supported on all Nexus 5500 Series chassis
✓ Only 4x 10G mode supported
4-port QSFP+ GEM: each QSFP+ can support 4x 10G ports
Nexus 5500 QSFP+ GEM (4p QSFP+)
Nexus 3000 High-density, ultra-low-latency switching
Low Power Consumption, Low Latency High Performance, High Port Density
Flexible, Programmable VXLAN Ready
TOR, MSDC, and Fabric Leaf
Nexus 3000
Ultra Low Latency High-performance trading workloads
Robust NX-OS support
Nexus 3000 Family Enhancements Feature, Power and Price Optimized
HPC HFT MSDC
Nexus 3548-X
• New CPU and ASICs • Lower power consumption (~25%) • Multicast NAT: simplifies co-location integration • Latency monitoring: FIFO traffic visibility and troubleshooting
Industry's Lowest-Latency Switch
Nexus 3132Q-X
• Lower Power consumption (~ 15%) • Option for 4 x 10GE SFP ports • Same port density, tables, memory & feature set • Cost optimized (12% lower)
CISCO ALGORITHM BOOST
TECHNOLOGY
Up to 32 x 40GE QSFP+ ports
Or 31 x 40GE and 4 x10GE ports
Nexus 2000: Architectural Flexibility with Lower TCO
Architecture-agnostic solution: ToR, EoR, MoR, DFA, and ACI
Choice of parent switch: Nexus 9000, 7000, 6000, 5000 all support FEX!
Multiple connectivity options for the LAN & SAN: 1G/10G/40G and Unified Ports with FC/FCoE
Simplified management at scale: add FEXs without adding management complexity
$ Multi-million savings for 40G using existing cables + BiDi optics; non-disruptive migration to 40G
Nexus 2000 1/10/40G
Nexus 21xx/22xx
1st Generation 2nd Generation
Nexus 2300
Nexus 2300 Platform Fabric Extender Next Generation of the Cisco Nexus 2000
20 Million+ Ports Shipped 400,000+ Chassis Shipped 10,000+ Customers Building on Nexus 2000 FEX Family Success
Single Point of Management • Scalability • ACL Classifications and QoS • FCoE
Cisco Nexus® 2200 Platform
Nexus 2348TQ: 48 x 10GBase-T + 6 x 40G uplinks
Nexus 2348UPQ: 48 x 10G + 6 x 40GE uplinks, Unified Port capable
Nexus 2332TQ: 32 x 10GBase-T + 4 x 40G uplinks
• Higher Density with Native 40GE Uplinks
• Larger buffers to absorb traffic bursts
• Unified Port Capable (UP models)
• Lower Power - 10% more efficient
• Intra-rack Forwarding Capable – Reduce uplink traffic
• Supported on Nexus 5000, 6000 today
• Nexus 7000/9000 support – June/July 2015
NEW
Aug 2014 Nov 2014 March 2015
NEXUS USE CASE SUMMARY

Use Case                   Server Connectivity  FEX to Parent  Parent Uplink to Next Layer  FEX Model
10G ToR                    10G                  n/a            10G/40G                      n/a
1G FEX Agg                 1G                   10G            10G/40G                      N2K 2248
10G FEX Agg (10G Uplink)   10G                  10G            10G/40G                      N2K 2232
10G FEX Agg (40G Uplink)   10G                  40G            40G/100G                     N2K 2248PQ
MDS 9000: Multiprotocol Storage Networking
MDS 9000 FC/FCOE/FCIP/FICON
Optimized physical and virtual resources Reduced operating costs Decreased capital expenditures
Simplified management
Multi-Protocol Flexibility
Reliable end-to-end connectivity
Integration with Industry leading cloud solutions
Enterprise class features across the portfolio
CONSISTENT AND SIMPLIFIED Features, Management, and Programmability
Cisco Multi-Protocol Architecture – SAN, LAN, and Compute COMPUTE
Cisco UCS C-Series Rack Servers
Cisco UCS B-Series Blade Servers
Cisco UCS Fabric Interconnects:
Cisco UCS 6248UP
Cisco UCS 6296UP
LAN / SAN
Cisco Nexus 9000 Cisco Nexus 7000
Cisco Nexus 6000
Cisco Nexus 5600
Cisco Nexus 5500
Cisco Nexus 3000
Cisco Nexus 2000
SAN
Cisco MDS 9500 Cisco MDS 9222i
Cisco MDS 9148
Cisco MDS 9710
Cisco MDS 48x16G Line-Rate FC Module
Cisco MDS 9250i
10+ Years of Proven NX-OS Operating System Cisco Prime Data Center Network Manager (DCNM)
Cisco MDS 48x10G Line-Rate FCoE Module
Cisco MDS 9706
Cisco MDS 9148S
Innovation
Platforms • MDS 9148S • MDS 9706 • MDS 9700 FCoE
Enabling Cloud-Scale Deployments • Increased scale for SAN • SAN overlay on Ethernet Fabrics • Migration of Massive Amounts of Data
Simplifying SAN Management • Hardware-based congestion control • Fabric Automation • Extensive monitoring and visibility
Driving Innovations for the Next Decade with a complete 16G Portfolio Deploy Small, Medium, Large SANs with Cisco MDS 9000 Family
Continued Innovations Over the Last Decade
Comprehensive Security
Virtual SAN (VSAN)
Integrated SAN Extension for DC/BR
Performance and Density
Single LAN/SAN
Management
FCoE Inter-VSAN
Routing
Network Diagnostics and Troubleshooting Tools
Integrated Multi-Protocol FC, FICON, iSCSI and FCIP
Industry-Leading FC Performance, Reliability
40 G FCoE
Unified Port
2002
2013
Industry Leading
Performance
1.5-Tbps/Slot 384 Line-Rate 16G FC Ports
STORAGE DIRECTOR N+1 Fabric
INDUSTRY’S MOST RELIABLE
WITH MULTI-PROTOCOL CONNECTIVITY
UNMATCHED FLEXIBILITY
INDUSTRY’S HIGHEST PERFORMANCE AND CAPACITY
Cisco MDS 9710 Multilayer Director Investment Protection for the Next Decade
Multi-Protocol Storage Networking
14 RU
• Up to 8 Line Cards • Up to 6 Fabric Modules • Dual Supervisors
3x THE PERFORMANCE OF ANY COMPACT DIRECTOR
INDUSTRY'S MOST RELIABLE COMPACT DIRECTOR
• And 15x the performance of the current MDS 9506 director
• Grow without forklift – investment protection for future
1.5 Tbps/slot Switching Capacity
• Preserve IT operations and Knowledge – ease of migration with NX-OS and DCNM
• Eliminate loss of bandwidth: N+1 fabric redundancy
• Eliminate downtime: In-Service Software Upgrade, dual redundant supervisors, redundant power supplies/fans
• Maintain performance: reduced failure domains
Evolves with Your Business for the Next Decade
9RU
Cisco MDS 9706 Multilayer Director Extending MDS 9710 Director Qualities to a Smaller Form Factor
Front-Back Airflow
Scale up to 192 Line Rate Ports – 16G FC or 10G FCoE
High-Performance, Easy to Deploy, Enterprise-class Fabric Switch
Cisco MDS 9148S Fabric Switch
VERSATILE
• Line-rate 16/8/4/2G FC ports
• Industry-leading port range: start with a 12-port base, scale up with a 12-port license, or choose the full 48-port option

EASY TO USE
• Automated provisioning
• Quick configuration wizard
• Same OS and management across the industry's broadest SAN portfolio

ENTERPRISE-CLASS
• Non-disruptive software upgrades
• Up to 32 Virtual SANs (VSANs)
• Inter-VSAN Routing (IVR), QoS, PortChannels, N-Port ID Virtualization (NPIV), N-Port Virtualization (NPV), comprehensive security
• Hardware-based slow-drain detection and recovery
Dual power supplies and fans for enterprise-class availability
48x16G FC line-rate performance; expand from 12 to 48 ports in 12-port increments
1 RU
9250i Multiservice Fabric Switch
One SAN Appliance, Multiple Use Cases
• Business continuity/disaster recovery: FCIP SAN extension between the production DC and the disaster recovery DC over the IP WAN
• Data migration: migrate data between heterogeneous storage arrays (FC SAN to FC SAN)
• FC SAN gateway: connect a converged FCoE fabric to an FC SAN
Ports: 16G FC/FICON (40 ports), 1/10G FCIP/iSCSI (2 ports), 10GE FCoE (8 ports)
End to End FC and FCoE Portfolio
48x16G Line-Rate Module
MDS 9710
MDS 9250i: Storage Services
48x10G Line-Rate FCoE Module
MDS 9148S 48x16G Line-Rate Switch
Nexus 6004 96x40GE Line-rate FCoE
Nexus 5672UP: 48x10GE (16 Ports Unified) 6x40GE FCoE/Eth
Nexus 56128: 96x10GE FCoE/Eth, 8x40GE FCoE/Eth, up to 48 Unified Ports
Nexus 7700 48x10G Line-Rate FCoE/Eth Module
Nexus 7718
Nexus 7710 10G/40G Line-rate FCoE support on Nexus 7700 24 x40G Module
Q1CY13 Q2CY13 Q3CY13 Q4CY13 Q1CY14 Q2CY14 Q3CY14 Q4CY14 Q1CY15
Key Targeted Capabilities Being Introduced Across the Portfolio
MDS
MDS 9706 MDS 9396
96 x16G Line-Rate Switch
Nexus 2348 UPQ FEX (FCoE: FCS, FC )
Nexus 5624Q: 24 Port 40G FCoE
Nexus 5648Q: 48 Port 40G FCoE
Nexus 5672UP-16G: 48 Port 10G FCoE 6 x 40G FCoE 24 16G UP Ports
Nexus 5696Q 96x40GE Line-rate FCoE 8G FC-48Ports, Future 160
Multi-Hop FCoE with Separate LAN and SAN Cores Introducing Industry’s Highest-Density FCoE Module on a FC Director
Dedicated Storage Core FCoE-only
Dedicated Ethernet Core Nexus Directors
Converged Link
Dedicated FCoE
Ethernet
Ethernet Ubiquity and Cost-Advantage
Higher Speed ISLs Available Sooner
Same Management Model as FC – Separate LAN and SAN
Nexus 2300 Nexus 2300
LAN Converged Access
Nexus Fixed or Directors
MDS 48x10G FCoE Module
FCoE-Only Dedicated Storage Core
MDS 9700 Series
Data Center Interconnect Optimized Work Load Mobility
Multi-DC Networking Elements
• LAN extension: OTV – location of compute resources is transparent to the user
• VM-awareness: DFA, ACI
• IP mobility: LISP
• Multi-tenancy/segmentation: Segment IDs in VXLAN, LISP, FabricPath, and OTV
• Storage solutions & partners: FCIP, I/O Acceleration; EMC, NetApp
• Network services elasticity: ACE, GSS, ASA, VSG
Nexus 7000: Optimizing Inter-Data Center Solutions

MPLS on the Nexus 7000: L2 and L3 VPNs at 10/40/100GE
Benefits: • EoMPLS • VPLS • LDP Graceful Restart • MPLS/VPLS on F3* • MPLS TE over GRE Tunnel
Nexus 7000 OTV Extend VLANs Across DCs
Benefits: • OTV on F3 modules • VLAN translation • F3 OTV IP tunnel depolarization • Selective unicast flooding • Scale: 1,500 VLANs (up from 256), 100% more MACs • Convergence improvements
Nexus 7000 LISP Global IP Address Portability
Benefits: • LISP multihop support: more flexible deployment models • Seamless workload mobility between DC and cloud • Direct path; connections maintained during move • No routing re-convergence, no DNS updates • Transparent to the hosts and users (* NX-OS 7.2)
Ethernet LAN Extension over any Network § Works over dark fiber, MPLS, or IP network § Multi-data center scalability
Simplified Configuration & Operation § Seamless overlay - No network re-design § Single touch site configuration
High Resiliency § Failure domain isolation § Seamless Multi-homing
Maximizes available bandwidth § Automated multi-pathing § Optimal multicast replication
Overlay Transport Virtualization (OTV) Simplifying Data Center Interconnect (DCI)
Many physical sites - One logical Data Center
Any Workload, Anytime, Anywhere: unleashing the full potential of compute virtualization
Layer 2 Ethernet Extension
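Conceptually, OTV is "MAC routing": MAC reachability is advertised by a control protocol (IS-IS in the actual implementation) instead of being learned by flooding, which is what isolates each site's failure domain. A toy Python sketch of the idea; the data structures and addresses are illustrative, not Cisco's implementation:

```python
# MAC -> remote site's IP address: the "MAC routing" table that the
# OTV control plane builds instead of flood-and-learn.
mac_routes = {}

def advertise(mac, site_ip):
    """Control-plane learning: a remote edge device advertises a MAC."""
    mac_routes[mac] = site_ip

def forward(frame):
    """Encapsulate known destinations over IP; unknown unicast is NOT
    flooded across the overlay, so failure domains stay isolated."""
    dst = frame["dst_mac"]
    if dst in mac_routes:
        return {"outer_dst": mac_routes[dst], "inner": frame}
    return None

advertise("00:11:22:33:44:55", "203.0.113.7")
pkt = forward({"dst_mac": "00:11:22:33:44:55"})
assert pkt["outer_dst"] == "203.0.113.7"
assert forward({"dst_mac": "ff:ff:00:00:00:01"}) is None
```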
Before: every router carries both topology routes and end-point routes (the same full table replicated everywhere).

Prefix          Next-hop
189.16.17.89    171.68.226.120
22.78.190.64    171.68.226.121
172.16.19.90    171.68.226.120
192.58.28.128   171.68.228.121

After: end-point routes are consolidated into the LISP database, a flexible distributed prefix-to-RLOC mapping, leaving routers with a reduced set of topology routes.
What LISP Provides
1. Topology-independent addressing
2. On-demand route lookup
3. Map and encapsulate

What LISP Is
• Mobility → IP prefix and address-family portability
• Scalability → on-demand routing
• Security → tenant-ID-based segmentation

IP address = Location + Identity; LISP decouples Identity from Location
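The three-step model can be sketched in a few lines. This is an illustrative toy, not the protocol: real LISP (RFC 6830) uses longest-prefix matching, Map-Request/Map-Reply messages, and cache entries with TTLs; the prefixes and locators below are only examples.

```python
# Hypothetical mapping system: EID prefix -> RLOC (a locator in the core).
MAP_SERVER = {
    "10.1.0.0/16": "171.68.226.120",   # site A's locator
    "10.2.0.0/16": "171.68.226.121",   # site B's locator
}

map_cache = {}   # populated on demand, never pre-installed

def lookup_rloc(eid_prefix):
    """On-demand route lookup: consult the local map-cache first, fall
    back to the mapping system (the Map-Request/Map-Reply exchange)."""
    if eid_prefix not in map_cache:
        map_cache[eid_prefix] = MAP_SERVER[eid_prefix]
    return map_cache[eid_prefix]

def encapsulate(packet, eid_prefix):
    """Map and encapsulate: wrap the packet in an outer header addressed
    to the RLOC, so only topology-independent locators appear in the core."""
    return {"outer_dst": lookup_rloc(eid_prefix), "inner": packet}

frame = encapsulate({"dst": "10.2.3.4"}, "10.2.0.0/16")
assert frame["outer_dst"] == "171.68.226.121"
```

When a workload moves, only its prefix-to-RLOC mapping changes; the identity (EID) stays constant, which is the mobility property the slides describe.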
Locator-ID Separation Protocol (LISP) A Next Generation Routing Architecture
Nexus 7000 LISP Global IP Address Portability
DC 1 VLAN1
DC 2 VLAN2
DC 3 VLAN3
LISP Route Server
User
x.x.x.x y.y.y.y z.z.z.z
10.10.10.2
Features • IP address portability across subnets • Auto detection and re-route of traffic/session • Highly scalable technology
Benefits • Seamless workload mobility between DC and cloud • Direct Path, connections maintained during move • No routing re-convergence, no DNS updates required • Transparent to the hosts and users
Internet/Private
IP core
Device IPv4 or IPv6 Address Represents Identity and Location
Today’s IP Behavior Loc/ID “Overloaded” Semantic
10.1.0.1 When the Device Moves, It Gets a New IPv4 or IPv6 Address for Its New Identity and Location
20.2.0.9
Device IPv4 or IPv6 Address Represents
Identity Only.
When the Device Moves, Keeps Its IPv4 or IPv6 Address. It Has the Same Identity
LISP Behavior Loc/ID “Split” IP core
1.1.1.1 2.2.2.2
Only the Location Changes
10.1.0.1
10.1.0.1 Its Location Is Here!
Locator-ID Separation Protocol (LISP): What do we mean by "Location" and "Identity"?
Unified Fabric: Evolutionary Approach Why VXLAN?
Customer Needs → VXLAN Provides
• Multi-tenancy with scale (above 4K): traffic & address isolation, scale up to 16M segments
• Extend Layer 2 across Layer 3: Layer 2 networks cross Layer 3 boundaries
• VM mobility: seamless VM mobility
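The 16M-segment figure follows directly from the header format: the VXLAN Network Identifier (VNI) is 24 bits, versus the 12-bit VLAN ID. A small sketch of the 8-byte VXLAN header per RFC 7348:

```python
import struct

VNI_BITS = 24
MAX_SEGMENTS = 2 ** VNI_BITS      # 16,777,216 segments vs. ~4K VLANs

def vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348): a flags word with the
    I bit set (0x08 in the first byte), then the 24-bit VNI shifted
    left past a reserved trailing byte."""
    assert 0 <= vni < MAX_SEGMENTS
    return struct.pack("!II", 0x08000000, vni << 8)

hdr = vxlan_header(5000)
assert len(hdr) == 8
# The original Ethernet frame follows this header inside a UDP/IP
# packet, which is how a Layer 2 segment crosses Layer 3 boundaries.
```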
Handling Workloads in the Data Center
• Instantiate a virtual app: VXLAN
• Distribute a workload: OTV
• Move a workload: LISP
Intelligent Traffic Director Simplified Architecture
Cisco Intelligent Traffic Director (ITD): Delivering Multi-Terabit Load-balancing
Cisco ITD provides the Industry’s most scalable Layer 4 load distribution solution!
• ASIC based multi-terabit load balancing at line rate (10/40/100G)
• Supported on every Nexus 7000/7700 port
• Load balance traffic to a group of servers or appliances.
• Capability to create clusters of devices such as firewalls, intrusion prevention systems (IPSs), web application firewalls, and Hadoop clusters
• Performs health monitoring and automatic failure handling
• Order of magnitude reduction in configuration and ease of deployment
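Conceptually, ITD's distribution works like bucket hashing: traffic is partitioned into buckets by a hash of the flow, each bucket is redirected to one node, and health monitoring removes failed nodes from rotation. A toy Python model of that behavior; the bucket count, addresses, and hash are illustrative, and the real feature does this in the switch ASIC via ACL/PBR redirects, not software:

```python
import ipaddress

nodes = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # e.g. a firewall cluster
healthy = {n: True for n in nodes}
NUM_BUCKETS = 16

def pick_node(src_ip):
    """Hash the source IP into a bucket, then map the bucket onto the
    currently healthy nodes (health monitoring + automatic failover)."""
    bucket = int(ipaddress.ip_address(src_ip)) % NUM_BUCKETS
    candidates = [n for n in nodes if healthy[n]]
    return candidates[bucket % len(candidates)]

assert pick_node("192.0.2.10") in nodes
healthy["10.0.0.11"] = False      # probe failure detected on one node
assert all(pick_node(f"192.0.2.{i}") != "10.0.0.11" for i in range(1, 50))
```

Because the hash is deterministic, a given client keeps hitting the same node until a failure forces redistribution, which is the flow-affinity behavior a Layer 4 load distributor needs.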
ITD
Po-5  Po-6  Po-7
Redirect
Clients
ACL to select traffic
Select the traffic destined to VIP
Load balance
ITD Deployment example
ITD
Po-5  Po-6  Po-7  Po-8
Redirect
Clients
ACL to select traffic
Select the traffic destined to VIP
Load balance
Note: the devices don’t have to be directly connected to N7k
ITD (Intelligent Traffic Director) Use Cases: Enabling Scalable and Highly Available Data Centers
Application/Services scaling
Multi-Tbps Scale
VIP based L3/L4 Server Load-Balancing
Redirect traffic to web cache, video cache, WAE, etc.
Create Multi-Tbps Firewall
Significant CAPEX and OPEX reduction
Investment protection : Supported on all LCs and Sups on both N7000 and N7700
Remote Integrated Service Engine (RISE) Simplified Architecture
Cisco Remote Integrated Service Engine (RISE)
Benefits:
• Enhanced application availability via simplified provisioning and efficient manageability
• Data-path optimization: ADC offload, low-latency policy engine
• Integrated multi-tenancy support: clustering with flexibility and scalability

RISE Overview:
• Logical integration of a service appliance with Nexus 7000 and 7700 platforms
• Enables staging to streamline initial deployment of the service appliance
• Allows ongoing configuration updates to drive flows to and from the service appliance
• Allows data-path acceleration and increased performance
• Integrated with N7K VDC architecture
Physical Topology vs. Logical RISE Topology (control plane)
Challenge: services and switching are deployed independently, which increases the complexity of deploying and maintaining networks
Ease of Management
Remote Integrated Service Engine (RISE) Enabling Tightly Integrated Data Center Services
Simplified Out-of-Box Experience
Reducing Initial Deployment of NS by 4x (30 to 8 steps)
Auto-PBR: simplifies one-arm mode configuration
Push VIP Availability into Routing Layer
Significant OPEX reduction
Internet
Seamless Nexus Integration Enables the Nexus 7000 to Direct Application Traffic
Simplifying the Out of Box Experience
Minutes for initial configuration of an ADC with RISE vs. manual configuration (console config, management config, licensing, web GUI config, data network config):
• ADC configured manually (e.g., F5 3600, ACE NG): 30 steps
• ADC with RISE on Nexus 7000: 8 steps
• Configure your RISE-enabled ADC in less than 2 minutes
Reduce deployment time & complexity with fewer steps & points of contact
Programmability and Automation Reduce Opex
Open and Modular: Leverage open-source components • Open boot loader, HAL, BSP • Independent delivery of applications • Standard Linux tooling for delivery & installation
Programmability: Model-driven REST API • Python bindings • OpenFlow • Agents (Chef, Puppet)
3rd-Party Apps: Linux environment • Integrated secure container • Standard Linux APIs • Cisco APIs for advanced functionality
Architecture Goals
Campus Core
• When requirements fit – Primarily for 10G Density and HA
• Key Requirements: 10G Scalability, ISSU/HA, 40/100G.
• Many DC features apply to the Campus
• 10G in the campus aggregation/core is becoming more common.
• Bandwidth and Performance requirements – N7K might be the only way to accommodate!
Distribution
Core
Nexus 7000 Series So what about the Campus?
NOTE: The Catalyst 6500 remains the primary Campus platform for Core and Distribution.
One size doesn’t fit all
• Commonly, there is a single core network for both Campus and Data Center
• For many commercial customers, the DC is the core
• For larger networks, there are two separate cores
• Use N7K in either core when requirements demand and features are met
Data Center
Campus Distribution Blocks
Core
View of Campus and Data Center
Nexus 7000 Campus Design Considerations Using Virtual Device Contexts in the Campus Environment
Objective: consolidate vertical infrastructure that delivers orthogonal roles into the same administrative or operational domain.

Benefits:
• Reduced power and space requirements
• Reduced OpEx: maximize density of the platform
• Agile provisioning of resources between VDCs
• Logical design enables migration to physical separation in the future

Considerations:
• Number of VDCs (4 default / up to 8)
• Use a firewall between Campus and Data Center VDCs
DC Access / Aggregation
Campus Core VDC
DC Core VDC
FW between VDCs
Campus Network
UCS INVICTA
Response Time: second (1) → millisecond (0.001) → microsecond (0.000001) → nanosecond (1e-09)
Flash Memory provides a Faster Time Zone for Applications
Slow Zone
Fast Zone
HDD
Flash
CPU
Trade-offs are Complex & Inefficient
Reference Architecture for 1,000 Desktops
1,000 persistent desktops will require: <10 TB of capacity, ~80K backend IOPS
41 15K HDDs
25 7.2K HDDs
3 Flash Drives
TOTAL IOPS: 114,950 TOTAL CAPACITY: 63.2 TB
3 types of drives, 3 types of RAID
Faster. Simpler.
Traditional (114,950 IOPS, 63.2 TB) vs. UCS Invicta (155,000 IOPS, 64 TB**)
**Effective Capacity
UCS Invicta All Flash Storage
Cisco UCS with UCS Invicta Series: Faster Applications = Faster Business Operations
Analytics & Intelligence
Batch Processing
Database Loads
Email OLTP
Image & Media Applications
Virtual Desktops
The Highest Performing Workload Engines On UCS Invicta
Workload Acceleration
Fast I/O
High Bandwidth
Low Latency
Data Reduction
Eliminate Redundant Data
Efficient Storage Utilization
Data Center Efficiency Reduce Energy Consumption
Reduce Floor Space Consumption
Reduce Management Overhead
The UCS Invicta Conquers Three Business Objectives
UCS Invicta Appliance – Primary Advantages & Use cases
Up to 1.2 Million IOPS** Up to 7.2 GBps** Bandwidth Up to 144 TB Raw
UCS Invicta Appliance use cases: VDI (non-persistent), OLAP (2 MB – 5 TB), SOD/EOD reporting
Data Optimization
Multiple Workloads
Tuning-Free Performance
210,000 IOPS* 1.2 GBps Bandwidth Up to 24 TB Raw
*Read IOPS **refer to earlier slide “A Note on Numbers”
Invicta OS Eliminates Trade-Offs
Data Persistence
Fastest Performance Highest Protection
Write Protection Buffer
Block Translation Layer
RAID Layer
Flash Media
Invicta OS: Designed to Drive High Performance from Flash Media
1. Protect • Store in Write Buffer
2. Organize • Create Write Blocks
3. Optimize • Write Aligned for Flash Media & RAID Protection
Invicta OS Optimizing Flash for Faster Writes & Higher Endurance
4. Optimize Writes • Write speeds are symmetric to read speeds
5. Virtual Garbage Collection • Evaluated and Managed by the Invicta OS
6. Virtual Garbage Collection • Blocks are invalidated in large chunks to speed up drive level garbage collection
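The three-step write path above can be modeled as a buffer that only ever releases full, erase-block-aligned write blocks. A toy sketch, not Invicta's actual code; the block size and data shapes are illustrative:

```python
ERASE_BLOCK = 4          # toy size; real NAND erase blocks hold many pages
write_buffer = []        # step 1, Protect: incoming writes land here first

def accept_write(chunk):
    """A write is acknowledged as soon as it is safely buffered."""
    write_buffer.append(chunk)

def flush():
    """Steps 2-3, Organize and Optimize: pack buffered writes into full,
    erase-block-aligned write blocks; partial blocks wait for more data."""
    blocks = []
    while len(write_buffer) >= ERASE_BLOCK:
        blocks.append([write_buffer.pop(0) for _ in range(ERASE_BLOCK)])
    return blocks

for i in range(6):
    accept_write(f"page-{i}")
full_blocks = flush()
assert len(full_blocks) == 1     # one aligned write block released
assert len(write_buffer) == 2    # the remainder stays protected in the buffer
```

Releasing only aligned, full blocks is what makes write speeds symmetric to reads and keeps drive-level garbage collection cheap.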
Invicta OS (dedupe): Designed to Drive High Performance from Flash Media
1. Protect • Store in Write Buffer
2. Pattern Match & Organize • Deduplicate 4K Blocks • Create Write Blocks
3. Optimize • Write Aligned for Flash Media & RAID Protection
Invicta OS (dedupe) Optimizing Flash for Faster Writes & Higher Endurance
4. Optimize Writes • Write speeds are symmetric to read speeds
5. Virtual Garbage Collection • Evaluated and Managed by the Invicta OS
6. Virtual Garbage Collection • Blocks are invalidated in large chunks to speed up drive level garbage collection
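The dedupe variant adds one step: fingerprint each 4K block before organizing writes, so a duplicate block costs only an index entry. An illustrative sketch using SHA-256 as the pattern-match fingerprint; the actual Invicta OS algorithm is not public:

```python
import hashlib

BLOCK = 4096
store = {}    # fingerprint -> unique 4K block (written to flash once)
index = []    # logical sequence of fingerprints (the write-block layout)

def write_dedup(data):
    """Step 2, Pattern Match & Organize: fingerprint each 4K block; a
    duplicate costs only an index entry, never another flash write."""
    for off in range(0, len(data), BLOCK):
        chunk = data[off:off + BLOCK]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)
        index.append(fp)

write_dedup(b"A" * BLOCK * 3)                # three identical 4K blocks
assert len(index) == 3 and len(store) == 1   # logically 3 blocks, stored once
```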
Media Optimization: flash cell vs. mechanical hard drive; meets or exceeds the life of spinning media
The challenges with UNMANAGED NAND
Endurance SLC NAND @ 2Xnm – 30,000 cycles
MLC NAND @ 2Xnm – 3,000 cycles
TLC NAND @ 2Xnm – 1000-1500 cycles
Program/Erase: pages are independently programmable, but erasure is per block
Time to erase a block is measured in milliseconds, not microseconds
Cost $/GB compared to 7200 RPM media
Invicta OS NAND management
Treats NAND Flash like NAND - not like disk
• Proprietary write logging layer ensures data integrity in the face of power loss • Implements a SYSTEM wide log structured indirection layer
• NEVER writes less than an entire Erase block • Smaller writes are padded to the Erase Block boundary • Writes are acknowledged to initiator immediately after being recorded into Nonvolatile memory • Leverages multi-core high frequency X86 cores w/GBs of memory
Data integrity layer provides both positional validation and traditional data validation upon read
• Media checksums alone fail to protect positional integrity • Granular recovery allows for individual RAID stripe repair
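The positional-validation point can be illustrated by checksumming the block's address together with its data, so a read detects both corruption and a block returned from the wrong place. A toy sketch; the layout and digest choice are illustrative, not Invicta's on-media format:

```python
import hashlib
import struct

def seal(lba, data):
    """The checksum covers (position, data), so the stored block carries
    proof of both its contents and where it belongs."""
    digest = hashlib.sha256(struct.pack("!Q", lba) + data).digest()
    return digest + data

def read_verify(lba, sealed):
    """On read, recompute over the *expected* position: this catches
    corruption AND a block handed back from the wrong location, which a
    media checksum alone would miss."""
    digest, data = sealed[:32], sealed[32:]
    if digest != hashlib.sha256(struct.pack("!Q", lba) + data).digest():
        raise IOError("data or positional integrity failure")
    return data

blk = seal(42, b"payload")
assert read_verify(42, blk) == b"payload"
```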
Cisco Services and Our Partners We Accelerate, Optimize and Sustain Success
Optimization Services Allow you to Optimize and Sustain your Advanced Technologies
Workshops Give you the FRAMEWORK to Accelerate the Adoption of Advanced Technologies
Maximize ROI Faster!
Advanced Services Provides subject matter expertise to Design and Deploy Advanced Technologies
Q&A