RES346: Software Defined Network and Cloud Computing
Part 2: Use Cases: Google WAN, OpenStack, NFV
Steven Eychenne, IBM
System Engineer, IBM SoftLayer
Yves Eychenne, IBM
IBM Telecom Sector, Technical Leader, Europe
1
Agenda
Introduction and review of Part 1
Use Case 1: Google B4 and Andromeda projects
Use Case 2: Microsoft Azure
Use Case 3: Enterprise Cloud Data Center: IBM
OpenStack & SDN
Use Case 4: Network Function Virtualization: a case specific to the Telecom industry, the Network Cloud Data Center
Standard Organizations & Key vendors
Conclusion
2
Introduction
In part 1 of Cloud, SDN & OpenFlow, we studied the concept of network virtualization, Software Defined Networks and associated standards such as OpenFlow and controllers like OpenDaylight.
In Part 2, we will study examples of Software Defined Networks
Google WAN network B4 project and Andromeda project as the first very large scale SDN systems in operation to support a large OTT cloud
Microsoft Azure SDN project
Enterprise Cloud Data Center: IBM OpenStack & SDN
(Cloud workload deployment & operation) as a practical tooling approach to leverage SDN inside a Private Enterprise Cloud.
Network Function Virtualization: Telecom operators are planning to virtualize important network functions such as the Evolved Packet Core (EPC), the IP Multimedia Subsystem (IMS), Content Delivery Networks (CDN), etc.
Main vendors and open source projects
3
A quick reminder on Cloud & Network
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, storage, applications, services) that can be rapidly provisioned and rapidly released with minimal management effort and minimal service provider interaction.
Source: P. Mell, T. Grance, The NIST definition of cloud computing,
Recommendations of the National Institute of Standards and
Technology Special Publication 800-145, National Institute of
Standards and Technology, 2009
4
Why the Cloud requires a virtualized network
After computing and storage, the network is the last resource
to be virtualized both in the cloud and in the data center
The network should be virtualized within the data center,
between the data centers and across the service provider
access.
The Network management must be automated
5
Networks are the new virtualization battleground
Server Virtualization: efficiency (consolidation), multi-tenancy (isolation), flexibility (scaling, migration), hardware independence (emulation). A hypervisor runs VMs over a server resource pool (x86, Power, etc.).
Storage Virtualization: efficiency (thin provisioning), multi-tenancy (isolation), flexibility (scaling, mapping). In-band virtualization runs over a storage resource pool (storage controllers, etc.).
Network Virtualization: efficiency (multiplexing), multi-tenancy (isolation), flexibility (location independence), hardware independence (encapsulation). A network hypervisor presents virtual networks to the VMs. The Network Virtualization market is now forming.
6
Storage Virtualization in the cloud requires SDN
What are the key concepts of Cloud & Virtualization?
Compute & storage virtualization:
Decoupling of hardware and software
Abstraction of the hardware, but also of the workload
The end of hard-coded "configuration"
The management of resource pools: the concept of elasticity
A high degree of automation in deployment and management
Network virtualization:
Platform decoupled from infrastructure
A single router abstraction for the application / user
A network OS abstraction for the operator
Fully generalized virtualization of the forwarding plane
A single physical device shared by multiple virtual services
A single logical service run across a pool of physical devices
The end of hard-coded "configuration"
Automation and dynamic adaptation
7
Review of Part 1: The SDN or network controller is the hypervisor of network virtualization
(diagram) A Network Controller exposes northbound interfaces to applications and southbound interfaces to the infrastructure. Its building blocks:
1. SDN Controller, with southbound and northbound interfaces
2. Virtual Switches
3. OpenFlow Switches
4. Virtual Network Overlay
Virtual networks are built either with OpenFlow on SDN-enabled forwarding engines ("legacy" STP & fabric networks) or as an overlay across the virtual switches hosting the VMs.
8
Review of Part 1: Network Programmability Models
Software Defined Networking is a network programmability model which tries to extract the control plane out of the network equipment.
Virtual Overlay configures a network (control and data plane) on top of another network.
Other network programmability models configure only the control plane inside network equipment.
1. CLI: the control plane and data plane live inside the equipment, configured through CLI or Web HMI.
2. Programmable APIs: applications configure the equipment's control plane through APIs (SNMP, …).
3. Virtual Overlays: applications build a virtual control plane and a virtual data plane on top of the physical ones, using overlay protocols.
4. Hybrid "SDN": a controller programs the equipment through OpenFlow while the native control plane remains in place.
5. Classic SDN: the control plane is extracted from the equipment; applications drive a controller, which programs the data plane through OpenFlow.
SDN improves the programmability of the network as a whole.
9
Review of Part 1: OpenFlow
• The data path portion resides on the switch, while high-level routing decisions are
moved to a separate controller.
• The OpenFlow Switch and Controller communicate via the OpenFlow protocol,
which defines messages, such as packet-received, send-packet-out, modify-
forwarding-table, and get-stats.
• The data path of an OpenFlow switch is a flow-table abstraction; each flow table entry
contains a set of packet fields to match, and an action (such as send-out-port,
modify-field, or drop).
10
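The flow-table abstraction above can be sketched in a few lines of Python. The match fields and actions are illustrative, not the OpenFlow wire format; real entries also carry priorities and counters, and a table miss is normally reported to the controller as a packet-in rather than silently dropped.

```python
# Minimal sketch of a flow table: each entry matches packet header fields
# and yields an action; list order stands in for entry priority.

def lookup(flow_table, packet):
    """Return the action of the first entry whose match fields all equal
    the packet's header values; default to drop on a table miss."""
    for entry in flow_table:
        if all(packet.get(f) == v for f, v in entry["match"].items()):
            return entry["action"]
    return ("drop",)  # a real switch would send a packet-in to the controller

flow_table = [
    {"match": {"dst_ip": "10.0.0.2"}, "action": ("output", 2)},  # send-out-port
    {"match": {"tcp_dst": 22}, "action": ("drop",)},             # block SSH
]

print(lookup(flow_table, {"dst_ip": "10.0.0.2", "tcp_dst": 80}))  # ('output', 2)
```

The controller would install and modify such entries remotely (modify-forwarding-table) instead of the switch computing them itself.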
Software Defined Networking (SDN) moves the network control plane away from the switch to the software – for improved programmability, efficiency and extensibility as required by automated workload deployment
(diagram) Left: traditional switches, each with its own OS running routing, VPN and monitoring services, configured directly from the console (direct access to the physical network, console-based HW configuration). Right: a virtualized network in which a software-defined control plane (SDN controller & analytics) exposes Routing, Traffic Engineering, Flow Insertion and Firewall APIs northbound, and programs the switches through OpenFlow southbound; network services run above the SDN API.
Source: IBM
11
Review of Part1: SDN Benefits
Key SDN features
Control plane and data plane are separated
Controllers provide a logically centralized view of the network
Programmability of the network by network applications
Allowing
Control and automation of networks with software
Routers, switches, load balancers, ACLs/firewalls, NAT but also VMs and NIC
Fulfillment of specific QoS and security requirements
Advanced bandwidth management, DoS
Enabler for new storage architecture
Definition of data flows based on network bandwidth, path latency, and other criteria
Quick and global response to events
12
Agenda
Introduction and review of Part 1
Use Case 1: Google B4 and Andromeda projects
Use Case 2: Microsoft Azure
Use Case 3: Enterprise Cloud Data Center: IBM
OpenStack & SDN
Use Case 4: Network Function Virtualization: a case specific to the Telecom industry, the Network Cloud Data Center
Standard Organizations & Key vendors
Conclusion
13
Why a case study about Google?
First company to have used SDN at large scale
Google is a key actor in shaping the concept and the market
Google WAN has a very large network
40% of worldwide web traffic at peak moments Source: https://engineering.gosquared.com/googles-downtime-40-drop-in-traffic
25% of internet traffic in North America on average Source: http://www.wired.com/wiredenterprise/2013/07/google-internet-traffic/
14
Google WAN – B4 project
Internet-facing (user traffic): smooth, daytime traffic patterns
Datacenter traffic (internal): bursty traffic, bulk data transfers, all internal flows
SDN is used for WAN traffic between DC
WAN intensive Google applications: YouTube
Web Search
Google+
Photos and Hangouts
Maps
AppEngine
Android and Chrome updates
Source: Google
15
The cost factors of traditional networks
The lack of control and determinism in traditional protocols
causes worst case over-provisioning
More routers and more fibers are provisioned than really needed
The complexity of traditional control is implemented as a CPU-hungry application running inside the routers
Large configuration workload to deal with non-standard
configuration APIs
Human cost to configure and control the system
Configuration is node centric, lacking a global view
Cost of the monitoring layer
http://www.logicworks.net/blog/2012/12/over-provisioning-a-wake-up-call-from-the-cloud/
Source: Google
16
Google imperatives for moving toward a
Software Defined WAN (B4 project)
Hardware selection is based on necessary features
Software selection is based on Traffic Engineering requirements (not protocol requirements)
Separate monitoring, management, and operation from individual boxes to implement a Network System view
Logically centralized network control is more deterministic, more efficient, and provides a single source of truth
=> Google engineering has a long, successful tradition of implementing logically centralized systems: GFS (Google File System), MapReduce and BigTable follow that precept
At the time the approach of very large scale logically centralized systems was perceived as risky, but it works and scales
Source: Google
17
Google centralized Traffic Engineering
Global visibility of the network
Ability to program this global network:
In-house development of TE optimization algorithms
Better efficiency with explicit support of cost functions
Deterministic behavior, which helps reduce resource over-provisioning
The controller runs on COTS hardware (no longer on expensive CPUs inside the routers)
Algorithms used by Google for Traffic Engineering in SDN
Deadlock Resolution algorithm
Bin Packing algorithm
Scheduling / Calendaring algorithm
Google has also implemented Predictability, Adaptive TE Control Loops, Constraint Relaxation, Max-Min Fairness, …
18
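Of the algorithms listed, bin packing is the easiest to illustrate: fit flow demands onto as few fixed-capacity links as possible. The toy first-fit heuristic below is generic, not Google's actual TE optimizer, and the demand and capacity figures are made up.

```python
# Toy first-fit bin packing of flow demands (Gbps) onto links of fixed
# capacity. Illustrative only; real TE also handles paths, latency, fairness.

def first_fit(demands, capacity):
    links = []          # residual capacity per provisioned link
    placement = []      # link index chosen for each demand, in order
    for d in demands:
        for i, free in enumerate(links):
            if d <= free:               # first link with enough headroom
                links[i] -= d
                placement.append(i)
                break
        else:
            links.append(capacity - d)  # provision a new link
            placement.append(len(links) - 1)
    return placement, len(links)

placement, nlinks = first_fit([40, 70, 20, 50, 10], capacity=100)
print(nlinks)  # 3 links, although an optimal packing of 190 Gbps needs only 2
```

The gap between the heuristic result (3 links) and the optimum (2) is exactly the over-provisioning that smarter, globally-aware TE tries to recover.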
Production view: Google's experience feedback on its SDN Data Center WAN
Much faster iteration time
Deployed production-grade centralized traffic engineering in two months
Fewer devices to update
Simplified, high fidelity test environment
Much better testing ahead of rollout
Can emulate entire backbone in software
Smooth software upgrades and new feature introduction
Almost no packet loss and no capacity degradation
Most feature releases do not touch the switch
=> Most state no longer has to be carried by network protocols
Source: Google
19
Testing
Testing with powerful semantic tools
Virtual environment to experiment with and test the complete system end to end
Emphasis on consistency checks with the help of semantic tools
Validation checks can be performed after every update from the central server (in the virtual environment)
Testing a simulated full-size WAN
Control servers run real binaries, but switches are virtualized
Can simulate the entire backbone
Everything is real but the hardware
Can attach real monitoring and alerting servers
Generally high degree of stability
One outage from a software bug
One outage triggered by a bad configuration push
Source: Google
20
TE logic written in SDN
Source: Telecom Paris Tech RES 343 – IP TE
21
22
Problem with a centralized control plane:
When it fails…
« [AWS ELB] architecture is similar to the one described by SDN proponents, where control is centralized and orders are dispatched through a single controller. In the case of Amazon, that single controller is a shared queue. … While recovery time duration may be tied to the excessive time it took to fail over to a new primary data store, the excruciating slowness with which services were ultimately restored to customer’s customers was almost certainly due exclusively to the inability of the control plane to scale under [recovery] load.
… One can extend the issues with this SDN-like model for load balancing to the L2-3 network services SDN is designed to serve. The same issues with shared queues and a centralized model will be exposed in the event of a catastrophic failure. Excessive requests in the shared queue (or bus) result in the inability of the control plane to adequately scale to meet the demand experienced when the entire network must “come back online” after an outage. Even if the performance of an SDN is acceptable during normal operations, its ability to restore the network after a failure may not be.
Source: https://devcentral.f5.com/articles/amazon-outage-casts-a-shadow-on-sdn#.UnKNW3dE3Eg
23
Agenda
Introduction and review of Part 1
Use Case 1: Google B4 and Andromeda projects
Use Case 2: Microsoft Azure
Use Case 3: Enterprise Cloud Data Center: IBM
OpenStack & SDN
Use Case 4: Network Function Virtualization: a case specific to the Telecom industry, the Network Cloud Data Center
Standard Organizations & Key vendors
Conclusion
24
Google Andromeda project
25
Google Virtual Network for the data center (independent of the WAN SDN)
SDN controls or orchestrates the entire network: the softswitch and the HW components (ToR, packet processor, fabric)
SDN manages QoS, latency, load balancing and high availability
Ref Amin Vahdat Keynote at ONS 2014
SDN is a key enabler for storage
26
Concept of disaggregation of memory and storage
Google model: put the data where it has to be and bring the compute to it (do not bring the data to the compute node)
Google leverages "Amdahl's law": 1 Mbps of I/O for every 1 MHz of CPU
Amin Vahdat claims that only SDN will be able to handle that type of traffic
Ref Amin Vahdat Keynote at ONS 2014
Disaggregation: http://www.datacenterknowledge.com/archives/2013/10/18/storage-disaggregation-in-the-data-center/
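The quoted rule of thumb turns into simple arithmetic. A sketch, with an assumed server configuration (64 cores at 2.5 GHz; the rule and the figures are back-of-the-envelope, not a Google specification):

```python
# "1 Mbps of I/O per 1 MHz of CPU": estimate the network I/O a server
# needs to stay balanced under this rule of thumb.

def balanced_io_gbps(cores, clock_ghz, mbps_per_mhz=1.0):
    total_mhz = cores * clock_ghz * 1000   # aggregate CPU in MHz
    return total_mhz * mbps_per_mhz / 1000  # convert Mbps to Gbps

print(balanced_io_gbps(cores=64, clock_ghz=2.5))  # 160.0 Gbps
```

A single modern server already implies well beyond the 40 Gbps NICs of the era, which is why disaggregated storage traffic is presented as an SDN-scale problem.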
Andromeda Control stack
27
Logically centralized
The Control Stack manages through APIs:
Cloud load balancers
VM migration
Storage services
Bandwidth provisioning (LAN & WAN)
Network configurations
ACLs, firewalls, isolation
Ref Amin Vahdat Keynote at ONS 2014
Two use cases of Google Andromeda
28
Data path
Cloud load balancing
Ref Amin Vahdat Keynote at ONS 2014
Looking Forward
29
Google F1 (NewSQL), which runs Google's critical business: the Ad server.
Google created Big Data with MapReduce, and is now back to a more traditional DB with F1 NewSQL.
Spanner: the underlying storage, successor of Bigtable and Megastore.
=> Spanner/GFS will use the new SDN & storage architecture
Corbett, James C. et al., "Spanner: Google's Globally-Distributed Database", Proceedings of OSDI 2012 (Google), retrieved 18 September 2012.
Shute, Jeff et al. (2012), "F1 — the Fault-Tolerant Distributed RDBMS Supporting Google's Ad Business", SIGMOD (presentation), Google.
Agenda
Introduction and review of Part 1
Use Case 1: Google B4 and Andromeda projects
Use Case 2: Microsoft Azure
Use Case 3: Enterprise Cloud Data Center: IBM
OpenStack & SDN
Use Case 4: Network Function Virtualization: a case specific to the Telecom industry, the Network Cloud Data Center
Standard Organizations & Key vendors
Conclusion
30
SDN in Microsoft Azure
31
The problem
manage flow processing for millions of nodes (VMs)
Networks in 2013, and growing fast
The solution
Concept of host SDN
Use cases
Disaggregated Memory and Storage
Load balancing
Ref: Albert Greenberg, Keynote at ONS 2014
SDN:
Microsoft's way of separating the Control Plane and Data Plane
32
The data plane needs to apply per-flow policy to millions of VMs
How to apply billions of flow policies to packets?
Example of ACL policy:
Control plane: apply the tenant ACLs to these switches
Data plane: apply these ACLs to these flows
Scale to 100k VNETs with regional/local controllers
Ref: Albert Greenberg, Keynote at ONS 2014
Flow table within the host
Azure VM vSwitch
33
There is a controller per application (LB controller, network controller, VNET controller)
Each service (routing, NAT, ACLs) is configured by a policy within the vSwitch
One table per policy within the vSwitch
Each table is managed through an API by a controller
Act on packets; avoid tunnels
Ref: Albert Greenberg, Keynote at ONS 2014
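The "one table per policy" idea can be sketched as a chain of per-policy functions the vSwitch applies to each packet. The layer names, rules and addresses below are invented for illustration; they are not Azure's actual implementation.

```python
# Sketch: each policy layer (here ACL, then NAT) transforms or drops the
# packet before the next layer runs; each layer's rules would be installed
# through an API by its own controller.

def acl_layer(pkt):
    return None if pkt["dst_port"] == 23 else pkt    # block telnet

def nat_layer(pkt):
    if pkt["dst_ip"] == "52.0.0.1":                  # VIP -> tenant DIP
        pkt = dict(pkt, dst_ip="10.0.0.5")
    return pkt

def vswitch(pkt, layers=(acl_layer, nat_layer)):
    for layer in layers:
        pkt = layer(pkt)
        if pkt is None:
            return None                              # dropped
    return pkt

print(vswitch({"dst_ip": "52.0.0.1", "dst_port": 443}))
# {'dst_ip': '10.0.0.5', 'dst_port': 443}
```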
Use case: Load Balancer
34
The edge router addresses the Load Balancer VM (it is software) through the Virtual IP (VIP)
The LB VM selects a Dynamic IP (DIP) in the server pool and forwards to the vSwitch of the host, which performs the NAT at the host level
The LB is then stateless and scalable
Ref Albert Greenberg - Keynote at ONS 2014
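The LB can be stateless because the DIP choice is a pure function of the flow: hashing the 5-tuple always maps the same connection to the same server, so no per-connection table is needed. Pool addresses and tuple values below are made up for illustration.

```python
# Sketch of stateless VIP -> DIP selection by hashing the flow 5-tuple.
import hashlib

DIP_POOL = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # illustrative server pool

def pick_dip(src_ip, src_port, vip, dst_port, proto="tcp"):
    key = f"{src_ip}:{src_port}:{vip}:{dst_port}:{proto}".encode()
    h = int(hashlib.sha256(key).hexdigest(), 16)
    return DIP_POOL[h % len(DIP_POOL)]

# The same flow always hashes to the same server:
a = pick_dip("198.51.100.7", 40001, "52.0.0.1", 80)
b = pick_dip("198.51.100.7", 40001, "52.0.0.1", 80)
print(a == b)  # True
```

A real deployment also has to keep existing flows on their server when the pool changes (consistent hashing), which a plain modulo does not guarantee.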
Use Case: Scaling Storage with SDN
35
Azure uses Software Defined Storage on cheap commodity hardware
Scaling storage I/O needs network I/O
Azure uses Remote Direct Memory Access (RDMA), implemented on the Network Interface Controller (NIC) card
No CPU utilization at 40 Gbps; the NIC does the work
Remark: remember the presentation of VMDq, VMDc, VT-c and SR-IOV in Part 1.
Ref Albert Greenberg - Keynote at ONS 2014
Use case: Express Route service
36
Azure ExpressRoute enables you to create private connections between Azure datacenters and the customer infrastructure
ExpressRoute connections do not go over the public Internet; they offer more reliability, faster speeds, lower latencies and higher security, and can yield significant cost benefits
Connect to an Exchange Provider facility, or connect directly to Azure from your existing WAN (such as an MPLS VPN) provided by a network service provider
Automated by SDN and managed by the Azure host's vSwitch
Ref Albert Greenberg - Keynote at ONS 2014
Agenda
Introduction and review of Part 1
Use Case 1: Google B4 and Andromeda projects
Use Case 2: Microsoft Azure
Use Case 3: Enterprise Cloud Data Center: IBM
OpenStack & SDN
Use Case 4: Network Function Virtualization: a case specific to the Telecom industry, the Network Cloud Data Center
Standard Organizations & Key vendors
Conclusion
37
38
Why a cloud in an Enterprise Data Center?
1Based on internal IBM estimates; individual results will vary. 2, 3, 4Ibid
Speed and agility: improve time to market for new applications from weeks to minutes1; true automation across infrastructure and middleware that can provision images more quickly; a "self-service dashboard" that allows you to provision images in minutes and to maintain middleware
Security-rich: cost optimized; architected to meet your security requirements based on IBM's best practice security standards; secure within your firewalls
Highly available: in the event of failure, the virtual workload can be re-provisioned and made available with minimal human intervention and downtime
Simplified management: 60 to 80 percent reduction in IT staff workload2
Integrate and utilize existing technology and skills in the private cloud
Reduce deployment efforts by 60 to 80 percent3
Increase utilization by 50 percent4
Allocate IT budget to business growth
39
Examples of services that will be automated by a cloud
1VM: virtual machine
Build your private cloud by assembling a choice of modules
A standard set of private cloud modules: server, storage, network, cloud management software, OS & middleware patterns, OS & database patterns
Optional private cloud modular services (add to your standard set as needed): software patch management, remote cloud management, data protection, software security compliance, workload balancing, middleware and database automation capability
Benefits:
Start small and scale your cloud infrastructure as your business demands grow
Designed to be standardized, modular and scalable from many virtual machines
Assemble your cloud with flexible choices of modules
Speed and agility
OpenStack, the fastest growing open source-based software, is feature rich and easy to implement for public and private clouds.
Open source software for building private and public clouds: your applications run on the OpenStack cloud operating system (OpenStack Dashboard; Compute, Networking and Storage; OpenStack shared services; APIs) on standard hardware.
Growing community of technologists, developers, researchers, corporations and cloud computing specialists contributing highly functional, usable, quality code: over 2,600 people in April 2012; over 19,000 in September 2014.
Standardized application programming interfaces (APIs) to help simplify interoperability
Not locked into a single vendor's strategy and licensing
Support for more resource types than other open platforms
Free code under the Apache 2.0 license: anyone can run it, build on it and contribute to it
Source: openstack.org
OpenStack is one of the leading open source platforms for building a private cloud
(diagram) Workloads such as hosted enterprise, big data / analytics, and web and mobile run on the OpenStack cloud management software; users reach it over the Internet, through network gateways, via the OpenStack API or the OpenStack Horizon portal. Underneath sit compute (hypervisor), block storage, object storage and VPN, on servers, storage and network gateways.
http://open.ibmcloud.com/home/index.html
OpenStack provides an environment built on
current open standards.
Other OpenStack components included:
Heat, for pattern orchestration
Ceilometer, for reporting and metering
OpenStack experimental projects not available at this time: Trove, Sahara
(diagram) Horizon (Dashboard) provides the UI* for Nova (Compute), Glance (Image), Swift (Object storage), Cinder (Block storage, an optional service) and Neutron (Network); Keystone (Identity) provides authentication for all of them; Glance stores disk files in Swift; Cinder provides volumes for Nova; Neutron provides network connectivity for Nova.
*User interface (UI)
Software-defined networking (SDN) provides a multi-tier network architecture for an OpenStack cloud.
(diagram) An automation application drives, through OpenStack Neutron APIs, the SDN vSwitch running under the hypervisor on each server; the VMs1 attach to virtual networks 1 to 4 over the IP network, grouped into virtual domains 1 and 2.
1VM – Virtual machine
Requirement → Our offering provides
Scalable, security-rich multitenancy → 16,000 virtual networks, with an architectural limit of 16 million
Multiple IP addresses per vNIC → minimum of 8 IP addresses, to allow virtual machines (VMs) to have IP aliasing
Ability to create and delete multiple virtual networks that span a single or multiple remote servers quickly → creation and deletion of virtual networks spanning servers and even physical networks
Activate and reconfigure network services → application programming interface (API)-based access to Layer 3 network services for automated network manipulation
Create, configure and delete a subnet → automated network services manipulation through APIs
Create, configure and delete routing tables → built-in Layer 3 gateway with configurable routing
Create, configure and delete Network Address Translation → NAT and PAT functionality via external IP gateway and floating IP address support
Enable or disable inbound or outbound Internet access to specific virtual networks → external IP gateway provides access control
Attach multiple vNICs to a single VM → VM connectivity to multiple virtual networks
Network API access → access provided to both the OpenStack Neutron API and a RESTful API
SDN helps the OpenStack cloud solution meet many key requirements.
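The "automated network services manipulation through APIs" rows map to plain REST calls against Neutron. A sketch that builds the JSON request bodies for creating a network and a subnet; the names, network id and CIDR are examples, and actually sending them requires a Neutron endpoint and an auth token.

```python
# Request bodies for the Neutron Networking API v2.0:
# POST /v2.0/networks and POST /v2.0/subnets.
import json

def network_request(name):
    return {"network": {"name": name, "admin_state_up": True}}

def subnet_request(network_id, cidr):
    return {"subnet": {"network_id": network_id,
                       "ip_version": 4, "cidr": cidr}}

body = json.dumps(subnet_request("net-1234", "192.168.10.0/24"))
print(body)
```

An orchestrator (or the Horizon dashboard) issues exactly these calls, which is what makes "create and delete virtual networks quickly" an API operation rather than a box-by-box configuration task.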
Agenda
Introduction and review of Part 1
Use Case 1: Google B4 and Andromeda projects
Use Case 2: Microsoft Azure
Use Case 3: Enterprise Cloud Data Center: IBM
OpenStack & SDN
Use Case 4: Network Function Virtualization: a case specific to the Telecom industry, the Network Cloud Data Center
Standard Organizations & Key vendors
Conclusion
45
46
Deutsche Telekom TeraStream: a model for simplifying IP networks and improving time to market for the introduction of new, cost-effective services
The traditional technology model used to produce IP connectivity to other Internet service providers is seen as too costly to handle substantial IP traffic growth. That growth is expected to come from the massive deployment of new technologies in the access area, such as FTTx, usage of GPON, improvements in DSL capacity, or models providing GbE (Gigabit Ethernet) connectivity for end-users.
www.opennetsummit.org/pdf/F2013/presentations/axel_clauberg_hakan_millroth.pdf
Network Function Virtualization
Network Functions
Data Center Network Functions: firewall, Intrusion Protection System, load balancers
Carrier-grade Network Functions: IMS (VoIP), DPI, Session Border Control, Evolved Packet Core, GGSN, carrier-grade NAT, …
How to deploy a Virtualized Network Function (VNF) in a virtualized environment? In the cloud?
It can be run as a software on standard HW (and not an appliance or proprietary HW)
It will be deployed in a Network Cloud (a cloud with some extensions)
Network Functions can be seen as a type of Workload for automated deployment and elasticity management
47
SDN and NFV
Software Defined Networking (SDN) separates the control plane from the data plane within the network, allowing the intelligence and state of the network to be managed centrally while abstracting the complexity of the underlying physical network (for example, via OpenFlow).
Evolving network services from an appliance model to one that leverages virtual compute, storage, and networking, Network Functions Virtualization (NFV) promises to improve both the agility of when and where to run network functions as well as the cost structure.
New generation applications such as Hadoop, video delivery, and virtualized network functions require networks to be agile and to adapt flexibly to application requirements.
http://www.opendaylight.org/project/technical-overview
48
Network Functions moving to standard
hardware and the Cloud
(diagram) VAS, PE router, DPI, firewall, gateways (GGSN), signalling router, policy controller, video optimization, probe, EPC and RNC all move into the "Network" Cloud.
49
Network Function Virtualization and SDN
Source: Network Function Virtualization White Paper, AT&T, BT et. al., Oct 2012
Network Function Virtualization
– Network Function as Software only
implementation
– Leverages standard IT virtualization
technology to consolidate many network
equipment types into industry standard high
volume servers, switches and storage.
– Breaks function specific development cycle
Software Defined Networking
– provides network connectivity abstraction
and management
– Separates the control plane from the
network elements and data plane
50
NFV Use cases
ETSI Use Cases will be used in this section
Virtual CPE
Forwarding graphs
LTE Evolved Packet Core
Service Provider Home Service
Source: http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf
51
ETSI Use Case: Virtualization of the Customer Premise Equipment (CPE) for Enterprise
Benefits:
Faster time to install the services
Lower CAPEX (less expensive switches, centralized management)
Easier maintenance and release of new versions / new services
52
ETSI Use case: Virtualization of CSP Home
services
Benefits of virtualizing the Home Services in a Network Cloud:
Better cost by eliminating the expensive Residential Gateway and Set-top Box (one per TV)
Easier maintenance and deployment of new services
Easier support of multi screens and mobility services
53
ETSI Use Case: LTE – Evolved Packet Core
Benefits of virtualizing the EPC in a Network Cloud:
More cost-effective management of the "signaling storm" generated by tablets and smartphones
Accommodate a sudden large increase of a service, for instance voice calls after a natural disaster
Network topology can be reconfigured more dynamically to optimize performance
54
Role of Orchestration in NFV
Overall orchestrator level: equivalent to a Software Defined Environment (SDE) workload orchestrator
The VNF has some configuration functions and tools
The SDN controller provides abstraction of the connectivity between the VMs of a VNF and supports forwarding graphs between VNFs. SDN APIs enable the global orchestrator to perform end-to-end configuration and to manage resource re-allocation in real time (cloud elasticity).
55
AT&T Domain 2.0 project
56
www.ece.cmu.edu/~ece739/papers/att_whitepaper.pdf
Telefonica’s future proof network architecture
(diagram) COTS HW with OS + hypervisor runs at local PoPs and in regional data centres over an MPLS/SDN/Optical infrastructure, with HW and SW decoupling in both the service domain (CDN, video, IMS, P-CSCF, SDP, NGIN, M/SMSC, CSFB, …) and the network domain (EPC, BRAS, GGSN, PE, CG-NAT, DPI, DHCP, DNS, PCRF, UDB, SRVCC, security, …). The control plane can be centralised; the data plane must be distributed.
There will be two kinds of Virtualized Network Infrastructure: local PoPs and regional Data Centres
Network PoPs' and Data Centres' intra- and inter-communications will be critical to guarantee a differential e2e user experience
Source: "Network Virtualization: a journey from innovation to telco transformation", Enrique Blanco Nadales, Global CTO
Examples of NFV and SDN vendors in Telecom
(Vyatta)
Alcatel-Lucent: CloudBand
59
http://www.alcatel-lucent.com/solutions/cloudband
Agenda
Introduction and review of Part 1
Use Case 1: Google B4 and Andromeda projects
Use Case 2: Microsoft Azure
Use Case 3: Enterprise Cloud Data Center: IBM
OpenStack & SDN
Use Case 4: Network Function Virtualization: a case specific to the Telecom industry, the Network Cloud Data Center
Standard Organizations & Key vendors
Conclusion
60
Key Standard Organizations
Orchestration: OpenStack: http://www.openstack.org/
SDN: Open Networking Foundation: https://www.opennetworking.org/blog/
OpenDaylight SDN controller: https://wiki.opendaylight.org/view/Main_Page
Related: Open vSwitch project: http://openvswitch.org/
Network Function Virtualization: ETSI NFV Group: http://www.etsi.org/technologies-clusters/technologies/nfv
61
OpenStack is a global collaboration of developers & cloud computing technologists working to produce a ubiquitous Infrastructure as a Service (IaaS) open source cloud computing platform for public & private clouds.
The OpenStack Foundation
Platinum Sponsors Gold Sponsors
http://openstack.org
OpenStack Compute (core): provision and manage large networks of virtual machines
OpenStack Object Store (core): create petabytes of secure, reliable storage using standard HW
OpenStack Dashboard (core): enables administrators and users to access & provision cloud-based resources through a self-service portal
OpenStack Image Service (shared service): catalog and manage massive libraries of server images
OpenStack Identity (shared service): unified authentication across all OpenStack projects; integrates with existing authentication systems
Exponential growth in participation: from 150 orgs and 2,600 individuals to 850 orgs and 5,600+ individuals
Code available under the Apache 2.0 license. Design tenets: scale & elasticity, share nothing & distribute everything
62
What is Project OpenDaylight?
OpenDaylight is an open source project under the Linux Foundation with the mutual goal of furthering SDN adoption and innovation through the creation of a common, industry-supported framework.
Platinum, Gold and Silver members, as of June 25, 2013 and growing
(diagram) Architecture layers: network apps & orchestration; controller platform; physical & virtual network devices
OpenDaylight architecture diagram, summarized:
Network Apps & Orchestration: Management GUI/CLI, VTN Coordinator, OpenStack Neutron, DDoS Protection
OpenDaylight APIs (REST)
Controller Platform: Base Network Service Functions (Topology Mgr, Stats Mgr, Switch Mgr, Host Tracker, Shortest Path Forwarding), plus DOVE Mgr, VTN Manager, Affinity Service, LISP Service, OpenStack Service, Network Config
Service Abstraction Layer (SAL) (plug-in mgr., capability abstractions, flow programming, inventory, …)
Southbound protocol plug-ins: OpenFlow 1.0/1.3, NETCONF, BGP, LISP, SNMP, OVSDB, PCEP
Physical & Virtual Network Devices: OpenFlow-enabled devices, Open vSwitches, additional virtual & physical devices
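A hedged sketch of how an application might drive the controller through the OpenDaylight REST APIs above; the host, port, credentials, and the topology endpoint path below are assumptions and will vary by deployment:

```python
# Sketch: query an OpenDaylight controller's northbound REST API for the
# network topology. Host, port, credentials and the endpoint path are
# assumptions; adjust them to your deployment.
import base64
import json
import urllib.request

CONTROLLER = "http://controller.example.com:8080"    # hypothetical host
ENDPOINT = "/controller/nb/v2/topology/default"      # assumed API path

def build_request(user="admin", password="admin"):
    """Build an HTTP GET with basic-auth for the topology endpoint."""
    req = urllib.request.Request(CONTROLLER + ENDPOINT)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

def get_topology(user="admin", password="admin"):
    """Return the controller's network-wide topology view as a dict."""
    with urllib.request.urlopen(build_request(user, password)) as resp:
        return json.loads(resp.read())
```

The point of the sketch: applications program the network through an HTTP API instead of configuring devices box by box.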
63
Base Edition, Virtualization Edition, Service Provider Edition
http://www.opendaylight.org
Open Networking Foundation
Open Networking Foundation (ONF) is a user-driven organization dedicated to the promotion and adoption of Software-Defined Networking (SDN) through open standards development.
Their signature accomplishment to date has been the introduction of the OpenFlow™ Standard, which enables remote programming of the forwarding plane.
The OpenFlow Standard is the first SDN standard and a vital element of an open software-defined network architecture.
Over 90 companies are members of the Open Networking Foundation.
http://www.opennetworking.org
ONF is a user-driven organization which defines the OpenFlow standard
64
ETSI NFV ISG objective
“Leverage standard IT virtualisation technology to consolidate many network equipment types onto industry
standard high volume servers, switches, and storage”
NFV model vs. traditional network model (diagram summarized):
Traditional Network Model: each function runs on dedicated hardware appliances — DPI, BRAS, GGSN/SGSN, Session Border Controller, Firewall, CG-NAT, PE Router, STB
NFV Model: the same functions run as virtual appliances on standard high-volume servers, with orchestrated, automatic & remote install
Network Function Virtualization (NFV): initiative launched by 13 major telco operators on 23rd Oct 2012
AT&T, BT, CenturyLink, China Mobile, COLT, DT,
KDDI, NTT, Orange, Telecom Italia, Telefonica, Telstra
and Verizon
Implication for Industry
For the first time, the telecom industry is saying "we want off-the-shelf IT technology", not specialized equipment
Creates a new role of Network Integrator – Systems Integration / Program Management for the Network Cloud
Business Opportunities for Operators
Operators' network spend = 5x operators' IT spend
Transform cost of operations using common infrastructure
Reduce time to market by enabling new suppliers
Reduce time to market by simplifying acceptance tests
Business Risk for Operators
Impact on vendors and supply chain
Discipline required to manage multiple vendors using common infrastructure
End-to-end architecture and design
http://www.etsi.org/
65
Cisco ONE + Cisco Application Centric Infrastructure – an
evolutionary approach and end-to-end view
http://www.sdncentral.com/news/insieme-know-think-know/2013/10/
http://www.cisco.com/web/europe/ciscoconnect2013/pdf/SP_1__Network_Simplification.pdf
http://files.shareholder.com/downloads/CSCO/2773261670x0x671513/3e966c22-d020-46ee-95d1-ee7bea827939/CSCO_NFV_TechTalk_FINAL.pdf
66
Cisco provisions the network through
application policies managed in a central
SDN controller, the APIC
Juniper: also evolutionary and end-to-end
http://www.slideshare.net/junipernetworks/contrail-launch-capitalize-on-sdn-and-cloud-now
67
VMware NSX (Nicira)
VMware vSwitch; VMware bought Nicira,
now VMware NSX
http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf
http://cto.vmware.com/introducing-vmware-nsx-the-platform-for-network-virtualization/
68
Alcatel-Lucent
69
Agenda
Introduction and review of Part 1
Use Case 1: Google B4 and Andromeda projects
Use Case 2: Microsoft Azure
Use Case 3: Enterprise Cloud Data Center: IBM
OpenStack & SDN
Use Case 4: Network Function Virtualization: a specific
case for the Telecom industry, the Network Cloud Data
Center
Standard Organizations & Key vendors
Conclusion
70
What Is Software Defined Network (SDN)?
“…In the SDN architecture, the control and
data planes are decoupled, network
intelligence and state are logically centralized,
and the underlying network infrastructure is
abstracted from the applications…”
Source: www.opennetworking.org
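A toy sketch (purely illustrative, not any real controller) of the decoupling the definition above describes: a logically centralized control plane computes forwarding state from a network-wide view, while data-plane switches only apply pre-installed match-to-action entries:

```python
# Toy illustration (not a real controller) of control/data plane
# decoupling: the controller computes forwarding state centrally;
# switches only apply pre-installed match -> action entries.

class Switch:
    """Data plane: a bare match->action flow table, no routing logic."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}              # destination host -> output port

    def forward(self, dst):
        # Unknown destinations are punted to the controller.
        return self.flow_table.get(dst, "send-to-controller")

class Controller:
    """Control plane: logically centralized, programs every switch."""
    def __init__(self, switches):
        self.switches = switches          # name -> Switch

    def install_route(self, dst, hops):
        # hops: ordered (switch_name, output_port) pairs along the path
        for name, port in hops:
            self.switches[name].flow_table[dst] = port

s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller({"s1": s1, "s2": s2})
ctrl.install_route("h2", [("s1", 2), ("s2", 1)])
```

The switches never run a routing protocol themselves; all path intelligence lives in the controller, which is the architectural shift the quote describes.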
What is OpenStack?
Open-source software for building public
and private clouds; includes Compute (Nova),
Networking (Neutron, formerly Quantum) and
Storage (Cinder) services.
Source: www.openstack.org
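As a minimal sketch of OpenStack's self-service networking, the code below builds the "create network" call of the Neutron v2.0 REST API; the endpoint URL and token are placeholders (in a real cloud the token is obtained from the Identity service first):

```python
# Sketch: build the "create network" call of the OpenStack Networking
# (Neutron) v2.0 REST API. Endpoint URL and token are placeholders;
# a real client first gets a token from the Identity service.
import json
import urllib.request

NEUTRON = "http://neutron.example.com:9696"     # hypothetical endpoint
TOKEN = "REPLACE_WITH_KEYSTONE_TOKEN"           # placeholder credential

def network_request(name, admin_state_up=True):
    """Build POST /v2.0/networks with the standard request body."""
    body = json.dumps({"network": {"name": name,
                                   "admin_state_up": admin_state_up}})
    req = urllib.request.Request(NEUTRON + "/v2.0/networks",
                                 data=body.encode(), method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("X-Auth-Token", TOKEN)
    return req

def create_network(name):
    """Send the request; Neutron replies with the new network's details."""
    with urllib.request.urlopen(network_request(name)) as resp:
        return json.loads(resp.read())
```

This is exactly the kind of call the Dashboard or an orchestrator issues on a tenant's behalf when it provisions a virtual network.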
What is a Network Overlay?
Overlay networks are created on existing
network infrastructure (physical and/or virtual)
using a network protocol. Examples of overlay
network protocol are: GRE, VPLS, OTV, LISP
and VXLAN.
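Since VXLAN is one of the overlay protocols listed above, here is a minimal sketch of its 8-byte header (RFC 7348), which a tunnel endpoint prepends to the inner Ethernet frame before carrying it over UDP:

```python
# Sketch: the 8-byte VXLAN header (RFC 7348) a tunnel endpoint prepends
# to the inner Ethernet frame before sending it inside UDP (port 4789).
# Illustrative only; a real VTEP also builds the outer UDP/IP headers.
import struct

VXLAN_FLAG_VNI_VALID = 0x08   # 'I' bit: the VNI field carries a valid ID

def pack_vxlan_header(vni):
    """Pack flags + 24-bit VXLAN Network Identifier, network byte order."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Word 1: flags in the top byte, rest reserved (zero).
    # Word 2: VNI in the top 24 bits, low byte reserved (zero).
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

def unpack_vni(header):
    """Recover the VNI, checking the VNI-valid flag is set."""
    word1, word2 = struct.unpack("!II", header)
    if not (word1 >> 24) & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VNI-valid flag not set")
    return word2 >> 8
```

The 24-bit VNI is what lets an overlay carry ~16 million isolated virtual networks over one physical fabric, versus 4094 VLANs.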
What Is OpenFlow?
“…open standard that enables researchers to
run experimental protocols in campus
networks. Provides standard hook for
researchers to run experiments, without
exposing internal working of vendor devices…”
Note: OpenFlow is not mandatory for SDN
Note: Applicable to non-SDN networks, as well.
Note: Applicable to SDN and non-SDN networks
Note: SDN is not mandatory for network programmability
nor automation!
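As a wire-level illustration of the standard above: every OpenFlow message starts with a fixed 8-byte header, packed here for an OFPT_HELLO (the first message each side sends when the control channel opens), per the OpenFlow 1.0 wire format:

```python
# Sketch: the fixed 8-byte header every OpenFlow message begins with
# (version, type, length, transaction id), packed for OFPT_HELLO --
# the first message each side sends when the control channel opens.
# OpenFlow 1.0 wire format, network byte order.
import struct

OFP_VERSION_1_0 = 0x01
OFPT_HELLO = 0          # message type 0 in OpenFlow 1.0
OFP_HEADER_LEN = 8

def pack_ofp_header(msg_type, length, xid, version=OFP_VERSION_1_0):
    """uint8 version, uint8 type, uint16 length, uint32 xid."""
    return struct.pack("!BBHI", version, msg_type, length, xid)

def hello(xid=1):
    """An OFPT_HELLO is just the bare header, so length == 8."""
    return pack_ofp_header(OFPT_HELLO, OFP_HEADER_LEN, xid)

def unpack_ofp_header(data):
    """Return (version, type, length, xid) from a message's first bytes."""
    return struct.unpack("!BBHI", data[:OFP_HEADER_LEN])
```

The same 8-byte prefix fronts every flow-mod, packet-in, and stats message the controller and switch exchange.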
A Summary
71
Thank You
Yves EYCHENNE
Telecom Sector,
Industry Technical Leader, Europe
Steven EYCHENNE
Consultant in IS/IT Architecture
72
References for the Google use case
Sources for the Google use cases:
http://www.ietf.org/proceedings/84/slides/slides-84-sdnrg-4.pdf
http://www.opennetsummit.org/archives/apr12/hoelzle-tue-openflow.pdf
http://cseweb.ucsd.edu/~vahdat/papers/b4-sigcomm13.pdf
Recommended viewing on Youtube:
http://www.youtube.com/watch?v=n4gOZrUwWmc
http://www.youtube.com/watch?v=VLHJUfgxEO4
http://www.youtube.com/watch?v=ED51Ts4o3os
73
References for the Microsoft use case
Recommended viewing on Youtube:
http://www.youtube.com/watch?v=8Kyoj3bKepY&list=PLhigroIsbIud2X-3UpNqXFWXnnqlUZYr0
74
References for Network Function
Virtualization
NFV is managed by the ETSI ISG NFV group
http://portal.etsi.org/NFV/NFV_White_Paper2.pdf
ETSI Use Cases will be used in this section
http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf
OpenDaylight and NFV
http://www.opendaylight.org/project/technical-overview
Recommended viewing on Youtube
AT&T vision:
http://www.youtube.com/watch?v=tLshR-BkIas&list=PLhigroIsbIud2X-3UpNqXFWXnnqlUZYr0
Orange NFV vision: http://www.youtube.com/watch?v=PZE4xsbS8To
Axel Clauberg, Deutsche Telekom, TeraStream project
http://www.youtube.com/watch?v=L1VW6OzZEVk
75
Additional References Research Publications
Saurav Das, Yiannis Yiakoumis, Guru Parulkar, Nick McKeown, Preeti Singh, Daniel Getachew, Premal Dinesh Desai,
"Application-Aware Aggregation and Traffic Engineering in a Converged Packet-Circuit
Network", OFC/NFOEC 2011.
Technology News, Blogs, or Forums
SDN Central http://www.sdncentral.com/
Kate Greene, "TR10: Software-Defined Networking", MIT Technology Review, 10 Emerging
Technologies, March/April 2009
Stanford ONRC ON.LAB http://onrc.stanford.edu/
Videos and Open Networking Summit
Open Networking Summit, 2011, 2012, 2013, 2014
Martin Casado, "Origins and Evolution of OpenFlow/SDN", Nicira Networks
PDF Slides: http://opennetsummit.org/talks/casado-tue.pdf
Scott Shenker, "The Future of Networking, and the Past of Protocols", ICSI/Berkeley/ONF
PDF Slides: http://opennetsummit.org/talks/shenker-tue.pdf
Nick McKeown, "How SDN will Shape Networking", Stanford/ONF
PDF Slides: http://opennetsummit.org/talks/mckeown-tue.pdf
76