Converged Packet-Optical Software Defined Networking Tom Tofigh AT&T Marc De Leenheer ON.Lab
Outline
SDN for Service Providers Background
Use cases
Packet/Optical Use Case Problem statement and conceptual solution
Implementation
Demonstration
State of the Industry & Future Work
Unprecedented Traffic Growth
Orders of magnitude increase in users, devices, apps
Video and mobile traffic exploding
CAPEX continues to rise
Graph: traffic, operator cost, and revenues over time, from the “voice” era to the “data” era (Source: Accenture Analysis)
NEW SERVICES
IP Video: 79% of all IP traffic in 2018
AT&T spends $20 Billion per year on CAPEX
Service Provider Networks
2016 traffic = triple of 2011; more mobile devices than people
Explosive Growth
Scale Open Monetize
Reduce CAPEX and OPEX
Deliver new and customized services rapidly (DevOps model)
Bring in cloud-style agility, flexibility, scalability
Lower operational complexity, increase visibility
• Open APIs • Multi-vendor • Multi-technology • Open Source
Turning Growth into Opportunity
Diagram: legacy closed switch (merchant silicon, loader, OS, agent) vs. whitebox under an SDN network operating system running control, management, and config apps
Key Enabler: Software Defined Networking
Service Provider Networks WAN core backbone
Multi-Protocol Label Switching (MPLS) with Traffic Engineering (TE)
200-500 routers, 5-10K ports
Metro Networks Metro cores for access networks
10-50K routers, 2-3M ports
Cellular Access Networks LTE for a metro area
20-100K devices, 100K-100M ports
Wired access / aggregation Access network for homes; DSL/Cable
10-50K devices, 100K-1M ports
Diagram: future service provider network in which central offices and POPs are built like data centers, interconnected by core and metro packet-optical networks, with wired, wireless, and enterprise access at the edges
Service Provider Network of the Future
High Throughput:
~500K-1M path setups / second
~3-6M network state ops / second
High Volume:
~500GB-1TB of network state data
Difficult challenge!
ONOS
Apps
Global Network View / State
high throughput | low latency | consistency | high availability
SDN Control Plane: Key Performance Requirements
Scalability, High Availability & Performance
Northbound & Southbound Abstractions
Modularity
ONOS: SDN Network OS for Service Providers
NB – Application Intent Framework
Southbound Core API
Protocols
Adapters
Apps
ONOS Instance 1 … ONOS Instance N (each with its own protocols and adapters)
Distributed Core (performance, scale-out, availability, state management, notifications)
ONOS: Distributed Network OS
Distributed Core
Southbound
“Provision 10G path from Datacenter 1 to Datacenter 2 optimized for cost”
Intents translated and compiled into specific instructions for network devices.
Application Intent Framework: APIs, Policy Enforcement, Conflict resolution
Distributed Core
Southbound Core API
OpenFlow NETCONF Southbound Interface
Flexible and intuitive northbound abstraction and interface for a user or app to define what it needs without worrying about how.
Application Intent Framework
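How an intent like the one above might compile into device-level instructions can be pictured with a toy sketch. This is plain Python over a made-up five-node topology; ONOS itself implements intent compilation in Java with a far richer Intent/FlowRule API.

```python
import itertools
from dataclasses import dataclass

# Toy model of intent compilation. The topology, costs, and rule format
# are invented for illustration; ONOS implements this in Java.

@dataclass
class Intent:
    src: str
    dst: str
    bandwidth_gbps: int
    optimize: str  # only "cost" is handled in this sketch

# Directed links: (node, node) -> (cost, capacity in Gbps)
LINKS = {
    ("DC1", "R1"): (1, 100), ("R1", "R2"): (10, 100),
    ("R1", "R3"): (4, 40), ("R3", "R2"): (4, 40),
    ("R2", "DC2"): (1, 100),
}

def compile_intent(intent):
    """Translate a high-level intent into per-device forwarding rules."""
    nodes = {n for link in LINKS for n in link}
    best = None
    # Brute-force the cheapest loop-free path with enough capacity.
    for r in range(len(nodes)):
        for mid in itertools.permutations(nodes - {intent.src, intent.dst}, r):
            path = [intent.src, *mid, intent.dst]
            hops = list(zip(path, path[1:]))
            if all(h in LINKS and LINKS[h][1] >= intent.bandwidth_gbps
                   for h in hops):
                cost = sum(LINKS[h][0] for h in hops)
                if best is None or cost < best[0]:
                    best = (cost, hops)
    # One forwarding instruction per device along the chosen path.
    return [f"{a}: forward traffic for {intent.dst} -> {b}" for a, b in best[1]]

rules = compile_intent(Intent("DC1", "DC2", 10, "cost"))
```

The user asks for a 10G path optimized for cost and never mentions ports, wavelengths, or routers; the compiler picks the cheaper R1-R3-R2 detour and emits per-device instructions.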
SDN Use Cases for Service Providers Converged multi-layer packet/optical networks
Central Office Re-architected as a Data center (CORD)
Seamless SDN and IP peering with SDN-IP
Segment routing with SDN control
And many more
Mobile backhaul (IP RAN)
Network Functions as a Service (NFaaS)
IP multicast
…
Central Office Re-architected as a Data center
Metro Core Link
Access Link
Fabric
Spine Switches
Leaf Switches
vBNG
vCPE
vOLT
NFVI orchestration XOS
20K-100K subscribers/CO
Central Office Re-architected as Datacenter
DHCP
LDAP
RADIUS
Control
Data
ONT Simple Switch
Subscriber Home
PON OLT MACs
SDN Control Plane ONOS
Key components
• Commodity hardware
• SDN Control Plane (ONOS)
• NFVI Orchestration (XOS, OpenStack)
• Open leaf-spine fabric
• Simple on-prem CPE + vCPE
• Virtualized access (PON OLT MAC + vOLT)
• Virtualized functions
• Virtualized BNG
Applications
Local video streaming service at mobile edge
On-demand provisioning of vBBUs at cell sites near a big sports match
On-demand provisioning of a video caching application (VM) for the local video streaming service
Functions like DNS and DPI also need to be deployed locally for traffic classification
Other spectator traffic is treated as before and traverses to and from the centralized core
Local communications hosted by a distributed EPC
A virtualized EPC can also be deployed to host local and internal communications
Communication between security staff
Remote monitoring of security cameras
Centralized DC
SGW-D
DB
HSS
PGW-D
Local Traffic
Mobile CORD
GPON (Access), ROADM (Core)
Key Building Blocks to Measure: Access, Fabric, ROADMs and VNFs
PON OLT MACs
Analytics Platform
(XOS + Services)
Apps: Customer Care, Security, Diagnosis
ONOS + XOS
LOG
Analytic CORD
SDN Network
External Network
ONOS 1 ONOS 2
SDN-IP 1 SDN-IP 2
BGP speaker 1
BGP speaker 2
ONOS Control plane
BGP routes
ONOS intents
OpenFlow entries
Seamless Peering: SDN-IP
Outline
SDN for Service Providers Background
Use cases
Packet/Optical Use Case Problem statement and conceptual solution
Implementation
Demonstration
State of the Industry & Future Work
Problem Statement Today IP packet and transport networks are separate.
They are planned, designed and operated separately by different teams.
This leads to significant inefficiencies.
The result is under-utilized networks: extensive pre-planning, heavily over-provisioned for the worst case.
Much of the path planning in these networks is done off-line.
Given these considerations, WAN links are typically provisioned to 30-40% average utilization. This allows the network service provider to mask virtually all link or router failures from clients. Such overprovisioning delivers admirable reliability at the very real costs of 2-3x bandwidth over-provisioning and high-end routing gear. S. Jain, et al., “B4: Experience with a Globally-Deployed Software Defined WAN,” SIGCOMM 2013.
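The quoted utilization numbers translate directly into the over-provisioning factor mentioned in the same passage:

```python
# Over-provisioning factor implied by a target average utilization:
# at 30-40% utilization, each unit of carried traffic needs roughly
# 2.5-3.3x of provisioned capacity, consistent with the "2-3x" figure.
def overprovision_factor(avg_utilization):
    return 1.0 / avg_utilization

low = overprovision_factor(0.40)   # 2.5x at 40% utilization
high = overprovision_factor(0.30)  # ~3.3x at 30% utilization
```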
Diagram: IP routers R1-R7 (sites A-E) overlaid on an optical layer of ROADMs P1-P5, with a full mesh of MPLS logical tunnels at the IP layer; 100s of wavelengths per ROADM
Multi-Layer Network without Converged Control Plane
Diagram: the same IP-over-ROADM topology, with IP links of 100G and 50-100G riding the optical layer
Multi-Layer Network without Converged Control Plane
Peak rate provisioning is necessary in optical transport
Diagram: the same topology showing primary and protected light paths
Multi-Layer Network without Converged Control Plane
Multiple protection modes are applied (up to 4 times BW)
Static provisioning in transport networks
Optical circuit re-routed
BW Calendaring
1. Centralized control of packet and optical
2. Multi-layer optimization based on availability, economics and policies
Datacenter 1
Packet Network
Optical Network
Control Apps Mgmt Apps Config Apps
ONOS
Datacenter 2
Conceptual Solution: Multi-Layer SDN Control
Benefits of Converged Control Plane Much faster bandwidth provisioning
Drastically improve network utilization
Perform dynamic restorations in response to packet and transport network failures
Agile development and rapid deployment of new services
Implementation Code is king
Less is more
Vendor neutral
Scalable
Work focused on the three SDN layers
Data plane
Control plane
Applications
Implementation – Data plane
Packet switches today: control of the forwarding plane via OpenFlow
Open and standardized
Ideal scenario: ROADMs have a similar open and standard interface
Reality: many ROADMs use legacy protocols, such as TL1
Vendor-specific, proprietary
So we built an emulation platform (LINC-OE), in partnership with Infoblox
ROADM Emulation Basics Emulates optical layer topology from predefined table
Includes characteristics of optical cross connect and Packet to Optical Link Interface (Add/Drop)
Ports, links and switches are remotely reconfigurable by Mininet
Supports OpenFlow 1.3+ Optical Add/Drop match actions
Supports failure scenarios of links, ports, and ROADM
Work in progress
Emulates channel signal/power measurement
Regenerator support
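The "predefined table" design can be pictured with a small sketch. This is purely illustrative Python, not LINC-OE's actual (Erlang-based) configuration format; names and fields are invented.

```python
# Hypothetical sketch of building an emulated optical topology from a
# predefined table, with reconfigurable state and failure injection.

ROADM_TABLE = [
    # name, optical ports, add/drop (packet-facing) ports
    ("roadm1", [1, 2], [10]),
    ("roadm2", [1, 2], [10]),
]
OPTICAL_LINKS = [("roadm1", 1, "roadm2", 1)]  # (node, port, node, port)

def build_topology(roadms, links):
    """Instantiate emulated ROADMs and links from the predefined table."""
    devices = {name: {"optical": opt, "adddrop": ad, "up": True}
               for name, opt, ad in roadms}
    return devices, list(links)

def fail_device(devices, name):
    """Emulate a ROADM failure scenario."""
    devices[name]["up"] = False

devices, links = build_topology(ROADM_TABLE, OPTICAL_LINKS)
fail_device(devices, "roadm1")  # controller should now see the failure
```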
Diagram: ROADM modules with ADD/DROP ports and OPM
Forwarding Model for ROADMs
Match/action abstraction for ROADMs
ROADM has three functions: add, drop, and forward
Match is really about wavelength provisioning
Add
Match Packet port and traffic type
Action Transponder uses lambda and output to optical port
Forwarding
Match Optical port and lambda
Action Output to optical port (easy to extend when considering regenerators)
Drop
Match Optical port and lambda
Action Transponder uses lambda and output to packet port
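The add/forward/drop abstraction can be encoded as simple rule records. Field names here are illustrative, not the actual OpenFlow 1.3+ experimenter encoding:

```python
# Simplified encoding of the three ROADM rule types described above.
# Field names are invented; real rules use OpenFlow experimenter fields.

def add_rule(packet_port, lam, optical_port):
    """Add: match packet traffic, put it on a wavelength, send it out."""
    return {"match": {"in_port": packet_port, "type": "packet"},
            "action": {"set_lambda": lam, "output": optical_port}}

def forward_rule(in_port, lam, out_port):
    """Forward: match (optical port, lambda), switch to an optical port."""
    return {"match": {"in_port": in_port, "lambda": lam},
            "action": {"output": out_port}}

def drop_rule(optical_port, lam, packet_port):
    """Drop: match (optical port, lambda), hand packets back to the router."""
    return {"match": {"in_port": optical_port, "lambda": lam},
            "action": {"strip_lambda": True, "output": packet_port}}

# A lightpath on wavelength 66 across one intermediate ROADM:
flows = [add_rule(10, 66, 1), forward_rule(1, 66, 2), drop_rule(2, 66, 10)]
```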
IP Forwarding Constraints (encapsulation)
Match: tuple on port 3 → Action: encapsulate IP, DESTxxx → Match: IP, DESTxxx
Egress values: Capacity 10GbE; LAG No; Cost 10
IP Forwarding Constraints (decapsulation)
Match: IP, DESTxxx on port 3 → Action: pop IP packet → Match: tuple → Action: forward to port 12
Egress values: Capacity 10GbE; LAG No; Cost 10
Lambda Forwarding Constraints (pass-through)
Match: λ66 on port 2 → Action: forward to WDM port 6
Egress values: Capacity 100Gb; REGEN No; Cost 250; Loss 2; NF 4.25; PMDest 0.05; Dispest 25
Lambda Forwarding Constraints (add)
Match: TPND port 5 → Action: push OCH, λ66 → Match: λ66 → Action: forward to WDM port 1
Egress values: Capacity 100Gb; MUXPDR Yes; Cost 250; Loss 2.5; NF 4.75; PMDest 0.1; Dispest 40
Lambda Forwarding Constraints (drop)
Match: λ66 on port 3 → Action: forward to TPND port 3 → Match: λ66 on TPND port 3 → Action: pop OCH and egress
Egress values: Capacity 40Gb; LAG N/A; Cost 100
Forwarding Model for Packet and ROADM Layer
Transport Network Metering Model
Metering – Port: packets dropped, total packets, queue overflow status, queue overflow count
OAM – BFD: delay, jitter, loss
Metering – OTU-Frame/OCH/TPND: errored seconds, severely errored seconds
OAM – OTU-Frame/OCH/TPND: LOS, LOF
Protection – WDM: LOL, LBC
Note: items shown in red in the original figure are accessible only if a regenerator is used
Implementation – Control Plane Southbound protocol for ROADMs
ONF Optical Transport Working Group
OpenFlow 1.3+ experimenter messages
Southbound abstractions simplify adding new protocols
Converged topology
Control both packet and optical layers
Allows adding additional layers, e.g., OTN
Discovery
Automatic L3 topology discovery (LLDP)
Static configuration of L0 topology
L0 discovery work in progress
Control Apps Mgmt Apps Config Apps
SDN Network Operating System
Implementation – Control Plane Path calculation takes place on the multi-layer graph
Constraints and resource management
Wavelength continuity, bandwidth, latency, …
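Wavelength continuity (the same λ must be free on every hop of a lightpath, absent regenerators) is what distinguishes optical path computation from plain shortest-path routing. A minimal routing-and-wavelength-assignment sketch, with an assumed toy topology and free-wavelength sets:

```python
# Minimal routing-and-wavelength-assignment (RWA) sketch. The topology
# and free-wavelength sets below are made up for illustration.

FREE = {  # directed link -> set of currently free wavelengths
    ("A", "B"): {1, 2},
    ("B", "C"): {2, 3},
    ("A", "C"): {3},
}

def rwa(candidate_paths):
    """Return (path, wavelength) meeting wavelength continuity, or None.

    candidate_paths is assumed ordered by preference (e.g. by cost)."""
    for path in candidate_paths:
        hops = list(zip(path, path[1:]))
        # Wavelength continuity: the same lambda must be free on every hop.
        common = set.intersection(*(FREE[h] for h in hops))
        if common:
            return path, min(common)
    return None

# Prefer the two-hop path A-B-C: lambda 2 is free on both hops.
result = rwa([["A", "B", "C"], ["A", "C"]])
```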
Restoration
An optical link failure causes the packet links riding it to disappear
Packet-layer restoration is tried first
If unsuccessful, perform optical-layer restoration
Easily add multi-layer protection and restoration mechanisms
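The restoration ordering above can be sketched as a simple control flow. The reroute callbacks are hypothetical stand-ins for the controller's real path services:

```python
# Hedged sketch of multi-layer restoration ordering: on an optical
# failure, restore in the packet layer first, then fall back to
# setting up a new lightpath in the optical layer.

def restore(failed_optical_link, packet_reroute, optical_reroute):
    """Return which layer recovered the traffic, or 'unrecovered'."""
    if packet_reroute(failed_optical_link):    # spare IP capacity elsewhere?
        return "packet-layer restoration"
    if optical_reroute(failed_optical_link):   # new lightpath available?
        return "optical-layer restoration"
    return "unrecovered"

outcome = restore(("R1", "R2"),
                  packet_reroute=lambda link: False,  # no spare IP capacity
                  optical_reroute=lambda link: True)  # lightpath available
```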
Control Apps Mgmt Apps Config Apps
SDN Network Operating System
Implementation – Applications
Distributed Core
Southbound
Application Intent Framework: APIs, Policy Enforcement, Conflict resolution
Distributed Core
Southbound Core API
OpenFlow NETCONF Southbound Interface
Multi-Layer GUI
Bandwidth on Demand
Bandwidth Calendaring
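Bandwidth calendaring reduces to admission control over time intervals. A minimal sketch, assuming a single bottleneck link and integer time slots:

```python
# Minimal bandwidth-calendaring sketch for one link: admit a new
# reservation only if capacity holds over its whole time window.
# Single-link, integer-slot assumptions are for illustration only.

CAPACITY_GBPS = 100

def admits(reservations, start, end, gbps):
    """Check whether a new (start, end, gbps) reservation fits."""
    for t in range(start, end):
        used = sum(g for s, e, g in reservations if s <= t < e)
        if used + gbps > CAPACITY_GBPS:
            return False
    return True

reservations = [(0, 10, 60)]                  # 60G booked for t in [0, 10)
ok_late = admits(reservations, 10, 20, 80)    # fits after t=10
ok_overlap = admits(reservations, 5, 15, 80)  # 60+80 > 100 during overlap
```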
ONOS Multi-Layer Reference Platform LINC-OE
First open source L0 emulator
Based on OpenFlow 1.3+
Infoblox and ON.Lab
ONOS HA, high performance, open source network OS
ON.Lab and partners
Open source platform
Benefits Rapid prototyping, agile
Adherence to common interfaces
Scalability testing
Common optical control plane for interoperability between vendors
Demo GUI
DO try this at home
https://wiki.onosproject.org/display/ONOS/Packet+Optical
Lessons Learned
Feasibility Converged packet optical control plane is possible
Offers scalability, HA, and performance
Benefits Significant improvement in network utilization
Drastic reduction in CAPEX and OPEX
DevOps model for transport networks
Deeper insights: OpenFlow packet switches are commercially available, but L0 vendors resist opening up
Abstractions are critical: intent framework, multi-layer graph
Outline
SDN for Service Providers Background
Use cases
Packet/Optical Use Case Problem statement and conceptual solution
Implementation
Demonstration
State of the Industry & Future Work
Diagram: legacy closed switch (merchant silicon, loader, OS, agent) vs. whitebox under an SDN network operating system running control, management, and config apps
Vertical Integration: Packet Switches
Packet switches are undergoing this transformation right now!
Vertical Integration: ROADMs
Diagram: ROADM internals: WSS, fiber mux/demux, transponders, add/drop and pass-through paths
Diagram: whitebox stack (hardware, HAL, OS, agent) vs. legacy, under an SDN network operating system with control, management, and config apps
SDN Network Operating System
Control and config of WSS and transponders
ROADM controller
Signal Monitoring and Adjustment Metering and alarms
Why is this de-aggregation not happening?
What Makes Optical Devices Different? “We need a specialized mix of L0, L1, and L2 functions”
“Physical impairments are too complex to monitor and manage externally”
“Our analog transmission system is custom designed”
“It’s impossible to control all configuration and forwarding at scale”
“You can’t achieve sub-50ms failovers”
And so on…
None of this is fundamental!
De-aggregation is inevitable
Open Optical Hardware Hardware Abstraction Layer
Hides optical impairments, thermal instability, power balancing, etc.
Can autonomously fix problems or perform maintenance
OS
Server-like environment for switches
Manages various hardware sensors
Boot loader, tools, switch management, etc.
Agent
Open and standardized interface for forwarding, configuration, and observability
Inviting all vendors to join us!
Disaggregated ROADMs
Figure: disaggregated ROADM building blocks: wavelength switching (ROADM), transponders, and pluggable optics, each exposed through open APIs
Martin Birk, Mehran Esfandiari, Kathy Tse, “AT&T's direction towards a Whitebox ROADM,” ONS 2015.
Diagram: leaf and spine fabric of whitebox switches; WOLUs attached to WROADMs, each group under an L0 device controller, all coordinated by an SDN controller
WOLU – White box Optical Line Unit WROADM – White box ROADM
Architecture
Optical circuit re-routed
Optical Network with disaggregated ROADMS
Control Apps
Mgmt Apps
Config Apps
ONOS
Conceptual Solution: Disaggregated SDN Controlled Transponders and ROADMS
Diagram: leaf-spine fabrics at two sites exchanging channelized IP traffic over metro transponders on wavelength λ2, under ONOS with control, management, and config apps
It's Happening Now!
CALIENT’s S-series Optical Circuit Switch (OCS)
Optimized for Datacenters and Software Defined Networks
Up to 320 User Ports – 640 Single Mode Fiber Terminations
320x320, 160x160 options
10, 40, 100 Gbit/s per port and beyond
25 ms typical setup time (<50 ms max)
Less than 50 ns latency
Less than 3.5 dB Insertion Loss
Ultra low power (<45 W), small size (7RU)
TL1, SNMP, OpenFlow APIs
Vendor-Specific Domains: a second problem with the optical transport industry
Transport networks suffer from vendor lock-in
Domain consists of equipment from a single vendor
Each domain requires vendor-specific NMS/EMS
No data plane interoperability
Profound impact on service providers
Complex management & orchestration tools
Problem identification & resolution
Expensive
Is this fundamental?
Vendor A, Vendor B
Service Provider Orchestration
NMS B NMS A
Why Vendor-Specific Domains? “We monitor network state and performance in NMS”
“We built intelligent alarm and event handling between boxes and NMS”
“Our NMS is the only system that can control our transmission”
“Failures are handled faster and more efficiently by our NMS”
And so on…
None of this is fundamental!
Vendor-specific domains will disappear
Vendor-Neutral Domains
Vendor B
Vendor A
Control Apps Mgmt Apps Config Apps
SDN Network Operating System
Data plane interoperability is key
Common southbound abstractions
Vendors can still innovate and diversify their hardware
Proof of Concept
On-Demand Optical Bandwidth
Advanced Multi-Layer Restoration
Fujitsu TL1 provider, OF provider; Ciena TL1 provider; Huawei PCEP provider
Menlo Park, CA; Richardson, TX; Plano, TX; Ottawa, Ontario
ONOS
Optical layer
IP layer
Domain A, Domain B, Domain C
Multi-Layer Network Optimization
Proof of Concept Open Networking Summit 2015 https://youtu.be/gsfYwJyYfI4
Looking to work with vendors that offer OpenFlow/NETCONF support
Something better than proprietary TL1
Experiments on data plane interoperability
Drive adoption of DevOps model for transport networks
Hardware deployments
Future Work
Diagram: three phases (Cap, Grow, Drain), each showing ONOS running segment routing control for the MPLS network and optical control for the optical network, over whitebox switches
Cap MPLS backbone – don’t grow the legacy MPLS backbone of proprietary routers
Grow packet edge and optical core with SDN control plane and make the best use of packet-optical technologies
Drain the MPLS backbone as most traffic transitions to new packet edge and optical core network
New SDN Edge
Route big flows to the optical network
Cap-Grow-Drain Strategy
Cap-Grow-Drain = bring SDN to the backbone without a forklift upgrade
Summary
Demonstrated a converged packet/optical control plane for service providers
Scalability, HA, performance
Potential to dramatically decrease CAPEX & OPEX
Innovative services using DevOps model
Need the right abstractions
Intent framework
Multi-layer graph
Call to Action Open and standardize hardware interfaces
Achieve control plane interoperability
Eliminate vendor-specific domains
Achieve L0 data plane interoperability
Remove vendor-specific approaches (EMS & NMS)
If existing vendors don’t take action, others will step in!
Join the journey @ onosproject.org
Software-defined Transformation of Service Provider Networks