Delivering Carrier Grade OCP for Virtualized Data Centers
Presented by :
Today’s Presenters
Moderator
Simon Stanley, Analyst at Large, Heavy Reading
Suzanne Kelliher, DCEngine Product Line, [email protected]
Agenda
• Trends in the adoption of SDN and NFV in telecom networks
• Shift to OCP for compute, storage, and networking
• OCP in carrier-grade central office environments
• Industry initiatives including CORD
• Real-world examples
• Q&A
Source: Cisco VNI Global IP Traffic Forecast, 2015–2020
• Demand growth & shifts:
– Rapidly growing user traffic
– Migration to cloud services
• Result: demand for
– Network flexibility and agility
– Cost-effective data centers
• Technology developments:
– Network virtualization
– Comprehensive open source solutions
– Increasing processor performance
– 25GE server connections
Market Drivers
Network Virtualization
[Diagram: virtual applications and a vSwitch running on server hardware and white box switches, delivering network services]
• Enables rapid service provisioning and lower capex/opex
• Software Defined Networking (SDN)
– Abstracts and automates provisioning
– Separates data and control planes
• Network Functions Virtualization (NFV)
– Virtual Network Functions (VNFs) on common hardware
• Expanding VNF ecosystem
• Open initiatives
– OPNFV, OpenFlow, etc.
SDN Adoption Across the Network
[Diagram: service orchestration with open APIs spanning SDN domains across the network: transport networks (transport SDN control and applications), data center (DC SDN, NFV), central office (CORD, vOLT, vEPC), and premises (SD-WAN, vCPE), over a layered network of VNFs, packet layers 2-3, OTN layer 1, and DWDM layer 0]
CORD (Central Office Re‐architected as a Datacenter)
• Launched by AT&T and Open Networking Lab (ON.Lab) in 2015 to re‐design the traditional telco central office
• Primary goals: lower central office capex and opex and enable rapid services creation.
• Uses SDN, NFV and cloud software technologies combined with commodity (COTS) hardware
• First PoC was a residential implementation
• Three focus areas
– Residential (R-CORD)
– Mobile (M-CORD)
– Enterprise (E-CORD)
vOLT & vBNG Implementation in NFV Infrastructure
Source: AT&T, Carrier SDN Networks, May 2015
• Formed in 2011 by Facebook to share specifications and best practices for creating the most energy efficient and economical data centers
• Multiple projects receiving and approving contributed designs
– Initial focus on Open Rack, Open CloudServer, and compatible server and storage sleds for data centers
– Scope expanded to include HPC, networking, telco, and other areas
• CG-OpenRack-19 specification accepted by the OCP Telco project in December 2016: http://www.opencompute.org/wiki/Telcos#Specs_and_Designs
Open Compute Project (OCP)
Developing Data Center Architecture
• 2004: 2U server, PCI-X NIC, 2x GbE, 20 servers/rack
• 2008: 1U server, PCIe NIC, 4x GbE, 30-40 servers/rack
• 2012: 10U blade/modular server, 10 GbE, 40-64 servers/rack
• 2016: OCP with 2U sleds, 2x10GbE or 2x25GbE, 68 servers/rack
• 2020?: next-generation rack scale with 1U sleds, 2x25GbE or 2x50GbE, 136 servers/rack
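Taken at face value, the figures in this timeline imply that aggregate per-rack network capacity grows by more than two orders of magnitude. A quick back-of-the-envelope check, using the server counts and link speeds above (taking the upper end of each range and the faster NIC option for 2016 and 2020):

```python
# Aggregate data-plane bandwidth per rack, computed from the server
# counts and NIC link speeds in the timeline above (upper end of each
# range; faster NIC option where two are listed).
generations = {
    # year: (servers per rack, links per server, Gb/s per link)
    2004: (20, 2, 1),
    2008: (40, 4, 1),
    2012: (64, 1, 10),
    2016: (68, 2, 25),
    2020: (136, 2, 50),
}

bandwidth = {
    year: servers * links * gbps
    for year, (servers, links, gbps) in generations.items()
}

for year, gb in bandwidth.items():
    print(f"{year}: {gb:,} Gb/s aggregate per rack")
```

From 40 Gb/s per rack in 2004 to 13,600 Gb/s per rack projected for 2020, a 340x increase.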
What proportion of your systems use NEBS/ETSI compatible platforms?
• All of them: 22%
• Most of them: 36%
• About half of them: 13%
• Less than half of them
• None of them: 6%
Source: Heavy Reading COTS, ATCA & White Box User Survey: 2016 Market Outlook
Carrier Grade Telecom Platforms
2002: Proprietary → 2010: ATCA → 2016: OCP
• SDN and NFV being implemented across the network
• Network virtualization requires a new class of platform for cloud and telecom applications
• OCP and other rack-scale solutions are already dominating large data center deployments
• CG-OpenRack-19 extends OCP benefits to applications requiring carrier grade telco platforms
Heavy Reading Conclusions
• What is your key requirement for open compute hardware?
– Hardware management tools
– Performance
– Storage density
– Cost
– Software integration
Audience Poll #1
Radisys Corporation
OCP's Impact
“The move to Open Compute has saved Rackspace around US$40 million for virtual cloud server platforms. This number continues to grow.”
– Aaron Sullivan, Sr. Director, Rackspace, in “The Next Platform,” February 19, 2016
“We’re becoming a software and networking company. As a result, our central offices are going to look a lot more like data centers as we evolve our networking infrastructure. The Open Compute Project is innovating rapidly in this area, and we’re thrilled to be collaborating with the community of engineers and developers that are driving the evolution. In joining OCP, AT&T’s stated goal is to virtualize and control more than 75% of the network using software architecture by 2020 via the use of cloud, SDN and NFV technologies.”
– Andre Fuetsch, SVP Architecture & Design, AT&T
“Thanks to OCP and related efficiency work, we have saved US$2 billion in infrastructure costs over the course of the last 3 years, and in the last year alone we’ve saved enough energy to power nearly 80,000 homes. The carbon savings associated with that energy efficiency are equivalent to taking 95,000 cars off the road.”
– Jay Parikh, VP of Engineering, Facebook (referring to 2011-2015)
CG-OpenRack-19 Specification
CG-OpenRack-19 Achieves OCP Acceptance
OCP + CG-OpenRack-19 Specification = OCP-ACCEPTED™
• OCP: a collaborative community focused on redesigning hardware to efficiently support the growing demands of compute infrastructure
• Radisys contributed the Carrier Grade Open Rack concept to OCP in the form of a rack + sled interop specification
• DCEngine is a commercially available product family compliant with this specification
CG-OpenRack-19 High Level Architecture
• Standard 19” rack
• Up to 38RU of vendor-defined ½- or full-shelf sleds (usable compute/storage capacity), mixing full-shelf and ½-shelf sleds
• Switching: data plane switches plus device management and application management switches
• Power: 12V 1U PSU shelves; a vertical 12VDC bus bar in the frame mates with a power connector located on each sled
• I/O: 4 optical fiber ports per sled via a blind-mate rear connector
Rack-Level Simplified Connection Map
• Each compute sled has 2 servers
• Dual 10Gb data plane connections per compute server
• IPMI management uses the device management network
• Storage servers have 2x data plane connections to the primary switch
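As a rough illustration, the wiring rules above can be turned into a per-rack connection map. The sled counts here, and the assumption that storage servers are also IPMI-managed over the device management network, are hypothetical rather than taken from the specification:

```python
# Hypothetical rack connection map: each compute sled holds 2 servers;
# each server gets 2x 10G data-plane links plus 1 IPMI link to the
# device management switch. Storage servers get 2 data-plane links to
# the primary switch (IPMI for storage servers is an assumption here).
def connection_map(compute_sleds, storage_sleds):
    links = []
    for sled in range(compute_sleds):
        for server in range(2):  # 2 servers per compute sled
            name = f"compute-{sled}-{server}"
            links += [(name, "data-plane", "10G")] * 2
            links += [(name, "dev-mgmt", "IPMI")]
    for sled in range(storage_sleds):
        name = f"storage-{sled}"
        links += [(name, "data-plane-primary", "10G")] * 2
        links += [(name, "dev-mgmt", "IPMI")]
    return links

links = connection_map(compute_sleds=12, storage_sleds=8)
print(len(links))  # total connections for this illustrative mix
```

A 12-compute-sled, 8-storage-sled rack under these assumptions needs 96 connections, which is why the blind-mate fiber and bus bar scheme matters for build-out speed.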
Anatomy of Carrier Grade OCP: CG-OpenRack-19 Specification, an OCP Accepted Design™
• Physical
– Suitable for CO retrofit and new telco data center environments
– 19” rack width and standard “RU” spacing for greatest flexibility
– 1000 to 1200mm cabinet depth, supporting GR-3160 floor spacing dimensions
• Content/workload
– Heterogeneous compute and storage servers
• Management
– Ethernet-based OOB management network connecting all nodes via a TOR management switch
– Optional rack-level platform manager
• Networking/interconnect
– One or more Ethernet TOR networking switches for I/O aggregation to nodes
– Fiber cables, blind-mate, with flexible interconnect mapping
• Environment
– Power, seismic & acoustic CO environmental requirements applicable
– Safety and other certification standards also applicable
– NEBS optional (L1/L3)
OCP Alignment and Design Principles
• Why OCP?
– Open: community-driven; multi-vendor; no lock-in; fast-moving
– Efficient: performance optimized for the IT data center environment; simple essential core building blocks; power delivery & conversion; thermal efficiency
– Scale: web-scale-out ready; simple management & maintenance; mass upgrades
– Impact: decomposition & normalization of web-scale computing
• Why CG-OpenRack-19 for service providers?
– Open: open spec and designs starting from the OCP baseline; multi-vendor and multi-user collaboration from day one; aligns with existing standard telco and COTS geometries and interfaces
– Efficient: inherits key OCP principles; performance optimized for the CO data center environment; self-contained sleds for thermal and emissions isolation; half-rack sled width suits brawny server designs
– Scale: leverages OCP web-scale principles; standard blind-mate optical interconnect allows faster build-out, maintenance, and multi-generational upgrading
– Impact: brings OCP into the carrier environment, tracking but decoupled from web-co-driven changes
What does OCP-ACCEPTED™ status mean to me as a Service Provider?
• Break open the black box of proprietary infrastructure
• Gain control and choice
• Reimagine the hardware and software
• Make solutions more efficient, flexible and scalable
• Customize
• Save $
TEMs model vs. Radisys model, layer by layer:
• Hardware: vendor locked → DCEngine, open sourced in the CG-OpenRack-19 spec
• Management software: vendor locked → Radisys Platform Mgmt + open source
• Application software (VNFs): vendor locked → Radisys and/or open source
• Control software: vendor locked → Radisys and/or open source
• Orchestration and control: vendor locked → Radisys and/or open source
CG-OpenRack-19 Next Steps
• Framework/interop specs
– Current spec focuses mainly on sled-level interop, which is most critical for supplier ecosystem development; next focus on rack and management aspects
– Specs updated as new innovations take place in the community
• Product contributions
– Radisys will contribute DCEngine designs
– Ecosystem vendors/partners will contribute other designs
• Standards alignment
– Exploring ETSI NFV alignment
• Ecosystem incubation and promotion
– Radisys uses a multi-vendor ecosystem in current solutions
– Expanding to include more options
– Enabling new partners/competitors to expand market footprint
– Customers are also a key part of the ecosystem
• What do you see as the biggest inhibitor to your moving to open compute and open source?
– Team skills and experience
– Support
– Security
– Product capabilities
– OSS/BSS integration
Audience Poll #2
DCEngine
Radisys DCEngine: DevOps Ready / Rapid Deployment Clusters
Telecom OCP rack + Radisys professional services and software tools + commercial & open source software = Radisys DCEngine

Radisys Luminous management & automation tools include, for example:
• Test automation
• Deploy & upgrade
• Rack management
• Scripting tools
• Inventory & lifecycle management
• Patch automation
Others are available upon request.
DCEngine 42RU Rack Core
• Rack core
– 600x1200mm & 800x1000mm rack footprint options
– Power: AC (3-phase wye & delta) & DC (-48V & 400V), up to 24kW per rack
– In-rack UPS option
– Cooling: sled-local front-to-back airflow; liquid-cooled door option
– 2x management switches: 1GE to server BMC & CPU
– 2x data switches: 40/100GE uplinks, 10/25GE downlinks to each server
– Interconnect: 4 lanes (10/25G) per sled
– Rack-level management via rack agent
• Standard configurations
– Compute: 8x compute (12 sleds) + 8x storage
– Balanced: 6x compute (12 sleds) + 10x storage
– Storage: 16x storage shelves
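The 24kW per-rack power figure above can be sanity-checked against a sled mix. The per-sled wattages in this sketch are illustrative assumptions, not DCEngine specifications:

```python
# Check that a hypothetical sled mix fits the rack's 24 kW budget.
# The 24 kW figure is from the slide; the per-sled wattages below are
# illustrative assumptions, not DCEngine specifications.
RACK_BUDGET_W = 24_000

SLED_WATTS = {          # assumed typical draw per sled (hypothetical)
    "compute": 900,     # dual-server half-width compute sled
    "storage": 500,     # storage sled
}

def rack_power(config):
    """Total draw in watts for a {sled_type: count} configuration."""
    return sum(SLED_WATTS[kind] * count for kind, count in config.items())

balanced = {"compute": 12, "storage": 10}
draw = rack_power(balanced)
print(f"{draw} W of {RACK_BUDGET_W} W budget:",
      "OK" if draw <= RACK_BUDGET_W else "OVER")
```

Under these assumed wattages the balanced configuration draws well under budget, leaving headroom for switches, fans, and conversion losses.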
DCEngine 16RU Rack Core
• 16U rack core
– 600mm wide x 1000mm deep
– Single-phase AC power
– PSU shelf with 4x 2500W units
• Management switches
– Switch #1: connects 1G to each server BMC
• Data switches
– 1 or 2 switches (3.2Tbps each)
– 10/40/100G uplinks, 10/25G downlinks to sleds
• Standard configuration
– 4x compute shelves + 2x storage shelves
DCEngine Compute & Storage Sleds
• Half-width sleds
– Compute-focused: dual server with some storage; 2x IA or ARM CPU server boards; 1-8TB of storage per server
– Storage-focused: single server with solid-state storage; dense, high-performance NVMe-based storage (up to ~500TB)
– Specialty: single server with PCIe-based specialty co-processing (e.g., DSP, GPU) for targeted applications
• Full-width sleds
– Storage-focused: single server with high-capacity, low-cost mass storage; up to 24x 12TB = 288TB
• Sled interconnect
– All sled types have the same interconnect options to switches: 10G or 25G serial lanes
DCEngine Data Plane Switch
• OCP and/or white box hardware
• Flexible Linux-based NOS: iCOS or Cumulus (other options available upon request)
• High density, line rate in 1 rack unit: 3.2 Tbps data plane switch with 32 QSFP28 100GbE ports (32x 40GbE or 128x 10/25GbE)
• Optimized for cloud: VM switching, VXLAN encap/decap, 10GbE path back to controller for analysis/analytics
• Redundant and hot-swappable components
• Standards-based SDN interfaces (e.g., OpenFlow 1.3.1 in iCOS)
• Management and configuration via standard tools (e.g., ONIE)
• Open, customizable: can run third-party code

iCOS is a trademark of Broadcom. Cumulus is a trademark of Cumulus Networks.
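The VXLAN encap/decap the switch performs wraps each Layer 2 frame in an 8-byte header carrying a 24-bit VXLAN network identifier (VNI), per RFC 7348. A minimal sketch of that header's packing:

```python
import struct

# Minimal sketch of the VXLAN header per RFC 7348: 8 bits of flags
# (the I flag, 0x08, marks the VNI as valid), 24 reserved bits, a
# 24-bit VNI, then 8 more reserved bits. Total: 8 bytes, prepended to
# the inner Ethernet frame inside a UDP datagram.
def vxlan_header(vni):
    assert 0 <= vni < 2**24, "VNI is a 24-bit field"
    return struct.pack("!II", 0x08 << 24, vni << 8)

def vxlan_vni(header):
    flags_word, vni_word = struct.unpack("!II", header)
    assert flags_word >> 24 == 0x08, "valid-VNI flag not set"
    return vni_word >> 8

hdr = vxlan_header(5000)
print(len(hdr), vxlan_vni(hdr))  # 8-byte header, VNI round-trips
```

The 24-bit VNI is what lets a multi-tenant cloud carry ~16 million isolated segments over one fabric, versus 4096 traditional VLANs.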
DCEngine Platform Management Software
DCE Platform Software Framework
Roll Out Fast With Radisys
Racks in your cloud in days; if we pre-load and test software, even less time. Rolling out multiple sites in parallel means fast TTR.

Unprecedented turn-up speed for data centers: 30-rack integration in ~1 week
• Day 1: commissioning (unpacking, installation, packing disposal)
• Days 2-3: spine/network (cabling, connectivity, POD power-on)
• Day 4: software validation, network access, cluster ready and handed over to you
DCEngine Summary
• Open-source-derived platform
– OCP concepts, adapted for telecom applications
– DevOps platform; rapid build-out for new applications
– Range of management tool options for the open market, integrated as part of the DevOps environment
• Operational leadership
– 3-4 days to install & hand over vs. weeks or even months with Dell & HP
– Scheduled maintenance & advanced repair services using Radisys resources
– Innovations to reduce service costs, replacements & operating expenses
• Technology leadership
– High compute & storage density
– Rapid refresh cycles from commercially available technologies
M-CORD Trials
[Diagram: a disaggregated/virtualized RAN (BBU and RRU over a fronthaul fabric) and a disaggregated/virtualized EPC (MME, SGW, PGW), with mobile edge services (caching, SON, billing), all running on an SDN fabric of commodity servers, switches, and network access built from white boxes. The SDN control plane is ONOS, with NFV orchestration via XOS. Highlighted capabilities: open control interfaces, cloud-agile service customization, dynamic radio resource optimization, network slicing, deep observability, and a programmable data plane.]
R-CORD Trials
[Diagram: residential access chain from client CPE (ETH/TDM, router) through ONU/ONT and an optical splitter to the OLT, terminating in CSP infrastructure: a switch fabric with compute and storage hosting a vOLT, vSG, classifier, and vRouter service function chain (SFC) under orchestration and a controller, integrated with RADIUS, NFV, OSS, and BSS. Access-segment VLAN labels: no VLAN on the client side, default VLAN (0) toward the OLT.]
Proof Point: Production application at Verizon
http://schd.ws/hosted_files/mesosconna2016/7a/Mesoscon_2016_cneth.pdf
Modular sled architecture: up to 152 Xeon processors, up to 3.0 PB of storage
Radisys Credentials
25+ years as telecom hardware experts + 25+ years as telecom software experts = a new operator-centric company, essential for the agile, DevOps world.

Trusted and proven hardware partner:
• Carrier-scale hardware design expertise
• Open source hardware: no “closed systems” hidden agenda like Dell and HP
• World-class supply chain management
• Deep and rich 3rd-party hardware ecosystem
• Operational excellence and nimbleness

Open software and integration expertise:
• Telecom and datacom software expertise
• End-to-end network protocol expertise
• Nimble, vendor-agnostic systems integration
• Agile/DevOps-centric mindset
• First-mover advantage in open source telecom: OCP, ONOS, CORD, …
• Open source tool chains and software for system automation

Together: telco data center transformation experts with deep open source software competency, the best choice for open telecom solutions in the DevOps era.
Questions and Answers
Moderator
Simon Stanley, Analyst at Large, Heavy Reading
Suzanne Kelliher, DCEngine Product Line, [email protected]
Thank you for attending!
Upcoming Light Reading webinars: www.lightreading.com/webinars.asp
www.radisys.com