Page 1: Open vSwitch Implementation Options

© 2016 NETRONOME SYSTEMS, INC.

Linley Data Center Conference February 9, 2016

Open vSwitch Implementation Options

Nick Tausanovitch VP of Solutions Architecture

Page 2: Open vSwitch Implementation Options

The Modern Data Center Landscape

Modern public and private cloud data center applications are driving:

▶ Rapid sprawl of virtual machines (VMs) and containers to scale data centers

▶ The need for SDN control of network virtualization overlays

▶ SDN-controlled per-VM and per-tenant policies for zero-trust security

▶ Continuous software changes to accommodate networking feature additions

Enter the Era of Server-Based Networking…

Page 3: Open vSwitch Implementation Options

The New Data Center Infrastructure Conundrum

…But server-based networking using software-based virtual switches creates a new set of challenges:

▶ Low throughput creates a data bottleneck that starves applications, limiting performance

▶ Many CPU cores are needed, resulting in a "CPU tax" that dilutes effective compute resources

▶ The added latency of software switches precludes their use in many real-time applications

Page 4: Open vSwitch Implementation Options

Introducing the Agilio™ Server Networking Platform

10/40GbE Production Solutions Now

Agilio Server-Based Networking Software

Agilio-CX Intelligent Server Adapters

▶ Agilio accelerates the virtual switch data path

▶ Agilio offloads virtual switch processing from servers

▶ Agilio functionality is flexible and extensible

Page 5: Open vSwitch Implementation Options

NFP-4000 Chip (Used on CX-4000 Adapters)

Network Flow Processor used on CX-4000 Intelligent Server Adapters

▶ Highly parallel multithreaded processing architecture for high throughput

▶ Hardware accelerators to further maximize efficiency (throughput/watt)

▶ Purpose-built flow processing cores maximize flexibility

Comprehensive feature set with Agilio Software

▶ RX/TX with SR-IOV and stateless offloads

▶ Extensive, flexible tunneling support (e.g. VXLAN, MPLS, GRE)

▶ Flexible match/action processing with transparent offload of OVS

▶ High-scale, very granular security policies

External DDR3 support for millions of flows

Easy function extensibility with P4 and C Sandbox
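The "flexible match/action" offload above follows the same flow-classification model the benchmarks later vary: many exact-match micro-flows matched against a smaller set of masked wildcard rules. As a minimal illustrative sketch (not Netronome's implementation, and with hypothetical field names and actions), priority-ordered wildcard matching looks like:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class WildcardRule:
    # field -> required value; any header field absent here is a wildcard
    match: dict = field(default_factory=dict)
    action: str = "drop"
    priority: int = 0

def classify(packet_fields: dict, rules: list) -> str:
    """Return the action of the highest-priority rule whose non-wildcard
    fields all match the packet; default-drop if nothing matches."""
    best = None
    for rule in rules:
        if all(packet_fields.get(f) == v for f, v in rule.match.items()):
            if best is None or rule.priority > best.priority:
                best = rule
    return best.action if best else "drop"

rules = [
    WildcardRule({"ip_dst": "10.0.0.2"}, "output:vm1", priority=10),
    WildcardRule({"ip_dst": "10.0.0.2", "tcp_dst": 80}, "output:vm2", priority=20),
]

# One micro-flow is a single exact header tuple; many micro-flows
# can hit the same wildcard rule.
flow = {"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2", "tcp_dst": 80}
print(classify(flow, rules))  # -> output:vm2 (more specific rule wins)
```

In OVS itself this corresponds to OpenFlow rules and the megaflow cache; the sketch only shows the priority-plus-wildcard semantics being offloaded.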

Page 6: Open vSwitch Implementation Options

Agilio™ Open vSwitch Implementation

Agilio Software Architecture for Open vSwitch

Page 7: Open vSwitch Implementation Options

Software Models Provide Maximum Flexibility

P4 and C Sandbox Data Path Programming

Flow Configuration and API Programming with Agilio Software

Agilio Data Path Extensibility with C Sandbox

•  Agilio Software with simple API-level programming provides a market-ready solution with rich features and a robust roadmap

•  Agilio software can be extended incrementally with custom features via C Sandbox programming

•  P4 provides fuller control of data-plane functionality while abstracting the underlying hardware

Open Source P4 and C Programming Tools available at http://www.open-NFP.org

Page 8: Open vSwitch Implementation Options

Open vSwitch Benchmarking Scenarios

Page 9: Open vSwitch Implementation Options

OVS Benchmark Testing Overview

Data collected for key endpoint NIC use cases

▶ OVS offload with Network-VM, VM-Network, and VM-VM data flows

▶ OVS-based L2 and L3 forwarding and actions

▶ VXLAN tunnel endpoint processing

▶ Standard netdev and DPDK poll-mode drivers

Collected data across a range of parameters

▶ Packet size

▶ Number of wildcard rules

▶ Number of micro-flows

Key metrics collected and analyzed

▶ Packets-per-second throughput

▶ CPU core usage

▶ Latency

*The Netronome Trafgen source/sink user-space application is being open sourced and will be made available on www.open-nfp.org
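The VXLAN tunnel endpoint processing benchmarked above adds and strips an 8-byte VXLAN header (RFC 7348) on every packet. A minimal sketch of that header format, independent of any Netronome API:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header per RFC 7348: flags byte 0x08
    (VNI-valid bit), 24 reserved bits, 24-bit VNI, 8 reserved bits."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    return struct.pack("!II", 0x08 << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Recover the VNI from a VXLAN header, checking the I flag."""
    flags_resv, vni_resv = struct.unpack("!II", header)
    assert flags_resv >> 24 == 0x08, "VNI-valid flag not set"
    return vni_resv >> 8

hdr = vxlan_header(vni=5000)
print(hdr.hex())       # -> 0800000000138800
print(parse_vni(hdr))  # -> 5000
```

On the wire this header sits inside a UDP datagram (destination port 4789) carrying the full inner Ethernet frame, which is why per-packet tunnel processing is costly for a software switch.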

Benchmark Testing Setup with Netronome Trafgen*

[Diagram] A Test Gen Server runs two Trafgen (Source) instances over DPDK PMDs on a server adapter, connected by a 1x40G link to the DUT server adapter; the DUT Server runs two Trafgen (Sink) instances over DPDK PMDs.

Page 10: Open vSwitch Implementation Options

OVS Benchmark Throughput Results

[Charts] OVS L2 Forward to VMs and OVS VXLAN + L2 to VMs: throughput in million packets per second vs. packet size.

40GbE Network-to-VM traffic with 64,000 micro-flows matching against 1,000 wildcard rules
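For context on the packets-per-second axis in these charts: the theoretical line rate of a 40GbE link depends on packet size, since every Ethernet frame also carries 20 bytes of preamble, start-of-frame delimiter, and minimum inter-frame gap. A quick worked calculation:

```python
def line_rate_mpps(link_gbps: float, frame_bytes: int) -> float:
    """Theoretical max packets/sec (in millions) on an Ethernet link.
    Each frame occupies frame_bytes plus 20 bytes of per-frame overhead
    (7B preamble + 1B SFD + 12B minimum inter-frame gap) on the wire."""
    wire_bytes = frame_bytes + 20
    return link_gbps * 1e9 / (wire_bytes * 8) / 1e6

print(round(line_rate_mpps(40, 64), 2))    # -> 59.52 Mpps at 64B frames
print(round(line_rate_mpps(40, 1518), 2))  # -> 3.25 Mpps at 1518B frames
```

So ~59.5 Mpps is the ceiling for minimum-size packets on the 1x40G test link, which is the regime where software switches fall furthest short.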

Page 11: Open vSwitch Implementation Options

Server CPU Core Allocations

Software (kernel and user space) OVS:
•  12 CPU cores dedicated to OVS
•  12 CPU cores for the application

Agilio OVS:
•  1 CPU core dedicated to OVS
•  23 CPU cores for the application
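The core allocations above imply a 24-core server, so the "CPU tax" (the fraction of server compute consumed by the vswitch) works out as follows; the helper name is illustrative, only the core counts come from the slide:

```python
def cpu_tax(cores_for_ovs: int, total_cores: int) -> float:
    """Fraction of a server's cores consumed by the virtual switch."""
    return cores_for_ovs / total_cores

# Core allocations from the slide (24-core server):
print(f"software OVS tax: {cpu_tax(12, 24):.0%}")  # -> 50%
print(f"Agilio OVS tax:   {cpu_tax(1, 24):.1%}")   # -> 4.2%
```

In other words, offloading the datapath returns 11 of the 24 cores to tenant workloads.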

Page 12: Open vSwitch Implementation Options

Per Server CPU Core Efficiency

[Chart] Throughput with a single server CPU core, in million packets per second.

•  50X efficiency gain vs. kernel OVS
•  20X efficiency gain vs. user space OVS

Page 13: Open vSwitch Implementation Options

Summary and Conclusion

[Chart] Data delivered to APPs vs. x86 cores available to run APPs, comparing Agilio OVS, DPDK OVS, and kernel OVS.

•  Eliminate vSwitch data bottlenecks

•  Reduce the OVS server CPU tax

•  Improve packet latencies

•  Maintain software innovation velocity

Agilio Intelligent Server Adapters to enable the next stage of the server-based networking revolution…

Page 14: Open vSwitch Implementation Options

Efficiency Drives Massive TCO Savings

[Diagram] Two racks of 20 servers with 40GbE, each behind a TOR switch.

Traditional NIC with software OVS:
•  Rack throughput: 120 Mpps
•  VNFs per rack: 240

Agilio-CX with accelerated OVS:
•  Rack throughput: 440 Mpps
•  VNFs per rack: 880

Per-rack capacity: 3.7x more VNFs, 3.7x more throughput

Data center TCO*: 74% lower CAPEX, 75% lower OPEX

*Based on Data Output from Netronome ROI Calculator available online at http://netronome.com/products/ovs/roi-calculator
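The capacity multipliers on this slide follow directly from the per-rack figures; the CAPEX/OPEX percentages come from Netronome's ROI calculator and depend on its cost assumptions, so only the ratios are derived here:

```python
# Per-rack figures from the slide (20 servers with 40GbE per rack):
software_ovs = {"rack_mpps": 120, "vnfs": 240}
agilio_ovs   = {"rack_mpps": 440, "vnfs": 880}

throughput_gain = agilio_ovs["rack_mpps"] / software_ovs["rack_mpps"]
vnf_gain = agilio_ovs["vnfs"] / software_ovs["vnfs"]
print(f"{throughput_gain:.1f}x more throughput")  # -> 3.7x
print(f"{vnf_gain:.1f}x more VNFs")               # -> 3.7x

# For a fixed VNF count, the number of racks needed shrinks by the
# same factor, which is where most of the CAPEX saving comes from:
racks_saved = 1 - 1 / vnf_gain
print(f"{racks_saved:.0%} fewer racks")           # -> 73%
```

The ~73% rack reduction lines up with the quoted 74% CAPEX figure once the (undisclosed) per-rack and adapter cost assumptions are folded in.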

Page 15: Open vSwitch Implementation Options

Thank You