PLNOG 13: Michał Dubiel: OpenContrail software architecture

Posted on 10-Jun-2015


Category: Internet

DESCRIPTION

Michał Dubiel – TBD
Topic of presentation: OpenContrail software architecture
Language: Polish
Abstract: OpenContrail is a complete solution for Software Defined Networking (SDN). Its relatively new approach to network virtualization in data centers uses overlay networking to fully decouple the physical infrastructure from the tenants’ logical configuration. This presentation describes the software architecture of the system and its functional partitioning. Special emphasis is put on the compute node components: the vRouter kernel module and the vRouter Agent. Selected implementation details are also presented in greater detail, along with an analysis of their impact on the system’s scalability and performance.

Transcript

OpenContrail system architecture

Michał Dubiel, Kraków 2014

Plan

• Cloud operating system
  – Why?
• Network virtualization
  – Why it is important
  – The OpenContrail solution
• OpenContrail architecture
  – Goals, assumptions
  – Functional partitioning
  – Components

CLOUD OPERATING SYSTEM

• Compute power
• Storage
• Networking

Operating System analogy

• Resources in a typical server
  – CPU cores
  – Memory
  – Storage
  – Networking
• Resources in a datacenter
  – Hardware machines
  – Storage appliances
  – Networking equipment

OpenStack

source: openstack.org

Until now, largely missing

source: openstack.org

Network virtualization - OpenContrail

NETWORK VIRTUALIZATION

• Domination of virtual endpoints
• Solutions

Rack, servers, VMs

[Diagram: a server rack with several hypervisor hosts, each running multiple VMs; the rack uplinks to a spine switch.]

A wider view: Clos network

Observations

• The majority of network endpoints are virtual

• Virtual networks dominate

• Isolation between them has to be provided

• While using the same physical network

• Automatically

Solutions

• VLANs
  – Default OpenStack approach
  – Limited, not flexible
• Overlay networking
  – OpenContrail as a Neutron plugin
  – Flexible
  – Scalable

VLANs

• VM interfaces are placed on bridges
  – One bridge per virtual network
• Difficult to manage
• Limited to 4096 VLAN tags
  – Can be extended using Shortest Path Bridging
• Physical switches have to hold the virtual network state
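
As a rough illustration of the per-network plumbing this model requires, the sketch below creates a bridge for one virtual network, adds a VLAN sub-interface on the physical uplink and enslaves it to the bridge. It is a minimal sketch using the pyroute2 library; the interface names (eth0, br100) and the VLAN ID are hypothetical and the commands need root privileges.

```python
# Minimal sketch of the VLAN-per-virtual-network plumbing described above.
# Assumes the pyroute2 library and an existing physical interface "eth0";
# the bridge name and VLAN ID are hypothetical.
from pyroute2 import IPRoute

VLAN_ID = 100            # one VLAN tag per virtual network (4096 tags max)
PHYS_IF = "eth0"         # physical uplink carrying the tagged traffic
BRIDGE = f"br{VLAN_ID}"  # per-network bridge that VM tap devices attach to

ipr = IPRoute()

# Create the bridge for this virtual network.
ipr.link("add", ifname=BRIDGE, kind="bridge")

# Create a VLAN sub-interface on the physical uplink and enslave it to the bridge.
phys_idx = ipr.link_lookup(ifname=PHYS_IF)[0]
ipr.link("add", ifname=f"{PHYS_IF}.{VLAN_ID}", kind="vlan",
         link=phys_idx, vlan_id=VLAN_ID)
vlan_idx = ipr.link_lookup(ifname=f"{PHYS_IF}.{VLAN_ID}")[0]
bridge_idx = ipr.link_lookup(ifname=BRIDGE)[0]
ipr.link("set", index=vlan_idx, master=bridge_idx)

# Bring everything up; each VM tap device for this network would be
# enslaved to BRIDGE in the same way.
ipr.link("set", index=bridge_idx, state="up")
ipr.link("set", index=vlan_idx, state="up")
ipr.close()
```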

Overlay networking

• “Old” technology, but new to data centers
• Physical underlay network
  – IP fabric
  – Holds no state of the virtual networks
• Virtual overlay network
  – Holds the state of the virtual networks
  – Dynamic tunnels (MPLSoGRE, VXLAN, etc.)
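
To make the tunnelling idea concrete, the sketch below hand-builds a VXLAN-encapsulated frame with scapy: the outer IP/UDP header addresses the two compute nodes (the underlay), while the inner Ethernet/IP frame belongs to the tenant VM (the overlay). All addresses, MACs and the VNI are made up for illustration; in OpenContrail the vRouter sets up such tunnels itself rather than crafting packets by hand.

```python
# A hand-built VXLAN frame, just to show how the overlay rides on the underlay.
# The addresses, MACs and VNI below are hypothetical.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

underlay = (
    Ether() /
    IP(src="10.0.0.1", dst="10.0.0.2") /   # compute node 1 -> compute node 2
    UDP(sport=49152, dport=4789)           # 4789 = IANA-assigned VXLAN port
)

overlay = (
    Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") /
    IP(src="192.168.1.10", dst="192.168.1.20")  # tenant VM addresses
)

frame = underlay / VXLAN(vni=5001) / overlay
frame.show()  # prints the nested headers: underlay IP/UDP, VXLAN, tenant frame
```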

VM migration example

[Diagram: VM1–VM9 spread across Server 1, Server 2 and Server 3 behind a physical switch, grouped into virtual networks 1, 2 and 3. A packet for VM9 crosses the physical network with an outer header addressed to its server (S3), an inner header for VM9, and the payload.]

VM migration example (continued)

[Diagram: the same topology after VM9 has migrated; only the outer header on the physical network changes (now S2), while the inner VM9 header and the payload stay the same.]

Advantages of overlay networks

• “Knowledge” about the network lives only in the software (the vRouter)
• Any switch works for the IP fabric
  – No configuration needed
  – Only speed matters
  – Low price
• The OpenContrail implementation is standards-based (MPLS, BGP, VXLAN, etc.)

OPENCONTRAIL ARCHITECTURE

• Goals
• Nodes
• Components

Architecture goals

• Scalability
• Compatibility
• Extensibility
• Fault tolerance
• Performance

“Think globally, act locally”

• The system is physically distributed
  – No single point of failure
  – Scalability
  – Performance
• Logically centralized control and management
  – Simplicity
  – Ease of use

Architecture overview

Source: www.opencontrail.org

Configuration node

Source: www.opencontrail.org

Configuration node components

• Configuration API Server
  – Active/Active mode
  – Receives REST API calls
  – Publishes configuration to the IF-MAP Server
  – Receives configuration from other API Servers
• Discovery Service
  – Active/Active mode
  – A registry of all OpenContrail services
  – Provides a REST API for publishing and querying services
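
As an illustration of the kind of REST call the Configuration API Server receives, the sketch below creates a virtual network with the Python requests library. The host name, port 8082 and the exact payload shape are assumptions based on typical OpenContrail deployments, not something stated on the slides.

```python
# Hedged sketch: create a virtual network through the Configuration API Server.
# The host name, port 8082 and payload layout are assumptions; consult the
# OpenContrail API documentation for the authoritative schema.
import json
import requests

API_SERVER = "http://config-api.example.net:8082"   # hypothetical endpoint

payload = {
    "virtual-network": {
        "fq_name": ["default-domain", "demo-project", "vn-blue"],
        "parent_type": "project",
    }
}

resp = requests.post(
    f"{API_SERVER}/virtual-networks",
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
print(resp.json()["virtual-network"]["uuid"])   # UUID assigned by the API server
```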

Configuration node components (2)

• Schema Transformer
  – Active/Backup mode
  – Receives high-level configuration from the IF-MAP Server
  – Transforms high-level constructs (e.g. virtual network) into low-level ones (e.g. routing instance)
• IF-MAP Server
  – Active/Active mode
  – Publishes system configuration to the Control nodes and the Schema Transformer
  – All configuration comes from the API Server (both high and low level)

Configuration node components (3)

• Service Monitor
  – Active/Backup mode
  – Monitors service virtual machines (firewall, analyzer, etc.)
  – Calls the Nova API to control the VMs
• AMQP server (RabbitMQ)
  – Communication between system components
• Persistent storage (Cassandra)
  – Receives and stores system configuration from the Configuration API Server

Configuration flow (user)

1. User request
2. Original API Server
3. RabbitMQ
4. All API Servers
5. Local IF-MAP Server
6. Schema Transformer

Configuration flow (transformed)

1. Schema Transformer
2. Configuration API Server
3. RabbitMQ
4. All API Servers
5. Local IF-MAP Server
6. Control nodes and DNS
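
Both flows fan configuration updates out to every API Server through RabbitMQ (steps 3 and 4 above). The sketch below shows that pattern with the pika client and a fanout exchange, so each API Server instance receives the same update. The exchange name and message body are hypothetical placeholders, not OpenContrail's actual AMQP topology.

```python
# Hedged sketch of the RabbitMQ fan-out step using pika.
# The exchange name and message body are hypothetical placeholders.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq.example.net"))
channel = connection.channel()

# A fanout exchange delivers every message to all bound queues, i.e. to
# every Configuration API Server instance.
channel.exchange_declare(exchange="config-updates", exchange_type="fanout")

update = {"oper": "CREATE", "type": "virtual-network",
          "fq_name": ["default-domain", "demo-project", "vn-blue"]}
channel.basic_publish(exchange="config-updates", routing_key="",
                      body=json.dumps(update))
connection.close()
```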

Control node

Source: www.opencontrail.org

Control node components

• Controller
  – Active/Active mode
  – Receives configuration from the IF-MAP Server
  – Exchanges XMPP messages with the vRouter Agent
  – Federates with other nodes and physical switches via BGP/Netconf
• DNS Service
  – Active/Active mode
  – Receives configuration from the IF-MAP Server
  – Exchanges XMPP messages with the vRouter Agent
  – Front-end only; the backend uses the host-native ‘named’
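
The sketch below is a purely illustrative, simplified approximation of a route advertisement pushed to a vRouter Agent over XMPP. The element names, the pubsub node identifier and the field values are hypothetical; Contrail defines its own XMPP-based schema, which this does not reproduce.

```python
# Hypothetical, simplified XMPP publish message carrying a route for a tenant
# virtual network. Names and values are made up for illustration only.
import xml.etree.ElementTree as ET

iq = ET.Element("iq", {"type": "set", "from": "control-node.example.net",
                       "to": "vrouter-agent.example.net"})
pubsub = ET.SubElement(iq, "pubsub", {"xmlns": "http://jabber.org/protocol/pubsub"})
publish = ET.SubElement(pubsub, "publish",
                        {"node": "default-domain:demo-project:vn-blue"})
item = ET.SubElement(publish, "item")
entry = ET.SubElement(item, "entry")
ET.SubElement(entry, "nlri").text = "192.168.1.10/32"   # tenant VM address
ET.SubElement(entry, "next-hop").text = "10.0.0.2"      # compute node (underlay)
ET.SubElement(entry, "label").text = "17"               # MPLS label for the tunnel

print(ET.tostring(iq, encoding="unicode"))
```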

Compute node

[Diagram: the compute node runs Nova compute with libvirt and QEMU/KVM; the VMs attach through TUN/TAP interfaces to the Contrail vRouter kernel module, with the Nova vif driver plugging VM interfaces into the vRouter. The Nova scheduler and the Contrail Control node sit outside the compute node; the Contrail Agent connects to the Control node over TCP and to the vRouter in kernel space via NetLink, /dev/flow and the pkt interface.]

Compute node components

• vRouter Agent
  – Communication via XMPP with the Control node
  – Installation of forwarding state into the vRouter
  – ARP, DHCP, DNS proxy
• vRouter
  – Packet forwarding
  – Applying flow policies
  – Encapsulation, decapsulation

Agent <-> vRouter communication

• NetLink
  – Routing entry, next-hop, flow, etc. synchronization
  – Uses RCU
• /dev/flow
  – Shared memory for the flow hash tables
• pkt tap device
  – Flow discovery (first packet of a flow)
  – ARP, DHCP, DNS proxy
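
A quick way to see the state the Agent has installed through these channels is to query the vRouter with the command-line tools that ship with contrail-vrouter-utils. The sketch below simply wraps two of them (vif and flow) with subprocess; it assumes those tools are installed on the compute node and accept their usual flags.

```python
# Hedged sketch: inspect the forwarding state that the vRouter Agent has
# pushed into the vRouter kernel module. Assumes the contrail-vrouter-utils
# tools ("vif", "flow") are installed on the compute node.
import subprocess

def run(cmd):
    """Run a vRouter utility and return its textual output."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# List the interfaces known to the vRouter (VM tap devices, fabric interface, ...).
print(run(["vif", "--list"]))

# Dump the active flow table that the Agent programs via NetLink and /dev/flow.
print(run(["flow", "-l"]))
```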

Analytics node

Source: www.opencontrail.org

Analytics node components

• API Server
  – REST API for querying analytics
• Collector
  – Collects analytics information from all system nodes
• Query Engine
  – Map-reduce over the collected analytics
  – Executes queries
• Rules Engine
  – Controls which events are collected by the Collector
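
For completeness, the sketch below queries the analytics API Server with requests. The host name, port 8081 and the UVE path are assumptions drawn from common OpenContrail deployments rather than from the slides.

```python
# Hedged sketch: read per-virtual-network statistics from the analytics
# API Server. Host name, port 8081 and the UVE URL layout are assumptions.
import requests

ANALYTICS_API = "http://analytics.example.net:8081"

# List the virtual-network UVEs (User-Visible Entities) known to the Collector.
networks = requests.get(f"{ANALYTICS_API}/analytics/uves/virtual-networks").json()

for vn in networks:
    # Each entry typically carries a name and a link to the detailed UVE.
    print(vn.get("name"), vn.get("href"))
```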

Any questions?
