May 14, 2015
OpenStack Quantum: Virtual Networks for OpenStack
Dan Wendlandt – [email protected]
Outline: What? Why? How?
What is Quantum?
- A standalone OpenStack service.
- Provides network connectivity between a set of network "interfaces" from other services (e.g., vNICs from the compute service, interfaces on a load-balancer service).
- Exposes an API of logical abstractions for describing network connectivity and policy between interfaces.
- Uses a "plug-in" architecture, so multiple technologies can implement the logical abstractions.
- Provides a "building block" for sophisticated cloud network topologies.
What is Quantum NOT?
- Something that provides all network-related processing behavior. The initial focus is on connectivity; other advanced services like load-balancers, firewalls, etc. can "plug" into a network offered by Quantum.
- IP address management (see next talk on IPAM).
- Orchestration of multiple network-related building blocks to provide higher-level abstractions to tenants (see talk on Donabe).
Example Architecture: Single Service
[diagram: the OpenStack Dashboard reaches the Quantum Service and the Nova Service through their tenant APIs; Quantum also exposes an admin API. The Quantum plugin uses internal plugin communication to manage the vswitch on each hypervisor (e.g., XenServer #1), while nova-api, nova-scheduler, and nova-compute coordinate over internal nova communication.]
Example Architecture: Two Services
[diagram: the Quantum Service (tenant API + plugin) connects a Compute Service running VMs and a Firewall Service running FW appliances, each with its own tenant API. The plugin's internal communication reaches the vswitches and a physical switch.]
Network Edge: the point at which a service "plugs" into the network.
Virtual Network Abstractions (1)
- Services (e.g., nova, atlas) expose interface-IDs via their own tenant APIs to represent any device from that service that can be "plugged" into a virtual network. Example: nova.foo.com/<tenant-id>/server/<server-id>/eth0
- Tenants use the Quantum API to create networks and get back a UUID. Example: quantum.foo.com/<tenant-id>/network/<network-id>
- Tenants can create ports on a network, get a UUID, and associate config with those ports (APIs for advanced port config are TBD; initially, ports give L2 connectivity). Example: quantum.foo.com/<tenant-id>/network/<network-id>/port/<port-id>
- Tenants "plug" an interface into a port by setting the port's attachment to the appropriate interface-id. Example: set quantum.foo.com/<tenant-id>/network/<network-id>/port/<port-id>/attach to the value "nova.foo.com/<tenant-id>/server/<server-id>/eth0".
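The network/port/attachment flow above can be sketched as a small in-memory model. This is a hypothetical helper for illustration only, not the real Quantum client or server code; class and method names are assumptions.

```python
import uuid

# Minimal in-memory sketch of the tenant-facing abstractions:
# networks contain ports, and each port holds at most one attachment
# (an interface-id exposed by another service, e.g. a nova vNIC).
class QuantumSketch:
    def __init__(self):
        self.networks = {}  # network-id -> {port-id -> interface-id or None}

    def create_network(self, tenant_id):
        net_id = str(uuid.uuid4())
        self.networks[net_id] = {}
        return net_id

    def create_port(self, net_id):
        port_id = str(uuid.uuid4())
        self.networks[net_id][port_id] = None  # no attachment yet
        return port_id

    def plug(self, net_id, port_id, interface_id):
        # Set the port's attachment to the given interface-id.
        self.networks[net_id][port_id] = interface_id

q = QuantumSketch()
net = q.create_network("tenant-1")
port = q.create_port(net)
q.plug(net, port, "nova.foo.com/tenant-1/server/i-22/eth0")
```

Note how the association of an interface with a network is an explicit, separate step from creating the network and the port, matching the API flow on this slide.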
Virtual Network Abstractions (2)
- Note: at no time does the customer see details of how a network is implemented (e.g., VLANs).
- Association of interfaces with a network is an explicit step.
- Plugins can expose API extensions to introduce more complex functionality (e.g., QoS). Extension support is queryable, so a customer can "discover" capabilities.
- API extensions that represent common functionality across many plug-ins can become part of the core API.
- The core API for Diablo is simple and focused on connectivity; the core API will evolve.
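Capability discovery might look like the sketch below. The shape of the advertised extension list is an assumption for illustration, not the actual Quantum wire format.

```python
# Hypothetical list a plugin might advertise in response to an
# extension-discovery query (shape is assumed, not the real format).
ADVERTISED_EXTENSIONS = [
    {"alias": "qos", "name": "Quality of Service"},
]

def supports(extensions, alias):
    # True if the plugin advertises the named extension.
    return any(ext["alias"] == alias for ext in extensions)
```

A client would check `supports(...)` before calling any extension-specific API, so tenants never depend on functionality a given plugin does not implement.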
Example Scenario
[diagram: provider view vs. tenant view of the same deployment. Compute Service instances from Nova (i-22 at 10.0.0.22, i-23 at 10.0.0.23, i-24 at 10.0.0.24, i-26 at 10.0.0.26) attach to Private Net #1 and Private Net #2; a NAT Gateway Service instance (GW Instance-1 at 10.0.0.1) connects the private networks to the data center network.]
Live Demo…
Why Quantum?
- The API gives the ability to create interesting network topologies. Example: create multi-tier applications.
- Provides a way to interconnect multiple OpenStack services (*-aaS). Example: Nova VM + Atlas LB on the same private network.
- Opens the floodgates to let anyone build services (open or closed) that plug into OpenStack networks. Examples: VPN-aaS, firewall-aaS, IDS-aaS.
- Allows innovative plugins that overcome common cloud networking problems. Example: avoid VLAN limits, provide strong QoS.
How? Quantum Design Goals
- Decoupled from nova and other services. Communication between Quantum and another service should happen via a well-defined REST API (no direct Python calls, no nova RPC, no shared understanding of database schemas). Quantum should be able to run without nova.
- Flexible enough to support plugins for many different "network edges": bridge / Open vSwitch on Linux, VMware DVS / Nexus 1000V, physical switches, physical switches with VEPA / VN-Tag.
How? Inside Quantum
- The plugin interface maps to the "core" tenant API + admin API.
- "Network agents" running on nova hypervisors fit within this model.
- A plugin might manage just the network edge (e.g., a vswitch), or all network devices.
[diagram: the Quantum service stack — Tenant API and Admin API on top, then auth (talks to Keystone) and API limits, then the plugin, which communicates with external devices in a plugin-specific way to implement the logical abstractions from the tenant API.]
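The plugin contract implied by the core API can be sketched as an abstract base class. The method names below are illustrative assumptions; the real plugin interface was still being defined at the time of this talk.

```python
from abc import ABC, abstractmethod

# Hypothetical plugin contract: one method per core-API operation.
# A real plugin would translate these calls into device-specific
# configuration; this sketch only records state in memory.
class QuantumPluginBase(ABC):
    @abstractmethod
    def create_network(self, tenant_id, net_id): ...

    @abstractmethod
    def plug_interface(self, net_id, port_id, interface_id): ...

class InMemoryPlugin(QuantumPluginBase):
    """Trivial plugin that records state instead of touching real switches."""
    def __init__(self):
        self.state = {}

    def create_network(self, tenant_id, net_id):
        self.state[net_id] = {"tenant": tenant_id, "ports": {}}

    def plug_interface(self, net_id, port_id, interface_id):
        self.state[net_id]["ports"][port_id] = interface_id
```

Because the API layer only ever talks to this interface, any backend (Open vSwitch, a hardware switch, a test double like `InMemoryPlugin`) can implement the same logical abstractions.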
Edge Bindings
- Services that expose interface-IDs must tell Quantum where each interface is currently "plugged" into the network. We call this an "edge binding".
- Implementation is still fuzzy: Quantum may support an admin API that allows other services to register <interface-id, interface-location> pairs with Quantum.
- Many different "types" of interface-location data: XenServer: VIF-UUID; Cisco 1000v: veth0 device; physical hosting: physical switch ID + port number.
- OpenStack deployers must make sure all services are able to "speak" an interface-location type supported by the switch.
- There will be a "default" type supported by an open-source plugin (VLAN-based, like nova today?).
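Since the admin API for edge bindings was still fuzzy at the time, the following is only a sketch of what registering <interface-id, interface-location> pairs might look like; the function name and location types are assumptions.

```python
# Hypothetical edge-binding registry: maps an interface-id (exposed by
# another service) to a typed interface-location, mirroring the examples
# on this slide (XenServer VIF-UUID, physical switch + port, ...).
bindings = {}

def register_binding(interface_id, location_type, location):
    bindings[interface_id] = (location_type, location)

register_binding("nova.foo.com/t1/server/i-22/eth0",
                 "xenserver-vif", "vif-uuid-1234")
register_binding("hosting.foo.com/t1/host/h-9/nic0",
                 "physical-switch", ("switch-3", 14))
```

When a tenant later plugs one of these interfaces into a port, the plugin can look up the binding to find which vswitch port or physical switch port to configure.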
Simple Plug-in Example with VLANs
- Similar to what Nova does for private networks: one VLAN per "network".
- The hypervisor NIC is a VLAN trunk, and all switches are trunked.
- When an interface-ID is associated with a network, the plugin uses the edge binding to find the interface-location (a port on a vswitch) and puts that port on the correct VLAN.
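The VLAN scheme above can be sketched as follows. The VLAN range and the "vswitch port" model are assumptions for illustration; a real plugin would also push the tag to the vswitch rather than just compute it.

```python
# Minimal sketch of the one-VLAN-per-network scheme described above.
class VlanPlugin:
    def __init__(self, vlan_range=range(100, 200)):
        self.free_vlans = list(vlan_range)
        self.network_vlan = {}  # network-id -> VLAN tag
        self.bindings = {}      # interface-id -> vswitch port (edge binding)

    def create_network(self, net_id):
        # Allocate one VLAN per logical network.
        self.network_vlan[net_id] = self.free_vlans.pop(0)

    def register_binding(self, interface_id, vswitch_port):
        self.bindings[interface_id] = vswitch_port

    def plug(self, net_id, interface_id):
        # Use the edge binding to find the vswitch port, then return the
        # (port, vlan) pair that should be configured on the vswitch.
        port = self.bindings[interface_id]
        return port, self.network_vlan[net_id]
```

Usage: after `create_network("net-1")` and `register_binding("i-22/eth0", "vif3")`, calling `plug("net-1", "i-22/eth0")` yields the vswitch port and the VLAN tag to apply to it.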
Plans for Diablo timeframe: "experimental" Quantum plug-in
- Plug-in agnostic: create the API, including a way for plugins to register extensions; store "ownership" + integrate with Keystone for auth; implement the "edge bindings" database + API.
- Plugins: at least one (hopefully more!) open-source plugin that anyone can use to experiment with Quantum.
- Services: perform "edge bindings" integration with nova and at least one other service.
This is Just the Beginning…
- Our goals within the Diablo time frame are well scoped. Quantum is a building block, not the entire solution for all networking problems. The goal is to make sure the Quantum design for Diablo does not preclude doing things we will likely consider important in the future.
- Many important questions remain:
  - How should knowledge of the network topology and resources/capacity be used to influence workload placement decisions by the scheduler?
  - What should be included in a broader set of core APIs (QoS, packet stats, ACLs, etc.) in future iterations?
  - Is L2 VPN (e.g., to a customer site) part of this core API, or something that "plugs" into a virtual network?
  - How do we expose attributes of the physical network (e.g., redundant NICs) via the logical model?
  - <Insert your question here…>