Building Fast, Flexible Virtual Networks on
Commodity Hardware
Nick Feamster, Georgia Tech
Trellis: A Platform for Building Flexible, Fast Virtual Networks on Commodity Hardware, Mundada, Bhatia, Motiwala, Valancius, Muhlbauer, Bavier, Nick Feamster, Rexford, Peterson, ROADS 2008
Building a Fast, Virtualized Data Plane with Programmable Hardware, Bilal Anwer and Nick Feamster (In Submission)
2
Concurrent Architectures are Better than One (“Cabo”)
• Infrastructure: physical infrastructure needed to build networks
• Service: “slices” of physical infrastructure from one or more providers
The same entity may sometimes play these two roles.
3
Network Virtualization: Characteristics
• Sharing
– Multiple logical routers on a single platform
– Resource isolation in CPU, memory, bandwidth, forwarding tables, …
• Customizability
– Customizable routing and forwarding software
– General-purpose CPUs for the control plane
– Network processors and FPGAs for the data plane
4
Requirements
• Scalable sharing (to support many networks)
• Performance (to support real traffic, users)
• Flexibility (to support custom network services)
• Isolation (to protect networks from each other)
5
VINI
[Figure: VINI topology — virtual nodes running BGP overlaid on the physical network]
• Prototype, deploy, evaluate new network architectures
– Carry real traffic for real users
– More controlled conditions than PlanetLab
• Extend PlanetLab with per-slice Layer 2 virtual networks
– Support research at Layer 3 and above
6
PL-VINI
• Abstractions
– Virtual hosts connected by virtual P2P links
– Per-virtual-host routing table, interfaces
• Drawbacks
– Poor performance: 50 Kpps aggregate, 200 Mb/s TCP throughput
– Customization difficult
[Figure: PL-VINI node architecture. Control plane: XORP routing protocols running in User-Mode Linux (UML) inside a PlanetLab VM, with interfaces eth0–eth3. Data plane: a Click packet forward engine with a UML switch element, tunnel table, filters, and UDP tunnels.]
7
Trellis
• Same abstractions as PL-VINI
– Virtual hosts and links
– Push performance, ease of use
• Full network-stack virtualization
– Run XORP, Quagga in a slice
– Support data plane in kernel
• Approach native Linux kernel performance (15x PL-VINI)
• Be an “early adopter” of new Linux virtualization work
[Figure: Trellis node architecture. A Trellis virtual host runs an application over virtual NICs in user space; in the kernel, the Trellis substrate connects each virtual NIC through a bridge and shaper to an EGRE tunnel, alongside the kernel FIB.]
8
Virtual Hosts
• Use container-based virtualization
– Xen, VMware: poor scalability, performance
• Option #1: Linux VServer
– Containers without network virtualization
– PlanetLab slices share a single IP address, port space
• Option #2: OpenVZ
– Mature container-based approach
– Roughly equivalent to VServer
– Has full network virtualization
9
Network Containers for Linux
• Create multiple copies of the TCP/IP stack
• Per-network container:
– Kernel IPv4 and IPv6 routing table
– Physical or virtual interfaces
– iptables, traffic shaping, sysctl.net variables
• Trellis: marry VServer + NetNS
– Be an early adopter of the new interfaces
– Otherwise stay close to PlanetLab
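The per-container stacks described above can be sketched with modern iproute2 network namespaces, the mainline descendant of the NetNS work this slide refers to. The namespace, interface, and address names below are illustrative, and the commands require root:

```shell
# Each namespace is a separate copy of the TCP/IP stack, with its own
# interfaces, routing table, and iptables rules.
ip netns add slice1
ip netns add slice2

# Give slice1 a virtual interface (one end of a veth pair).
ip link add veth-s1 type veth peer name veth-s1-host
ip link set veth-s1 netns slice1

# Configure the container-side interface entirely inside the namespace:
# its addresses, FIB entries, and firewall rules are invisible to other slices.
ip netns exec slice1 ip addr add 10.0.1.2/24 dev veth-s1
ip netns exec slice1 ip link set veth-s1 up
ip netns exec slice1 ip route add default via 10.0.1.1
ip netns exec slice1 iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```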
10
Virtual Links: EGRE Tunnels
• Virtual Ethernet links
• Make minimal assumptions about the physical network between Trellis nodes
• Trellis: tunnel Ethernet over GRE over IP
– Already a standard, but no Linux implementation
• Other approaches:
– VLANs, MPLS, other network circuits or tunnels
– These fit into our framework
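As a concrete sketch: although no implementation existed when this work was done, mainline Linux has since gained a `gretap` device type that does exactly this, carrying Ethernet frames in GRE over IP. The addresses and key are illustrative, the commands require root, and a matching device must exist on the remote node:

```shell
# Create an Ethernet-over-GRE (EGRE) virtual link to a remote Trellis node.
# The GRE key distinguishes multiple virtual links between the same pair
# of physical endpoints.
ip link add egre0 type gretap \
    local 192.0.2.1 remote 192.0.2.2 key 42
ip link set egre0 up
```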
[Figure: Trellis virtual host with an application and virtual NICs in user space; each virtual NIC connects to an EGRE tunnel in the kernel-level Trellis substrate, alongside the kernel FIB.]
11
Tunnel Termination
• Where does the EGRE tunnel interface live?
• Inside container: better performance
• Outside container: more flexibility
– Transparently change implementation
– Process, shape traffic between container and tunnel
– User cannot manipulate tunnel, shapers
• Trellis: terminate tunnel outside container
12
Glue: Bridging
• How to connect virtual hosts to tunnels?
– Connecting two Ethernet interfaces
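A minimal sketch of this glue with the standard Linux bridge, assuming a gretap tunnel device (here `egre0`) and the host end of a container's veth pair (here `veth-s1-host`); the names are illustrative and the commands require root:

```shell
# Bridge the tunnel to the virtual host's interface. Both the bridge and
# the tunnel live in the substrate, outside the container, so the slice
# user cannot tamper with the tunnel or traffic shapers.
ip link add br-slice1 type bridge
ip link set egre0 master br-slice1
ip link set veth-s1-host master br-slice1
ip link set br-slice1 up
```

Keeping this glue outside the container is what lets the substrate transparently swap the link implementation (GRE, VLAN, MPLS) without the virtual host noticing.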