
ExoGENI: rack administrator primer

Ilya Baldin, RENCI, UNC-CH

Victor Orlikowski, Duke University

First things first
1. IMPORT THE OVA INTO YOUR VIRTUALBOX
2. LOGIN as gec20user/gec20tutorial
3. START ORCA ACTORS

sudo /etc/init.d/orca_am+broker-12080 clean-restart
sudo /etc/init.d/orca_sm-14080 clean-restart
sudo /etc/init.d/orca_controller-11080 start

4. WAIT AND LET IT CHURN – THIS IS ALL OF EXOGENI IN ONE VIRTUAL MACHINE!

WILL TAKE SEVERAL MINUTES!
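While the actors churn, a quick way to see whether the containers have come up is to look for their listen ports. This is a minimal check, assuming the port numbers in the actor names above (12080, 14080, 11080) are the containers' listen ports and that the init scripts implement a status action; neither is guaranteed by this tutorial.

# Look for the ORCA containers' listen ports (assumed from the actor names)
netstat -tlnp | grep -E '12080|14080|11080'
# The init scripts may also support a status action (an assumption, not a documented feature)
sudo /etc/init.d/orca_am+broker-12080 status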

Hello and Welcome!


13 GPO-funded racks built by IBM plus several “opt-in” racks
◦ Partnership between RENCI, Duke and IBM
◦ IBM x3650 M4 servers (X-series 2U)

1x 146GB 10K SAS hard drive + 1x 500GB secondary drive
48GB RAM (1333MHz)
Dual-socket 8-core CPU
Dual 1Gbps adapter (management network)
10G dual-port Chelsio adapter (dataplane)

◦ BNT 8264 10G/40G OpenFlow switch
◦ DS3512 6TB or server w/ drives totaling 6.5TB sliverable storage

iSCSI interface for head node image storage as well as experimenter slivering

Cisco (WVN, NCSU, GWU) and Dell (UvA) configurations also exist

Each rack is a small networked cloud
◦ OpenStack-based with NEuca extensions
◦ xCAT for bare-metal node provisioning

http://wiki.exogeni.net

Testbed

ExoGENI at a glance
5 upcoming racks at TAMU, UMass Amherst, WSU, UAF and PSC not shown

Rack layout

ExoGENI software stack


CentOS 6.X base install
Resource Provisioning
◦ xCAT for bare metal provisioning
◦ OpenStack + NEuca for VMs
◦ FlowVisor
◦ Floodlight (used internally by ORCA)
GENI Software
◦ ORCA for VM, bare metal and OpenFlow
◦ FOAM for OpenFlow experiments

Worker and head nodes can be reinstalled remotely via IPMI + Kickstart (see the ipmitool sketch after this slide)
Monitoring via Nagios (Check_MK)
◦ ExoGENI ops staff can monitor all racks
◦ Site owners can monitor their own rack

Syslogs collected centrally

Rack software
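The remote reinstall mentioned on the slide above uses IPMI to force a node to PXE-boot into its Kickstart install. A minimal sketch with ipmitool follows; the BMC hostname and credentials are hypothetical, and in practice the racks drive this through xCAT and site tooling rather than by hand.

# Hypothetical BMC address and credentials; substitute your rack's values
ipmitool -I lanplus -H worker01-bmc.example.net -U admin -P secret chassis bootdev pxe
ipmitool -I lanplus -H worker01-bmc.example.net -U admin -P secret chassis power cycle
# The node then PXE-boots and Kickstart reinstalls it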

OpenStack
◦ Currently Folsom, based on an early release of RHOS
◦ Patched to support ORCA
Additional nova subcommands
Quantum plugin to manage Layer 2 networking between VMs

◦ Manages creation of VMs with multiple L2 interfaces attached to high-bandwidth L2 dataplane switch

◦ One “management” interface created by nova attaches to management switch for low-bandwidth experimenter access

Quantum plugin
◦ Creates and manages 802.1q interfaces on worker nodes to attach VMs to VLANs
◦ Creates and manages OVS instances to bridge interfaces to VLANs (see the sketch after this slide)
◦ DOES NOT MANAGE MANAGEMENT IP ADDRESS SPACE!
◦ DOES NOT MANAGE THE ATTACHED SWITCH!

Elements of rack software (OpenStack)
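For orientation, the sketch below is roughly what the Quantum plugin automates on a worker node when a VM's interface joins a dataplane VLAN. It is illustrative only: the NIC name, bridge name and VLAN tag are made up, and the plugin itself performs these operations, not the administrator.

# Illustrative only; names and tag are hypothetical
ip link add link eth1 name eth1.1001 type vlan id 1001   # 802.1q subinterface on the dataplane NIC
ip link set eth1.1001 up
ovs-vsctl add-br br1001                                  # OVS bridge for this VLAN
ovs-vsctl add-port br1001 eth1.1001                      # bridge the tagged interface; VM vNICs attach to br1001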

Manages booting of bare-metal nodes for users and installation of OpenStack workers for sysadmins

Stock software
Upgrading the rack means
◦ Upgrading the head node (painful)
◦ Using xCAT to update worker nodes (easy!; see the sketch after this slide)

Elements of software (xCAT)
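As a sketch of the "easy" worker-node path, a typical xCAT reinstall looks like the commands below; the node and osimage names are hypothetical, since the images actually used in the racks are not listed here.

# Hypothetical node and osimage names
nodeset worker01 osimage=centos6.5-x86_64-install-compute   # choose what to deploy
rpower worker01 boot                                        # (re)boot the node so it network-installs
nodestat worker01                                           # watch installation progress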

FlowVisor
◦ Used by ORCA to “slice” the OpenFlow part of the switch for experiments via its API
Typically along <port>, <vlan tag> dimensions
◦ For emulating VLAN behavior, ORCA starts Floodlight instances attached to slices

Floodlight
◦ Stock v0.9, packaged as a jar
◦ Started with parameters that minimize JVM footprint (launch sketch after this slide)
◦ Uses the ‘forwarding’ module to emulate learning-switch behavior on a VLAN

FOAM
◦ Translator from GENI AM API + RSpec to FlowVisor
◦ Does more, but not in ExoGENI

Elements of software (OpenFlow)
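A per-slice Floodlight launch along the lines described above (stock jar, small JVM footprint) might look like the following; the heap sizes and properties path are assumptions, not the values ORCA actually uses.

# Assumed JVM sizing and config path; ORCA starts one such instance per slice
java -Xms32m -Xmx128m -jar floodlight.jar -cf /path/to/slice-floodlight.properties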

Control framework
◦ Orchestrates resources at user requests
◦ Provides operator visibility and control

Presents multiple APIs
◦ GENI API: used by GENI experimenter tools (Flack, omni); example after this slide
◦ ORCA API: used by the Flukes experimenter tool
◦ Management API: used by the Pequod administrator tool

Elements of software (ORCA)
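As an example of the GENI API path, an experimenter-side omni call against a rack's SM might look like this; the aggregate URL is a placeholder, since the actual endpoint URLs are not given on the slide.

# Placeholder aggregate URL; substitute the rack SM's GENI AM endpoint
omni.py -a https://<rack-sm-geni-am-url> getversion
omni.py -a https://<rack-sm-geni-am-url> listresources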

Building network topologies

[Architecture figure: user-facing ORCA agents accept topology requests specified in NDL and compute an embedding across sites through brokering services, joining cloud hosts with network control, virtual colo, a virtual network exchange, and campus-net-to-circuit-fabric links into an OpenFlow-enabled L2 slice topology. The slice owner may deploy an IP network (OSPF) into the slice.]

1. Provision a dynamic slice of networked virtual infrastructure from multiple providers, built to order for a guest application.

2. Stitch slice into an end-to-end execution environment.

An actor encapsulates a piece of logic
◦ Aggregate Manager (AM) – owner of the resources
◦ Broker – partitions and redistributes resources
◦ Service Manager (SM) – interacts with the user

A Controller is a separable piece of logic encapsulating topology embedding and presenting remote APIs to users

A container stores some number of actors and connects them to
◦ The outside world, using remote API endpoints
◦ A database for storing their state

Any number of actors can share a container
A controller is *always* by itself

ORCA Actors and Containers

Tickets, leases and reservations are used somewhat interchangeably
◦ Tickets and leases are kinds of reservation

A ticket indicates the right to instantiate a resource
A lease indicates ownership of instantiated resources
The AM gives tickets to brokers to indicate delegation of resources
Brokers subdivide the given tickets into other, smaller tickets and give them to SMs upon request
SMs redeem tickets with AMs and receive leases, which indicate which resources have been instantiated for them

ORCA tickets, leases and reservations

Slices consist of reservations
Slices are identified by GUID
◦ They do have user-given names as an attribute

Reservations are identified by GUIDs
◦ They have additional properties that describe constraints and details of resources

Each reservation belongs to a slice
Slice and reservation GUIDs are the same across all actors
◦ Ticket issued by a broker to a slice
◦ Ticket seen on the SM in a slice becomes a lease with the same GUID
◦ Lease issued by an AM for a ticket to a slice

ORCA reservations and slices

ORCA Reservation flow

ORCA actor configuration
◦ ORCA looks for configuration files relative to the $ORCA_HOME environment variable (directory listing after this slide)
◦ /etc/orca/am+broker-12080
◦ /etc/orca/sm-14080

ORCA controller configuration
◦ Similar, except everything is in reference to $CONTROLLER_HOME
◦ /etc/orca/controller-11080

ORCA Configuration files (1/3)
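On the tutorial VM you can look at each container's home directly; the paths are the ones listed above, and the layout inside them is described on the next two slides.

ls /etc/orca/am+broker-12080
ls /etc/orca/sm-14080
ls /etc/orca/controller-11080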

Actor configuration
config/orca.properties – describes the container
config/config.xml – describes the actors in the container
config/runtime/ – contains keys of actors
config/ndl/ – contains NDL topology descriptions of actor topologies (AMs only)

ORCA Configuration files (2/3)

Controller configuration
config/controller.properties – similar to orca.properties, describes the container for the controller
geni-trusted.jks – Java truststore with trust roots for the XMLRPC interface to users
xmlrpc.jks – Java keystore with the keypair for this controller to use for SSL (keytool sketch after this slide)
xmlrpc.user.whitelist, xmlrpc.user.blacklist – lists of user URN regex patterns that should be allowed/denied
◦ Blacklist parsed first

ORCA Configuration files (3/3)
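The two JKS files can be inspected with Java's keytool as a sanity check; the store passwords are deployment-specific, so expect a prompt.

# Run from $CONTROLLER_HOME; keytool prompts for the store password
keytool -list -keystore geni-trusted.jks
keytool -list -v -keystore xmlrpc.jks

A whitelist or blacklist line is a regex applied to the user URN; a hypothetical entry such as urn:publicid:IDN\+ch\.geni\.net\+user\+.* would admit (or deny) all users from that authority.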

ExoGENI ORCA Actor deployment

AMs and brokers have ‘special’ inventory slices
◦ AM inventory slice describes the resources it owns
◦ Broker inventory slice describes resources it was given

AMs also have slices named after the broker they delegated resources to
◦ These describe resources delegated to that broker

Inventory and resource delegation

Inspect existing slices on different actors using Pequod
Create an inter-rack slice in Flukes
Inspect the slice in Flukes
Inspect the slice in
◦ SM
◦ Broker
◦ AMs

Close reservations/slice on various actors

Hands-on

http://www.exogeni.net http://wiki.exogeni.net

Thank you!
