
OpenFlow Supporting Inter-Domain Virtual Machine Migration

Bochra Boughzala*, Racha Ben Ali†, Mathieu Lemay‡, Yves Lemieux† and Omar Cherkaoui*

*University of Quebec at Montreal
Email: [email protected], [email protected]

†Ericsson, Montreal, Canada
Email: [email protected], [email protected]

‡Inocybe, Canada
Email: [email protected]

Abstract—Today, Data Center Networks (DCNs) are being re-architected in new ways to alleviate several emerging issues related to server virtualization and new traffic patterns, such as the limitation of bisection bandwidth and workload migration. However, these new architectures will remain either proprietary or hidden within administrative domains, and interworking protocols will remain in standardization for longer than the usually required time to market. Therefore, interworking cloud DCNs to provide federated clouds is a very challenging issue that can potentially be alleviated by a software-defined networking (SDN) approach such as OpenFlow.

In this paper, we propose a network infrastructure as a service (IaaS) software middleware solution based on OpenFlow that abstracts DCN architecture specificities and instantly interconnects DCNs.

As a proof of concept, we implement an experimental scenario dealing with virtual machine migration, and we evaluate the network setup and migration delays. The IaaS middleware allows automating these operations, OpenFlow solves the problem of interconnecting heterogeneous Data Centers, and its implementation offers reasonable delay values.

I. INTRODUCTION

Emergent Data Center Networks (DCNs) are based on specific architectures and topologies which make them hard to interwork and interconnect. In fact, traditional DCNs are usually re-architected to alleviate several issues raised by the introduction of server virtualization and the emergence of new application traffic patterns in the clouds. Among these issues we cite the limitation of the bisection bandwidth and its inefficient utilization by spanning tree protocols, and the challenging support of live migration of virtual machines.

Therefore, traditional architectures, topologies and protocols are redesigned to alleviate these issues. For instance, a scale-out architecture of commodity switches with a fat-tree topology is able to provide full bisection bandwidth when properly combined with a multipath protocol such as Equal Cost Multi Path (ECMP). Furthermore, an identifier/locator split addressing topology provides: (1) a scalable addressing scheme for a high number of virtual machines (VMs) by summarizing hierarchical locators of physical nodes, and (2) agile VM migration by simply remapping the VM identifier to its new locator.

However, the specificities of these solutions are either proprietary, remain hidden inside administrative domains, or take a long time to be standardized, which is a major obstacle to rapidly deploying innovative services. One example is mobile thin-client applications that require very low delay; inter-domain VM migration to the cloud domain nearest to the user is then the solution to minimize this latency and provide a good user experience for the thin client. However, this solution cannot be implemented if several specifically re-architected cloud DCNs cannot interwork for that purpose. Another example is cloud bursting during unplanned traffic surges, where the excessive workload can be migrated to other underutilized clouds, for instance in different time zones or in regions with low energy costs. This is often referred to as "follow the sun and follow the wind". Consequently, providing federated clouds by interworking and properly interconnecting cloud DCNs will enable a plethora of new innovative services. One attractive solution, which we adopt in this paper, is the design of a network infrastructure as a service (IaaS) software middleware to provide this interworking. In fact, our software-based approach relies on OpenFlow software-defined networking (SDN) in order to provide the flexibility required to adapt to DCN specificities with very rapid deployment between administrative domains.

Furthermore, DCN equipment vendors usually provide proprietary closed solutions that target optimal performance within a single administrative domain, without any interworking with other cloud domains built using equipment from other vendors. Therefore, it is critical to abstract DCN resources in order to interconnect them even though they are very diverse and hidden from external domains. Using OpenFlow, we propose a generic solution for this abstraction based on a generic IaaS framework. We evaluate our solution using test-bed experimentation with two DCN topologies inspired by already proposed architecture models.

To describe a Data Center topology and its internal architecture, the Clos model and its special instance, the fat-tree, are usually used. Although each model solves a given problem, all model designs share the same purpose of introducing new connectivity features and optimizing Data Center performance and scalability and, in some cases, energy consumption, as in ElasticTree.


Most fat-tree topology models have at least three levels, and the number of ports of each switch is usually denoted by the fat-tree parameter "k". Furthermore, in all these models the DCN is usually a multi-rooted tree. Switches with different port densities and speeds compose the different levels of each tree. Core switches are placed as root nodes, aggregate switches are placed at the level below the roots, and at the lowest level of the tree we find the top-of-rack (ToR) switches that are directly connected to the racks of physical servers. On top of this general topology model, several DCN schemes were proposed for interconnecting network elements within the Data Center, each satisfying a different set of requirements. PortLand [1] and VL2 [2] are among the most popular DCN schemes referenced in the literature. PortLand [1] uses a fat-tree as the DCN topology. In this fat-tree, illustrated in Figure 1, link redundancy increases, and therefore aggregate bandwidth increases, as the root is approached at the aggregate level.

Fig. 1. Fat Tree

PortLand aims to avoid switch configuration by modifying all switches to forward traffic based on position-dependent pseudo-MAC (PMAC) headers. By addressing the VMs in this topology with the PMAC pattern pod.position.port.vmid, longest-prefix matches can be used to reduce the forwarding state in switches. However, VMs do not need to know about PMACs, since traditional actual MACs (AMACs) are translated to PMACs and vice versa at the ToR switches, which perform this MAC rewriting. A centralized fabric manager maintains the PMAC-to-AMAC mappings and a global connectivity matrix. VL2 [2] is based on a Clos topology with multi-rooted trees. The roots of the trees are called intermediate switches. All switches in VL2 are kept unmodified. Valiant Load Balancing (VLB) and Equal Cost Multi Path (ECMP) based on IP-in-IP encapsulation are used to balance the traffic fairly over the available links. VL2 also introduces host agents and a directory system acting as a centralized network manager and controller. In these DCN schemes, the scale-out topology is not always energy efficient; therefore ElasticTree [3], illustrated in Figure 2, was proposed to automatically turn links off and on depending on network usage. A power consumption reduction can be achieved using this scheme; however, it has to be traded off against performance. In [4], the authors provide a platform called Ripcord for rapidly prototyping, testing, and comparing different DCN schemes. Ripcord provides a common infrastructure and a set of libraries to allow quick prototyping of new schemes.
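As an illustration of this positional addressing, the short sketch below packs the pod.position.port.vmid fields into a 48-bit MAC string. The bit widths (16/8/8/16) are assumed from the PortLand paper; the function itself is ours, not PortLand code.

```python
def encode_pmac(pod, position, port, vmid):
    """Pack a PortLand-style positional pseudo-MAC (PMAC).

    Bit layout assumed from the PortLand paper: 16-bit pod, 8-bit position,
    8-bit port, 16-bit vmid -> one 48-bit MAC address.
    """
    value = (pod << 32) | (position << 24) | (port << 16) | vmid
    octets = [(value >> (8 * i)) & 0xFF for i in reversed(range(6))]
    return ":".join("%02x" % o for o in octets)

# Example: VM 1 behind port 2 of the ToR switch at position 0 in pod 3.
# All switches of pod 3 can be reached with a single prefix match on "00:03".
print(encode_pmac(pod=3, position=0, port=2, vmid=1))  # 00:03:00:02:00:01
```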

Fig. 2. Elastic Tree

Considering the different requirements of different service providers, heterogeneous DCN architectures (VL2, PortLand, ElasticTree, etc.) will co-exist in different DCNs, making their interconnection hard. Therefore, in our work we provide some directions for providing this connectivity between heterogeneous DCNs adopting different designs. In our approach, we identify the common characteristics of these heterogeneous DCN schemes and abstract them to a generic level so that they can easily be represented using high-level policy rules that are translated into low-level OpenFlow rules. These rules are pushed, preferably proactively, to the DCN elements to build the inter-DCN connectivity. This abstraction of connectivity, regardless of topologies, OSI layers and their related protocols, network vendors, etc., is achieved thanks to an OpenFlow-based DCN model described in the next section.

A. OpenFlow-based DCNs

OpenFlow (OF) [5] is a practical approach that opens the forwarding plane hardware resources (the forwarding table in OF 1.0) of switches from different vendors to be controlled by a remote, separate OF controller. The communication between the controller and the switch forwarding plane is done using the OpenFlow protocol over a secure TCP channel. Since DCN schemes are designed to bypass existing control plane protocols (simple layer 2 forwarding with spanning tree loop avoidance), OpenFlow appears as an attractive and easy solution to prototype and implement the new connectivity features of these DCN schemes. Besides, since the OF controller is centralized, it has a global view of the whole network, and therefore end-to-end forwarding paths, either optimized or customized for a specific requirement, can be easily established. For instance, in order to support a VM migration between two heterogeneous DCNs without connectivity interruption, specific forwarding paths can be re-routed by directly pushing the suitable OF rules into the suitable switch forwarding tables. In this case, we assume that these direct OF rules related to inter-DCN VM migration do not conflict with existing rules related to specific DCN scheme connectivity. This can be achieved using a specific policy rule conflict resolver. A feature that is usually missing in DCN schemes is the isolation of DCN connectivity between multiple tenants of the cloud. OpenFlow, using a FlowVisor [6], is able to provide basic virtualization of the DCN, which can be sliced into different flowspaces. Furthermore, in a virtualization context, OpenFlow-enabled forwarding elements can also include software (eventually hardware-accelerated) virtual switches at the hypervisor level.

Therefore, an OpenFlow-based DCN is composed of the following elements.

- The OpenFlow controller can be hosted on a separate server reachable over an IP network in the control plane. The controller dictates the behavior of the OF switches connected to it by populating their flow tables, either proactively when establishing basic DCN connectivity or reactively when a new flow arrives. In particular, in the reactive mode, when a packet arrives at an OF switch that has not yet established an OF rule matching that packet, the packet is sent to the controller, which decides what to do with it. The controller then pushes the OF rule into the appropriate switches so that subsequent packets of the same flow will not be sent to the controller again. The concept of a flow has a very wide definition, and its granularity spans from a very fine-grained flow such as a particular TCP connection to a coarse-grained flow such as a VLAN or an MPLS label switched path. A widely used open-source OF controller is NOX [7].

- The OpenFlow Switches (OFS) are the clients of the controller. They join the OF network by connecting to its controller over a secure TCP channel and exchanging Hello messages. The OpenFlow protocol specifies the message exchanges between the OFS and the OF controller. To maintain connectivity in the absence of network traffic, the controller and the switch exchange an 'echo request' and an 'echo reply' every 5 seconds. For every newly arriving flow of packets, the switch encapsulates the first packet of the flow inside an OpenFlow message (called packet-in) and sends it to the controller. The controller responds with 'packet-out' and 'flow-mod' messages to set up the new flow entry corresponding to this flow of packets (this exchange is illustrated in the sketch after this list).

- The OpenFlow Virtual Switches (OVS) [8] are software virtual switches that reside in the hypervisor space and understand the OpenFlow protocol. Instead of connecting physical machines, they connect virtual machines on the same hypervisor. A widely used open-source virtual switch is Open vSwitch, a virtual switch for open-source hypervisors such as QEMU, KVM or Xen.

- The FlowVisor [6] acts as an OF proxy server between the switches and multiple controllers, each controlling a specific flowspace of the network. Slices can be partitioned using flowspaces and controlled by different controllers. We may also have an overlapping flowspace defined for a single controller, for instance to monitor part of or the whole physical network. In the case of a sliced network, each slice or virtual network has its own controller.
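To make the packet-in / flow-mod / packet-out exchange of the reactive mode concrete, here is a minimal controller sketch written against POX, an open-source Python sibling of NOX. It is an illustration of the message flow only, not the control logic used in our experiments (which pushes rules proactively through the IaaS application).

```python
# Minimal reactive OpenFlow 1.0 controller sketch (POX); illustration only.
from pox.core import core
import pox.openflow.libopenflow_01 as of

def _handle_PacketIn(event):
    packet = event.parsed                # first packet of an unknown flow (packet-in)
    msg = of.ofp_flow_mod()              # flow-mod: install an entry for this flow
    msg.match = of.ofp_match.from_packet(packet, event.port)
    msg.idle_timeout = 10                # entry expires when the flow goes idle
    msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    msg.data = event.ofp                 # release the buffered packet (acts as packet-out)
    event.connection.send(msg)           # subsequent packets match in the switch itself

def launch():
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)
```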

OpenFlow appears as an attractive and flexible solution to define the forwarding logic of switches in DCNs using different schemes such as VL2 and PortLand. However, it may reveal some scalability limitations due to the huge number of data plane forwarding rules that have to be established. Therefore, a complexity evaluation of the number of forwarding rules is provided in a later section of this paper. This OpenFlow environment is evaluated through experimentation based on several topologies with different levels of complexity. We evaluate OpenFlow by using it on a set of activated switches representing a Data Center built on a tree topology.

B. Virtual Machine Migration

The interest of using virtualization technologies lies in the ability to perform several operations that make resource management more flexible. Virtual machine migration is the cloud operation that we focus on. However, there are other virtual machine-based operations, such as cloning a virtual machine to avoid repeating the same installation, or restoring a virtual machine in case an incident happens. A virtual machine (VM) migration can be performed inside the same Data Center or between remote Data Centers [9]. This operation mainly seeks to ensure load balancing and to optimize resource usage. These two goals are Data Center-oriented. Since the VM is running as a server, it is providing a service, so another reason justifying VM migration is to be as near as possible to the clients in order to offer a better quality of service by reducing the delay. In this case the objective of migrating a VM is client-oriented. There are cases where the VM migration can be critical, for example when the VM is running an HTTP server with several TCP connections or when it is streaming a video sequence. In such a situation we have to keep it running with the same IP address while migrating, so as not to lose the established user connections. We also have to buffer its context on the host physical machine and send it to the target one. Moving VMs leads to the VM location issue. Solutions have been proposed to locate the VMs, and the most common property of these solutions is the use of a mapping system between a fixed address and a variable address. In VL2, these two types of address are the AA (Application Address) and the LA (Location Address). In PortLand, the AMAC (Actual MAC) and the PMAC (Positional Pseudo MAC) are used to resolve the VM location issue. A mapping table is maintained at the fabric manager, a centralized controller of the network. Note that VL2 handles the problem at layer 3 with IP-in-IP encapsulation, while PortLand handles it at layer 2 based on MAC addresses. However, in both implementations the fixed address is used to identify the VM and the variable address is used to locate the VM, since it is able to move.
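The common idea behind both mapping systems can be sketched as a small directory that keeps the fixed identifier stable and only remaps its locator when the VM moves. The class below is a hypothetical illustration, not the VL2 or PortLand implementation.

```python
# Hypothetical identifier-to-locator directory, sketching the common idea
# behind VL2 (AA -> LA) and PortLand (AMAC -> PMAC) mapping systems.
class LocatorDirectory:
    def __init__(self):
        self._locator = {}                   # fixed identifier -> current locator

    def register(self, identifier, locator):
        self._locator[identifier] = locator

    def resolve(self, identifier):
        return self._locator[identifier]     # used on the data path to rewrite/encapsulate

    def migrate(self, identifier, new_locator):
        # A migration only remaps the identifier; established sessions keep
        # addressing the identifier, so connectivity survives the move.
        self._locator[identifier] = new_locator

directory = LocatorDirectory()
directory.register("vm-42", "pod1.tor3.port2")   # the identifier stays fixed
directory.migrate("vm-42", "pod7.tor1.port4")    # only the locator changes
```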

C. Our Approach

We aim to abstract the Data Center structure in order to be able to perform inter-Data Center operations. For example, migration becomes easier to do since it is not constrained by any specificity of the way Data Centers are built. Based on rules, we can set up the network configuration to establish the required connectivity. In this implementation we simply exploit the advantages of OpenFlow: its lightness, simplicity and flexibility. We will show the configuration flexibility: no matter whether we are using PortLand or VL2, we can define the appropriate rules for establishing the connectivity required for an inter-Data Center operation launched on demand. Our solution uses OpenFlow with the abstraction structure required to be generic and independent of how the Data Center works on the inside. Even if we do not know how the Data Center is assembled, we must be able to discover the appropriate rules in an easy way. To define these rules, we use a network control application based on the IaaS framework. The IaaS application creates the OpenFlow rules to establish the network connectivity for a specific operation. The switches receive these rules and update their flow tables; obviously they must be able to understand and execute the rules sent by the IaaS application. In order to make the OpenFlow rules easier and faster to discover, we design a resource description that contains all the resources residing in the Data Center: virtual machines, OpenFlow switches, Open vSwitches, hosts (the hypervisors hosting the virtual machines), and so on. The Data Center topology is derived from this description: we have all the links connecting each pair of interfaces of all the Data Center devices. We define two levels of rules: global rules and specific rules.

- Global rules are topological; they are expressed in the Data Center resource description. They describe the network structure and are related to the Data Center topology. An example of a global rule is defining how many levels of switches are involved in connecting one server to another. When an operation is started, these rules are used to generate the specific rules that will be translated into valid OpenFlow rules (i.e., flow entries) to make the operation work. The global rules are therefore high-level rules that we have to learn in order not to depend on rules specific to a particular implementation (such as VL2 or PortLand) in a specific Data Center. The global rules are topologically related to the Data Center and are used to create the specific rules.

- Specific rules describe how operations will be realized in the hardware of the Data Center. These rules are meant to be instantiated in the network as OpenFlow rules. The OpenFlow rules are then pushed into the appropriate switches, which will handle the traffic generated by the launched operation (see the sketch after this list).
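As an illustration of this two-level translation, the following sketch expands a topological (global) path rule into per-switch (specific) entries, two per switch, one per direction. The data model and function names are hypothetical; they only mirror the rule counts reported in Section III.

```python
# Hypothetical sketch: expand a topological (global) rule into per-switch
# (specific) rules, ready to be pushed as OpenFlow flow entries.
def expand_path_rule(path, src_mac, dst_mac):
    """path: ordered list of (switch, in_port, out_port) hops between two hosts."""
    specific_rules = []
    for switch, in_port, out_port in path:
        # Forward direction: traffic towards the destination host.
        specific_rules.append({"switch": switch, "match": {"dl_dst": dst_mac},
                               "action": {"output": out_port}})
        # Reverse direction: traffic back towards the source host.
        specific_rules.append({"switch": switch, "match": {"dl_dst": src_mac},
                               "action": {"output": in_port}})
    return specific_rules

# A path crossing three switches yields 3 x 2 = 6 flow entries, which is the
# per-path count observed in the experiments of Section III.
rules = expand_path_rule([("tor1", 1, 3), ("agg1", 2, 4), ("core1", 1, 2)],
                         src_mac="00:00:00:00:00:01", dst_mac="00:00:00:00:00:02")
assert len(rules) == 6
```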

II. IAAS AS DISTRIBUTED CONTROLLER

In our architecture, high-level rules are translated into OpenFlow rules and pushed to data plane elements. In this context, NOX, a widely used OpenFlow controller, can be used to push rules down to all data plane elements. However, NOX is based on a centralized control logic requiring direct connections to data plane elements to build global network views. For this reason, maintaining and controlling the huge number of network states of the large number of data plane elements that scale out the Data Center Network can overwhelm a control plane based solely on NOX. Therefore, we propose to use a distributed middleware framework based on an IaaS design that can either control the data plane elements directly using the OF protocol or indirectly by carefully generating and parameterizing several NOX controllers. In either case, the IaaS middleware distributes the control logic of all (OF-based) data plane elements over multiple controllers. Each controller is responsible for a dynamically adjustable partition of data plane elements and may use appropriate aggregation to provide fewer, generalized network states when details are not required.
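A schematic sketch of this partitioning idea is given below. The class and method names are hypothetical, since the paper does not expose the middleware's interfaces; the sketch only illustrates dynamically adjustable OF domains assigned to controller instances.

```python
# Hypothetical sketch of the distributed control logic: the IaaS middleware
# partitions data plane elements over several controller instances and can
# rebalance those partitions dynamically.
class DistributedIaaSController:
    def __init__(self):
        self.domains = {}                        # controller id -> set of switch ids

    def assign(self, controller_id, switch_id):
        self.domains.setdefault(controller_id, set()).add(switch_id)

    def rebalance(self, switch_id, new_controller_id):
        for members in self.domains.values():    # remove from its current OF domain
            members.discard(switch_id)
        self.assign(new_controller_id, switch_id)

iaas = DistributedIaaSController()
iaas.assign("nox-dc1", "tor1")
iaas.assign("nox-dc1", "agg1")
iaas.assign("nox-dc2", "tor7")
# Keep switches frequently involved in long-distance VM migrations in one OF domain.
iaas.rebalance("tor7", "nox-dc1")
```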

A. The size of the OpenFlow rule space

OpenFlow can be used to abstract the control of the data plane of multiple network components. However, it faces some challenging limitations regarding the scalability of its generic and flexible flow-based forwarding, since this forwarding is based on matching a large number of packet fields of multiple protocols at different layers (a twelve-tuple in OF 1.0 and more in OF 1.1). More specifically, the OpenFlow protocol supports two types of data plane-level OpenFlow rules: (1) exact match rules and (2) wildcard rules. Exact match rules are usually pushed by the controller into the SRAM of the data plane, and the lookup is performed by applying efficient hash functions. However, worst-case lookup speed can be very poor due to hash collisions, accentuated by the huge number of exact match rules that are added for each L4 'micro-flow' connection. Worse yet, lookup performance in a virtualized Data Center with multiple VMs per server is much more challenging: for instance, a highly virtualized server hosting up to 10 VMs will generate, on average, 10 times the number of concurrent flows of a single VM. Therefore, the number of exact match rules in the aggregate and core switches will be extremely high, thus significantly jeopardizing data plane forwarding performance. In contrast, wildcard rules sacrifice the flexibility of the fine granularity of exact match rules by matching only specific bits in the twelve-tuple fields. Since the whole flow space can be defined using a few wildcard rules, these rules can scale well in aggregate and core switches. Wildcard rules are pushed by the controller into the TCAM, which provides fast one-clock-cycle lookups. However, due to its size limitation, cost and power consumption, the use of TCAM has to be optimized carefully.

Furthermore, as stated above, wildcard rules sacrifice the flow granularity of OpenFlow and therefore limit the flexibility of implementing some new Data Center connectivity features such as multipath forwarding or QoS. For instance, as in [10], we may want to wildcard the source address so as to use only 10% of the TCAM space that per-source/destination rules would use. However, this prevents benefiting from load balancing of flows with different source addresses over multiple paths.
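To make the exact-match/wildcard distinction concrete, the sketch below builds one fully specified match for a single TCP micro-flow and one coarse match that pins only the destination and leaves the source (and every other unset field) wildcarded. It uses POX's OpenFlow 1.0 bindings purely for illustration.

```python
# Sketch of exact-match vs. wildcard OpenFlow 1.0 rules (POX bindings).
import pox.openflow.libopenflow_01 as of
from pox.lib.addresses import IPAddr, EthAddr

# (1) Exact match: the header fields identifying one TCP micro-flow are all
#     pinned; such rules typically end up in SRAM hash tables.
exact = of.ofp_match(in_port=1,
                     dl_src=EthAddr("00:00:00:00:00:01"),
                     dl_dst=EthAddr("00:00:00:00:00:02"),
                     dl_type=0x0800, nw_proto=6,
                     nw_src=IPAddr("10.0.0.1"), nw_dst=IPAddr("10.0.0.2"),
                     tp_src=45000, tp_dst=80)

# (2) Wildcard rule: only the destination is matched; the source and all
#     other unset fields stay wildcarded. A few such rules cover the flow
#     space and fit in TCAM, at the cost of per-flow flexibility (e.g.
#     per-source multipath load balancing).
coarse = of.ofp_match(dl_type=0x0800, nw_dst=IPAddr("10.0.0.2"))
```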

Besides the OF data plane scalability/flexibility tradeoff, the control plane of OF is assumed to be logically centralized, which raises another scalability issue regarding the number of network states maintained for the large number of data plane elements controlled by the same logically centralized controller.

Page 5: OpenFlow supporting inter-domain virtual machine migration

MPLS Backboneor

InternetCore Routers

Aggregate Switches

Edge/ToR switches

Core Routers

Aggregate Switches

Edge/ToR switches

DataCenter1(OpenFlow Network)

DataCenter2(OpenFlow Network)

OpenFlow Connection

OpenFlow ConnectionNOX

OpenFlow Controller

NOXOpenFlow Controller

IaaS Engine

Activ

ate –

Instan

tiate

- Rec

onfig

ure

Activ

ate

– In

stant

iate

- Rec

onfig

ure IaaS Engine

Intra Cloud VM MigrationInter Cloud VM Migration

Distributed IaaS

Fig. 3. InterCloud Virtual Machine Migration

In contrast, distributed control planes require a lot of state to be synchronized across many devices. Therefore, as in traditional hierarchical-routing-based packet networks (based on OSPF or IS-IS, for instance), partitioning and aggregation are among the keys to a scalable control plane. Each OF controller will maintain the network states of a well-defined OF domain. Partitioning into multiple OF domains will depend on several design requirements that we will try to partially automate in our distributed IaaS controller, so that, for instance, the virtual networks connecting servers frequently involved in long-distance VM migrations will be confined to the same OF domain. The reason is to re-establish the connectivity between a migrating VM and its correspondent nodes as quickly as possible using the centralized controller, rather than waiting for the propagation of distributed state across the involved devices. Moreover, specific QoS treatment can be applied to the migrated flows.

B. IaaS controller to distribute OF rules

Infrastructure as a Service (IaaS) is generally defined by NIST as the capability provided to the customer to provision processing, storage and networking, on which the customer is able to deploy and run his own software, including operating systems and applications. The customer does not manage the underlying cloud infrastructure but has control over the operating systems, storage and deployed applications, and possibly limited control over a selected number of networking components. The two major technologies that enable IaaS cloud computing are virtualization and elastic computing, but they are generally considered at the server level only. In our work, we extend these technologies to the network level, thus providing network virtualization and elastic networking based on related work.

III. EXPERIMENTATION AND RESULTS

Firstly, we measure the time required to perform the VM migration, and then we measure the time needed to set up the network. To perform a VM migration we must have two hypervisors with access to a storage server where the virtual machine images and their configuration files are available. We used two Xen hypervisors with an NFS server. The time required to migrate a virtual machine already running on one hypervisor to the destination is 40 seconds. To set up the OpenFlow-based Data Center, we implement our solution and evaluate it using Mininet, a Linux-based virtual machine which provides a scalable platform for emulating OpenFlow networks via lightweight virtualization. Using Mininet we can create networks on a single physical machine: we can activate as many OpenFlow switches as we want, and we are also able to generate hosts and link them to the switches. We also define the topology on which the network will be built. We tried multiple strategies with different topologies. We rely on our generic resource description and our IaaS application, which activates NOX instances with the required components and also automatically defines the OpenFlow rules and pushes them to the switches. The input of the IaaS application is a resource descriptor file containing all the Data Center devices. The topology is described in this file; it is composed of all the links existing between the various pieces of equipment.

In the first topology, we generate an OpenFlow network built on two levels of switches. The network contains four hosts and three switches: two aggregate switches and a core router. The lowest switch level is based on Open vSwitches.
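A sketch of how such a two-level topology can be described with Mininet's Python API is shown below. Switch and host names and the controller address are illustrative; this is not the resource descriptor file consumed by our IaaS application.

```python
# Sketch: the two-level test topology (4 hosts, 2 edge Open vSwitches, 1 core
# OpenFlow switch) expressed with Mininet's Python API.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import Topo

class TwoLevelTopo(Topo):
    def build(self):
        core = self.addSwitch('s1')                       # core OpenFlow switch
        for i in (1, 2):                                  # two edge (Open vSwitch) switches
            edge = self.addSwitch('s%d' % (i + 1))
            self.addLink(edge, core)
            for j in (1, 2):                              # two hosts per edge switch
                host = self.addHost('h%d' % ((i - 1) * 2 + j))
                self.addLink(host, edge)

if __name__ == '__main__':
    net = Mininet(topo=TwoLevelTopo(),
                  controller=lambda name: RemoteController(name, ip='127.0.0.1'))
    net.start()
    net.pingAll()     # fails until the IaaS application pushes the flow entries
    net.stop()
```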

At the beginning the network is not configured and the hosts cannot reach each other. To establish connectivity between two hosts passing through the core router, we have to push 6 rules to the switches so that packets are forwarded in both directions. Connecting two hosts without passing through the core router implies that the hosts are linked to the same aggregate switch; in this case, we have to push only 2 rules into the involved switch. To configure full network connectivity we therefore need to insert 28 rules (4 host pairs crossing the core at 6 rules each, plus 2 same-switch pairs at 2 rules each). Since we aim to establish only the connectivity required between the two hypervisors involved in the migration (the source hypervisor and the target hypervisor), we assume that the network is already configured and we have only to push the appropriate rules to connect two specific physical machines. In that case we have to push only 6 rules.

[Figure 4 shows the two-level test network: a core OpenFlow switch connecting two Open vSwitches, each hosting VMs.]

Fig. 4. Two-levels based network

TABLE I
THE NUMBER OF FLOW ENTRIES INSTALLED IN A 2-LEVEL HIERARCHICAL TOPOLOGY

Switch Type    Number of Instances    Number of Entries
Core           1                      2
Aggregate      0                      0
Top of Rack    2                      4

We launched the application and measured the time required for the switches to apply the newly pushed rules. At the end of this experiment we obtained 16 seconds.

[Figure 5 shows the three-level test network: a core OpenFlow switch connecting two aggregate OpenFlow switches, each of which connects two Open vSwitches hosting VMs.]

Fig. 5. Three-levels based network

In the second case, we increase the level of complexity of the network. We build a network based on a topology with 3 levels of switches. The network contains 8 hosts and 7 switches organized as follows: 4 top-of-rack switches (Open vSwitches), 2 aggregate switches and a core router. If we want to connect two hosts passing through the 3 levels of switches, we have to define 10 rules just for one flow of packets. To configure the whole network we need to define 128 rules.

Using our IaaS application to automate the network setup, we pushed the rules, and it takes 26 seconds to establish the connectivity between two specific hosts.

The rules instantiated in the flow tables are injected in a proactive way, so the connectivity is preconfigured to support the migration operation.

TABLE II
THE NUMBER OF FLOW ENTRIES INSTALLED IN A 3-LEVEL HIERARCHICAL TOPOLOGY

Switch Type    Number of Instances    Number of Entries
Core           1                      2
Aggregate      2                      4
Top of Rack    4                      4

By using a centralized controller we can be more agile, since it reacts faster: instead of waiting for the machine to migrate to its destination and send ARP packets, which introduces additional latency, we proactively install the OpenFlow instructions through the IaaS application. Mainly, we push flow entries: a set of specific rules. These rules are independent of the VL2 or PortLand implementation; they abstract the way the migration is handled. Note that virtual machine migration is not only inter-Data Center; it can also be intra-Data Center, and the rules are abstract enough to handle both types of operations. We note that the number of rules depends on the number of switches and the number of servers. The main problem we can encounter is having invalid or conflicting rules. The time it takes to establish connectivity increases with the number of rules.

We determine how long it takes to update the flow tables in the two cases, and we determine the number of rules in each case. We measure the time required to push the rules into the involved switches, and we give the ratio of the network setup duration to the migration duration (average(17/40, 26/40) = 0.53).

The table below summarizes the results we obtained through our experimentation. Note that in both cases we have k = 3.

- k: number of ports per switch
- n: number of servers
- m: number of switches
- l: number of levels of switches
- r: total number of rules
- v: average number of VMs per server (see Section IV)
- t: setup time (in seconds)

TABLE III
RESULTS COMPARISON

     Topology #1    Topology #2
n    4              8
m    3              7
l    2              3
r    6              10
v    2              2
t    17             26

IV. FORMULATION

In order to formulate the problem we define some parameters. The list below presents the definitions of the main parameters describing a Data Center Network:

- f: average number of flows per VM
- v: average number of VMs per server
- k: number of ports per switch for PortLand
- s: average number of servers per ToR edge switch (= k/2 for PortLand)
- de: number of ports per (ToR) edge switch (2 uplink + s downlink) for VL2
- da: number of ports per aggregate switch (da/2 uplink + da/2 downlink) for VL2
- di: number of ports per intermediate (core) switch (1 Internet uplink + di downlink) for VL2
- ne: number of (ToR) edge switches
- na: number of aggregate switches
- ni: number of intermediate (core) switches
- Ee: average number of entries in each (ToR) edge switch
- Ea: average number of entries in each aggregate switch
- Ei: average number of entries in each intermediate (core) switch

TABLE IV
ANALYTICAL FORMULATION OF THE AVERAGE NUMBER OF ENTRIES IN EACH SWITCH (PORTLAND VS. VL2)

       VL2                               PortLand
ns     s·da·di/4                         k^3/4
ne     da·di/4                           k^2/2
na     di                                k^2/2
ni     da/2                              k^2/4
Ee     2vs                               2(k^2/2 - 1) + vk/2
Ea     ni + ne = (da/2) + (da·di/4)      2(k^2/2 - 1) + vk^2/4
Ei     ne = da·di/4                      k^2/2 - 1

If we consider the second topology, we can determine the following parameters: s = 2, v = 2, da = 3, di = 2 and k = 3. Based on these values, we obtain the final results below:

TABLE V
NUMERICAL APPLICATION OF THE FORMULATION

       VL2     PortLand
ns     3       6.75
ne     1.5     4.5
na     2       4.5
ni     1.5     2.25
Ee     8       10
Ea     3       11.5
Ei     1.5     3.5
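As a cross-check (not part of the original evaluation), the short script below evaluates the Table IV formulas with these parameter values and reproduces the Table V figures:

```python
# Cross-check of Tables IV and V: evaluate the analytical formulas with
# s = 2, v = 2, da = 3, di = 2 and k = 3 (Python 3, true division).
s, v, da, di, k = 2, 2, 3, 2, 3

vl2 = {
    "ns": s * da * di / 4,          # 3
    "ne": da * di / 4,              # 1.5
    "na": di,                       # 2
    "ni": da / 2,                   # 1.5
    "Ee": 2 * v * s,                # 8
    "Ea": da / 2 + da * di / 4,     # 3
    "Ei": da * di / 4,              # 1.5
}

portland = {
    "ns": k ** 3 / 4,                              # 6.75
    "ne": k ** 2 / 2,                              # 4.5
    "na": k ** 2 / 2,                              # 4.5
    "ni": k ** 2 / 4,                              # 2.25
    "Ee": 2 * (k ** 2 / 2 - 1) + v * k / 2,        # 10
    "Ea": 2 * (k ** 2 / 2 - 1) + v * k ** 2 / 4,   # 11.5
    "Ei": k ** 2 / 2 - 1,                          # 3.5
}

for name in vl2:
    print("%-3s VL2=%-5g PortLand=%g" % (name, vl2[name], portland[name]))
```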

V. CONCLUSION

Data Centers are huge and complex networks, and their configuration is even more complex. However, we can simplify this task thanks to the OpenFlow protocol and the IaaS framework. We addressed the need to make the configuration rules for Data Center interconnection generic. We showed that we are able to configure an OpenFlow Data Center on the fly regardless of its topology.

In this paper, we proposed an OpenFlow-based solution for remote Data Center interconnection. In our study, we defined an OpenFlow solution that appears to be a good fit for inter-Data Center connectivity, enabling inter-cloud operations. It offers an effective abstraction of the internal configuration of each Data Center. We showed that the solution, in addition to being generic, is feasible and takes into account the real constraints of an inter-cloud operation. We presented an experimental scenario that demonstrates the feasibility of this solution in enabling inter-Data Center virtual machine migration and in enhancing cloud-based services. Setting up the OpenFlow rules in the network takes on the order of 20 seconds, while the virtual machine migration requires 40 seconds; this ratio is interesting since it shows that the network setup takes about half of the time required for the VM migration.

REFERENCES

[1] R. Niranjan Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat, "PortLand: a scalable fault-tolerant layer 2 data center network fabric," ACM SIGCOMM Computer Communication Review, vol. 39, no. 4, pp. 39–50, 2009.

[2] A. Greenberg, J. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. Maltz, P. Patel, and S. Sengupta, "VL2: A scalable and flexible data center network," ACM SIGCOMM Computer Communication Review, vol. 39, no. 4, pp. 51–62, 2009.

[3] B. Heller, S. Seetharaman, P. Mahadevan, Y. Yiakoumis, P. Sharma, S. Banerjee, and N. McKeown, "ElasticTree: Saving energy in data center networks," in Proceedings of the 7th USENIX Conference on Networked Systems Design and Implementation. USENIX Association, 2010, pp. 17–17.

[4] B. Heller, D. Erickson, N. McKeown, R. Griffith, I. Ganichev, S. Whyte, K. Zarifis, D. Moon, S. Shenker, and S. Stuart, "Ripcord: a modular platform for data center networking," in Proceedings of the ACM SIGCOMM 2010 Conference. ACM, 2010, pp. 457–458.

[5] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74, 2008.

[6] R. Sherwood, G. Gibb, K. Yap, G. Appenzeller, M. Casado, N. McKeown, and G. Parulkar, "FlowVisor: A network virtualization layer," Technical Report OpenFlow-tr-2009-1, Stanford University, 2009.

[7] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, "NOX: towards an operating system for networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105–110, 2008.

[8] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in Proc. HotNets, October 2009.

[9] F. Hao, T. Lakshman, S. Mukherjee, and H. Song, "Enhancing dynamic cloud-based services using network virtualization," in Proceedings of the 1st ACM Workshop on Virtualized Infrastructure Systems and Architectures. ACM, 2009, pp. 37–44.

[10] A. Tavakoli, M. Casado, T. Koponen, and S. Shenker, "Applying NOX to the Datacenter," in Proc. HotNets, October 2009.