Exploring the Behaviour of Fine-Grain Management for Virtual Resource Provisioning⋆

Fernando Rodríguez-Haro, Felix Freitag, Leandro Navarro, and Rene Brunner

Polytechnic University of Catalonia, Jordi Girona 1-3, 08034 Barcelona, Spain
{frodrigu, felix, leandro, rbrunner}@ac.upc.edu

⋆ This work is supported in part by the European Union under Contracts SORMA EU IST-FP6-034286 and CATNETS EU IST-FP6-003769, the Ministry of Education and Science of Spain under Contract TIN2006-5614-C03-01, and the program PROMEP.

Abstract. The virtualization of resources in SMP machines with Xen offers automatic round-robin based coarse-grain assignment of VMs to physical processors and manual assignment via privileged commands. However, two problems can arise in Grid environments: (1) if Xen's sEDF scheduler assigns the VMs, then some processors could be over- or under-utilized and the VMs could receive more resources than specified, and (2) manual assignment is not feasible in a dynamic environment and also requires being aware of each node's heterogeneity. Our approach proposes an enhanced fine-grain assignment of an SMP's virtualized resources for Grid environments by means of a local resource manager (LRM). We have developed a prototype which adds a partitioning layer of subsets of physical resources. Our experimental results show that our approach achieves a flexible assignment of resources. At the same time, due to the fine-grain access, a more efficient resource assignment is achieved compared to the original mechanism.

Key words: Resource provisioning, local resource manager, virtual machine.

1 Introduction

Virtualization technologies have become an important research area for resource management in Grid computing [1]. Recent efforts have proposed and developed middleware that implements mechanisms to manage network and physical node virtualization. Most of these approaches leverage recent advances of Xen [2] and VMware [3] for LRMs, and VNET [4] for virtual networking. In current approaches it is more common to view a physical resource as a whole (e.g. an undividable CPU) without considering the particular hardware configuration at each node. The resources are therefore assigned in a coarse-grain fashion.

The problem is the resulting imbalance of the workload. In SMP nodes, for instance, Xen offers automatic coarse-grain assignment of VMs to physical processors. Each VM created is assigned to a physical processor in a round-robin fashion. Thus, some tasks must be done by an administrator to ensure the expected performance of the competing VMs. First, the SMP node architecture must be known. Second, the vcpu-pin command must be issued to assign VMs to a specific processor. And third, the weights needed by the simple Earliest Deadline First (sEDF) scheduler must be matched to user requirements, taking into account the VMs competing on the same processor.
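To make these manual steps concrete, the following is a minimal sketch (ours, not part of the paper's prototype) of how they could be scripted in Python with Xen's xm tool; the domain names and the 1:3 weight ratio are purely illustrative:

import subprocess

def pin_vm(domain, vcpu, cpu):
    # Pin a VM's virtual CPU to a physical processor (xm vcpu-pin).
    subprocess.call(["xm", "vcpu-pin", domain, str(vcpu), str(cpu)])

def set_sedf_weight(domain, weight):
    # Set the sEDF weight of a domain (xm sched-sedf -w).
    subprocess.call(["xm", "sched-sedf", domain, "-w", str(weight)])

# Hypothetical example: two VMs share processor 0 with a 1:3 weight ratio.
pin_vm("fabric1", 0, 0)
pin_vm("fabric3", 0, 0)
set_sedf_weight("fabric1", 1)
set_sedf_weight("fabric3", 3)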

We proposed the fine-grain management of virtual resources [5] with a Multiple Dimension Slotting approach (MuDiS). MuDiS has the following benefits. First, our approach acts on behalf of a custom policy defined by administrators. Second, resource providers are able to maximize the use of node resources using fine-grain assignment. Third, this fine-grain assignment allows better fulfilment of service level agreements (SLAs). And fourth, it allows Grid middleware to make better scheduling decisions by receiving accurate information about the internal resource usage in every resource provider.

The rest of this paper is structured as follows: In Sect. 2 we describe work related to our approach. In Sect. 3 we explain the fine-grain resource management approach and the prototype implementation. In Sect. 4 we report on several experiments in which we study the achieved behaviour, both with the original approach and with fine-grain resource management. Section 5 concludes and provides an outlook on future work.

2 Related Work

Current work which manages resources with the help of virtualization techniques enables the migration of VMs from over-utilized nodes to under-utilized nodes. This has led to new research challenges related to adaptation mechanisms in the scope of intra-node and inter-node resource managers. Our approach addresses intra-node adaptation by means of a local resource manager.

K. Keahey et al. [6] introduce the concept of a virtual workspace (VW), an approach which aims at providing a customized and controllable remote execution environment for Grid resource isolation purposes. The underlying technologies that support the prototype are virtual machines for the hardware virtualization (Xen and VMware) and the Globus Toolkit (GT). The interactions of resource provisioning follow VW descriptions. A VM state (running, shutdown, paused, etc.) and the main properties that a required VM must comply with, such as RAM size, disk size, and network, are defined. Currently, however, there is no discussion of the performance impact of multiple VMs running at the same time or of the consequences when SLAs are not fulfilled.

Nadyr Kiyanclar et al. [7] present Maestro-VC, a set of system software which uses virtualization (Xen) to multiplex the resources of a computing cluster for client use. The provisioning of resources is achieved by a scheduling mechanism which is split into two levels: an upper-level scheduler (GS) which manages VMs inside the virtual cluster, and an optional low-level scheduler (LS) per VM. The purpose is to incorporate information exchange between virtualized and native environments to coordinate resource assignment. The LS, however, is an optional mechanism and, if desired, must be explicitly supplied by the user.


David Irwin et al. [8] present Shirako, a system for on-demand leasing of shared networked resources in federated clusters. Shirako uses an implementation of Cluster-on-Demand for the cluster site manager component and SHARP [9] for the leasing mechanism. Leases are used as a mechanism for resource provisioning. Thus, intra-LRM adaptation cannot be made in an independent way and is only possible over long terms.

Our work is in the direction of Dongyan Xu et al. [10]. The authors propose the support of autonomic adaptation of virtual distributed environments (VDE) with a prototype of adaptive VIOLIN [11] based on Xen 3.0. They address challenges of dynamic adaptation mechanisms and adaptation decision making. Inter-LRM adaptation is based on a dynamic cross-domain migration capability; intra-LRM adaptation adjusts the resource shares of physical nodes according to VM usage. The difference is that our approach addresses intra-LRM fine-grain adaptation mechanisms within single physical resources.

3 Fine-grain Resource Management

3.1 The Management Component

Fine-grain resource management enhances the multiplexing mechanisms offered by Virtual Machine Monitors (VMMs). At the same time, it allows LRMs to manage the pinning mechanisms according to user requirements.

The management component runs in the first VM (known as the privileged domain or domain0). It carries out the usage accounting of every single resource and groups single physical resources into subsets. These subsets, which can be dynamically changed, are seen by the LRM upper level as pseudo-physical machines with minimal interfering I/O interruptions between each other. Thus, physical resources with different characteristics can be exploited by an LRM. These subsets of resources can be assigned to VMs with different application profiles (e.g. CPU intensive, network intensive, SMP requirements, or disk-I/O intensive).

In fine-grain resource management the VMM's sEDF scheduler works without changes in the VMs. With the help of pinning mechanisms, multiplexing occurs within subsets of resources. The difference to coarse-grain assignment is that the virtual division makes the VMs, and hence the running applications, behave according to the limitations of the assigned resources.

Figure 1 shows an example of the partitioning of physical resources. In Fig. 1a we can see the partition into 3 subsets, each one with different characteristics. These subsets are exposed to the LRM, and each one can be sliced with traditional approaches (e.g. VM1 and VM2 could each have 50% of every resource of subset 1). The management component must map these subset shares and assign them to virtualized resources through the VMM. In Fig. 1b we can see an example of a new configuration after a certain time n. A usage pattern could have shown that some resources were under- or over-utilized. For better performance or load balancing of the VMs, the LRM could have regrouped these VMs taking into account the observed application behaviour.
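As an illustration of this partitioning layer, the subsets could be represented as simple records that group physical resources and track the shares of the VMs placed on them. The following Python sketch is ours; the field names and the feasibility rule are hypothetical:

# Hypothetical representation of two subsets of physical resources.
subsets = {
    "subset1": {
        "cpus": [0],                          # physical processors
        "nics": ["NIC3"],                     # 1 Gb/s interface
        "disk": "SCSI",
        "vms": {"VM1": 0.5, "VM2": 0.5},      # per-VM CPU shares
    },
    "subset2": {
        "cpus": [1, 2],
        "nics": ["NIC2"],
        "disk": "SATA",
        "vms": {"VM3": 1.0},
    },
}

def feasible(subset):
    # A subset can host its VMs if the requested shares fit its CPU capacity.
    return sum(subset["vms"].values()) <= len(subset["cpus"])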


[Figure 1: two panels, (a) at time t and (b) at time t+1. Each panel shows a physical node (Processor1-Processor4; NIC1 10 Mb/s, NIC2 100 Mb/s, NIC3 1 Gb/s; IDE, SATA and SCSI disks) under a virtualization layer with multiple dimension slotting management, pinning frameworks, the LRM, and VMs VM1-VM5.]

Fig. 1. Example of subsets of physical resources at time t in (a), and at time t+n in (b).

3.2 Prototype

For evaluating the proposed approach we have developed a prototype of the LRM. The prototype has been implemented in Python.

The LRM aims to provide Grid middleware with the infrastructure for accessing a machine's subsets of virtualized resources in a transparent way. It exposes a simple-to-use standard service interface for running jobs and hides the details of the managed local resources. The implementation of the proposed architecture leverages technologies that are currently in continuous development, i.e. virtualization based on Xen, and Tycoon [12]. Tycoon is a market-based system for managing compute resources in distributed environments.

In the following, the main components of the LRM and their interrelations are explained in detail.

The Local Resource Manager (LRM) component offers public services via XML-RPC interfaces. Part of the actions that can be performed by the LRM are encapsulated in a TycoonAPI component which interfaces with Tycoon. The current prototype uses Tycoon for managing the creation, deletion, boot and shutdown of virtual machines, as well as for weighting the resources with its bidding mechanism.
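As an illustration, a service front end of this kind could be built with Python's standard XML-RPC support; the sketch below is ours, and the method names are hypothetical (the paper does not list the actual interface):

from SimpleXMLRPCServer import SimpleXMLRPCServer  # Python 2 module name

class LRMService:
    # Hypothetical public methods; the real LRM exposes its own set.
    def submit_job_plan(self, plan_text):
        # Would parse the job plan and drive VM creation via the TycoonAPI.
        return "accepted"

    def query_status(self):
        # Would report node capacity and per-subset usage to Grid middleware.
        return {"processors": 2, "subsets": 1}

server = SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_instance(LRMService())
server.serve_forever()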

The TycoonAPI component is part of the LRM architecture and has been designed to allow migration to other high-level VM resource managers. One of the open issues of our prototype that remains to be developed is a high-level VM resource manager as an alternative to Tycoon.

The design of the LRM includes other components that interface directly with Xen to perform tasks such as monitoring and CPU capacity management. The first traces VM performance to compute costs, and the second offers fine-grain CPU specifications for VMs in SMP architectures.
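For the monitoring part, one possible implementation (a sketch of ours, not the paper's code) samples each domain's accumulated CPU time from the output of xm list, whose columns in Xen 3.x are Name, ID, Mem, VCPUs, State, and Time(s):

import subprocess, time

def cpu_times():
    # Parse 'xm list': the sixth column is the accumulated CPU time in seconds.
    out = subprocess.Popen(["xm", "list"], stdout=subprocess.PIPE).communicate()[0]
    times = {}
    for line in out.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) >= 6:
            times[fields[0]] = float(fields[5])
    return times

def cpu_usage(interval=1.0):
    # Fraction of one CPU used by each domain over the sampling interval.
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    return dict((d, (after[d] - before[d]) / interval) for d in after if d in before)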


A txtLRM client interface is used for remote submission of jobs (among other actions, such as retrieving the output of executed jobs or querying the status of the resource provider). The steps involved in executing jobs with our prototype are:

First, the user defines a configuration file with the requirements of each job application, as follows (a parsing sketch follows these steps):

[JOB.1]
slaCPU = 1000        #expressed in MHz
slaCPUfloor = 0.10   #slaCPU tolerable percent degradation
slaBUDGETceil = 2.5  #spendable budget per GHz per unit of time
job = cputime.py     #application, source or binary
output = cputime.csv #output file
necessary = True     #means schedule obligation
[JOB.2]
slaCPU = 1500
...

Second, the user issues the action to parse and execute the job plan definitions:

python txtLRM.py --lrm=147.83.30.203 --jobplan=job6fabrics

Third, if the user requirements can be fulfilled by the resource provider, then several processes are executed, such as VM creation and booting, application deployment via SSH credentials, launching, and monitoring. Otherwise, the user receives information about the job requirements that cannot be fulfilled.

Finally, the user can retrieve the output files; if the job plan's applications have not yet finished, the user is informed accordingly.
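As promised above, here is a sketch (ours, with error handling omitted) of how a job plan in the format of the first step could be parsed; '#' starts an inline comment:

def parse_job_plan(path):
    # Returns e.g. {"JOB.1": {"slaCPU": "1000", ...}, "JOB.2": {...}}.
    jobs, current = {}, None
    for line in open(path):
        line = line.split("#")[0].strip()      # strip inline comments
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]               # new job section
            jobs[current] = {}
        elif "=" in line and current is not None:
            key, value = line.split("=", 1)
            jobs[current][key.strip()] = value.strip()
    return jobs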

4 Experiments

The behaviour and performance of the LRM prototype are interesting to study in order to assess its design and to identify empirically the needs for future developments. The experimental results are therefore an important part of the presented work and illustrate our main conclusions.

In our experiments the hardware of the resource provider node has a Pentium D 3.00 GHz (two processors), 1 GB of RAM, one hard disk (160 GB, 7200 rpm, SATA), and one network interface card (Broadcom NetXtreme Gigabit). The operating system is Fedora Core 4, and for virtualization we use Xen 3.0.2. The Tycoon client and auctioneer are version 0.4.1.

In order to stress each virtual workspace, we use an application that behaves as a CPU-intensive job. The user application executes 500 transactions, where each transaction is composed of math operations. Two experiments are discussed to evaluate our approach for fine-grain resource management.

– In the first experiment we create four VMs and compare the expected end times of the executed jobs in two settings: (a) requesting through the LRM interface the creation of the four VMs in the original Tycoon-Xen way, and (b) using the fine-grain assignment made by the LRM.


– In the second experiment we request the creation of six VMs using fine-grain resource management.

In the first setting of experiment one, we use our LRM prototype to create four VMs. The request for the creation of a virtual workspace includes CPU requirements (slaCPU) which must be expressed in Hertz. In this case the values are 300 MHz, 600 MHz, 900 MHz, and 1200 MHz for fabric1, fabric2, fabric3, and fabric4, respectively. When the creation process of the four virtual machines ends, we proceed to upload the user application to each VM. Finally, the custom benchmark is launched in all four VMs at the same time.

The CPU requirements are mapped (by the LRM prototype) to a percent share (0.0 – 1.0), and finally to Tycoon credits (or a weight value for the sEDF scheduler if we were interfacing directly with the VMM). Therefore, external Grid users (or components) expect a performance of 0.1, 0.2, 0.3, and 0.4 of a physical CPU share in fabric1, fabric2, fabric3, and fabric4, respectively (this will be achieved with the credit scheduler in future Xen versions). Furthermore, when we launch the user application (executing 500 transactions) in every VM, the external Grid components (e.g. a global Grid scheduler) would expect in principle that the order of completion should be fabric4, fabric3, fabric2, and finally fabric1.
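Assuming a linear mapping (the paper does not give the exact formula), the translation from slaCPU to shares can be sketched as follows; the 3000 MHz constant matches the processors of the test node:

CPU_MHZ = 3000.0  # clock speed of one physical processor in the test node

def mhz_to_share(sla_cpu_mhz):
    # Map a CPU requirement in MHz to a fraction (0.0 - 1.0) of one processor.
    return sla_cpu_mhz / CPU_MHZ

requests = {"fabric1": 300, "fabric2": 600, "fabric3": 900, "fabric4": 1200}
shares = dict((vm, mhz_to_share(mhz)) for vm, mhz in requests.items())
# shares == {"fabric1": 0.1, "fabric2": 0.2, "fabric3": 0.3, "fabric4": 0.4}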

Figure 2 shows the utilization of each processor for experiment one (original coarse-grain approach). There are three important aspects to observe from this experiment. First, notice that virtual machines are assigned to each processor in a round-robin fashion. The assignment is affected by the order of virtual machine creation (which was fabric1, fabric2, fabric3 and fabric4).

[Figure 2: two plots of CPU utilization (%) over time (s), roughly 0–105 s. Processor 0 runs fabric4, fabric2, idle time, and dom0; processor 1 runs fabric3, fabric1, idle time, and dom0.]

Fig. 2. Experiment 1: four virtual workspaces, assigned by the original coarse-grain approach, executing the same CPU-intensive job on a physical machine with two processors. The assignment of virtual machines is done in a round-robin fashion by the Xen VMM during creation. The four VMs are distributed over the two processors.

This is how the VMM's sEDF scheduler is programmed to behave by default. An LRM not aware of this will report incomplete or incorrect information about node capacity (such as the number of processors) to Grid middleware, as discussed later.


Second, a consequence of the first aspect is that the proportion of consumed physical resources does not correspond to the initial requirements. We observe from Fig. 2 that each share is translated to a new value in the domain of its processor. Applying proportional share (PS) we obtain the new weights for each fabric: 1/3 and 2/3 for fabric2 and fabric4, and 1/4 and 3/4 for fabric1 and fabric3, respectively. And third, the expected completion order is not fulfilled. Even though fabric4 has the highest share, fabric3 ends first. This is caused by the weights in each processor, and the observed order of finishing is fabric3, fabric4, fabric2, fabric1 (instead of fabric4, fabric3, fabric2, fabric1).
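The proportional-share arithmetic can be reproduced with a few lines of Python (our illustration of the effect, not code from the prototype):

def proportional_shares(shares_on_processor):
    # Normalize the requested shares of the VMs pinned to one processor.
    total = sum(shares_on_processor.values())
    return dict((vm, s / total) for vm, s in shares_on_processor.items())

# Round-robin placement from experiment 1:
proportional_shares({"fabric2": 0.2, "fabric4": 0.4})  # fabric2: 1/3, fabric4: 2/3
proportional_shares({"fabric1": 0.1, "fabric3": 0.3})  # fabric1: 1/4, fabric3: 3/4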

A closer look at the completion times is given in Fig. 3. We can see that as soon as a job ends (and stops consuming the resources of its VM share), at t=67.73 and t=75.56 in processor 1 and processor 0 respectively, the share of the other job on the same processor changes. This is due to the nature of the sEDF scheduler.

[Figure 3: transactions completed (0–500) versus time (s) for Fabric1–Fabric4; the jobs finish at t=67.73, 75.56, 101.57, and 102.14 s.]

Fig. 3. Transactions completed during job execution (original coarse-grain management). The time consumed per transaction is measured by the benchmark application.

In the second setting of experiment one, we use the proposed fine-grain resource management with the LRM prototype to request the same VM requirements. The utilization of each processor is shown in Fig. 4. The assignment by the fine-grain approach in the LRM meets the Hertz required for each VM. It can be seen that all VMs are created on processor 0. Even though different policies can be applied for load balancing, in this experiment we have instructed the LRM to allocate the VMs on one processor. In this case processor 1 is only used by Domain-0.

[Figure 4: two plots of CPU utilization (%) over time (s). Processor 0 runs fabric4, fabric3, fabric2, fabric1, idle time, and Dom0 over roughly 0–204 s; processor 1 shows only idle time and Dom0.]

Fig. 4. Experiment 1: four VMs assigned by fine-grain resource management through pinning mechanisms and the sEDF scheduler. The four VMs are assigned to one processor.

The completion times of the benchmark in the VMs assigned by the fine-grain approach are shown in Fig. 5. If we compare the times measured at the completion of each job with the results of the original setting, we notice that each job takes more time to end. However, this is the expected behaviour for VMs that receive accurate proportions of the physical processors. In fact, three of them last less than the expected execution time since, as soon as the job in the highest-weighted VM finishes, the remaining VMs use the unused share. Finally, we can see that with the fine-grain LRM the completion order, which is fabric4, fabric3, fabric2, fabric1, is according to the requested share.

[Figure 5: transactions completed (0–500) versus time (s) for fabric1–fabric4; the jobs finish at t=130.75, 156.52, 184.76, and 206.85 s.]

Fig. 5. Transactions completed during job execution (fine-grain approach). The sequence in the creation of VMs does not affect the expected performance; hence the execution time is according to the requested share.

In the second experiment, we apply the fine-grain approach to assess resource assignment efficiency and require the LRM to allocate six VMs. The CPU requirements are 300 MHz, 600 MHz, 900 MHz, 1200 MHz, 900 MHz, and 2100 MHz corresponding to fabric1, fabric2, fabric3, fabric4, fabric5, and fabric6, respectively. The LRM allocates four VMs, as in experiment one, to processor 0. The remaining VMs (fabric5 and fabric6) are assigned to processor 1.
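The fill-capacity policy mentioned in Sect. 5 can be sketched as a first-fit placement over the processors (our illustration; the prototype's actual policy code is not shown in the paper):

def fill_capacity(requests_mhz, cpu_mhz=3000.0, n_cpus=2):
    # Assign VMs to processors in request order, filling each processor
    # before moving to the next; returns {vm: cpu} or None if over capacity.
    used = [0.0] * n_cpus
    placement = {}
    for vm, mhz in requests_mhz:
        for cpu in range(n_cpus):
            if used[cpu] + mhz <= cpu_mhz:
                used[cpu] += mhz
                placement[vm] = cpu
                break
        else:
            return None
    return placement

# Experiment 2: fabric1-4 fill processor 0 (3000 MHz); fabric5-6 go to processor 1.
placement = fill_capacity([("fabric1", 300), ("fabric2", 600), ("fabric3", 900),
                           ("fabric4", 1200), ("fabric5", 900), ("fabric6", 2100)])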


The results for this experiment are shown in Fig. 6. We can see that the completion order (fabric4, fabric3, fabric2, fabric1 on processor 0; fabric6, fabric5 on processor 1) is according to the initial requirements. Finally, the jobs executing in fabric1-4 do not affect those of fabric5-6 and vice versa.

[Figure 6: transactions completed (0–500) versus time (s) for fabric1–fabric6; the jobs finish at t=72.17, 103.8, 134.91, 156.38, 182.09, and 208.51 s.]

Fig. 6. Allocation of six VMs with the LRM according to the requested Hertz. The sequence in the creation of VMs does not affect the expected performance; hence the execution time is according to the requested share.

5 Conclusions and Outlook

We have presented an approach for fine-grain assignment of the virtualized resources of an SMP machine by means of a local resource manager (LRM). This approach is transparent to users, who do not need to know the detailed hardware configuration of the machine and can specify their resource needs in a general way.

The LRM has been implemented taking advantage of the Xen virtualization tool and Tycoon. Our preliminary results obtained from experiments showed that with the LRM the local physical resources were properly assigned, such that the performance measured in transactions per second was accurate and fulfilled the agreed job plan requirements.

We have observed two main benefits of our approach. First, the fine-grain approach allows better fulfilment of certain constraints given in the job plan requirements, like completion orders, than the coarse-grain approach. Second, a more efficient resource assignment can be achieved, since the LRM can assign virtualized resources in a flexible way according to different policies. For instance, we followed a fill-capacity-per-processor policy for the assignment of VMs in order to meet the required Hertz. We also obtain certainty about the expected completion times of the executed jobs.


The weakness of the implemented prototype is that it offers static partitioning. For this reason, we have identified additional features which could be beneficial for the LRM in order to comply with more features of the job plan definition. One of these features is a mechanism which addresses the dynamic behaviour of the workload. This development could include, on the one hand, the adaptation of resources to changes in the workload composition and, on the other hand, the dynamic adaptation of the resources to a changing application profile.

References

1. Figueiredo, R.J., Dinda, P.A., Fortes, J.A.B.: A case for grid computing on virtual machines. In: ICDCS '03: Proceedings of the 23rd International Conference on Distributed Computing Systems, Washington, DC, USA, IEEE Computer Society (2003) 550

2. Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., Warfield, A.: Xen and the art of virtualization. In: SOSP '03: Proceedings of the nineteenth ACM symposium on Operating systems principles, New York, NY, USA, ACM Press (2003) 164–177

3. VMware: http://www.vmware.com (2006)

4. Sundararaj, A., Dinda, P.A.: Towards virtual networks for virtual machine grid computing (2004)

5. Rodríguez, F., Freitag, F., Navarro, L.: A multiple dimension slotting approach for virtualized resource management. In: 1st Workshop on System-level Virtualization for High Performance Computing (EuroSys 2007), Lisbon, Portugal (March 2007)

6. Keahey, K., Foster, I.T., Freeman, T., Zhang, X., Galron, D.: Virtual workspaces in the grid. In: Euro-Par. (2005) 421–431

7. Kiyanclar, N., Koenig, G.A., Yurcik, W.: Maestro-VC: On-demand secure cluster computing using virtualization. In: 7th LCI International Conference on Linux Clusters. (May 2006)

8. Irwin, D., Chase, J., Grit, L., Yumerefendi, A., Becker, D., Yocum, K.G.: Sharing networked resources with brokered leases. In: USENIX Annual Technical Conference (USENIX). (June 2006) 199–212

9. Fu, Y., Chase, J., Chun, B., Schwab, S., Vahdat, A.: SHARP: an architecture for secure resource peering. In: SOSP '03: Proceedings of the nineteenth ACM symposium on Operating systems principles, New York, NY, USA, ACM Press (2003) 133–148

10. Xu, D., Ruth, P., Rhee, J., Kennell, R., Goasguen, S.: Short paper: Autonomic adaptation of virtual distributed environments in a multi-domain infrastructure. In: 15th IEEE International Symposium on High Performance Distributed Computing (HPDC'06). (June 2006) 317–320

11. Ruth, P., Rhee, J., Xu, D., Kennell, R., Goasguen, S.: Autonomic live adaptation of virtual computational environments in a multi-domain infrastructure. In: IEEE International Conference on Autonomic Computing, 2006. ICAC '06. (2006) 5–14

12. Lai, K., Rasmusson, L., Adar, E., Sorkin, S., Zhang, L., Huberman, B.A.: Tycoon: an implementation of a distributed market-based resource allocation system. Technical report, HP Labs (Dec 2004)