The Green Evolution of EMOTIVE Cloud
EMOTIVE Cloud: The BSC’s IaaS open-source solution for Cloud Computing
Alexandre Vaqué Brull
Master in Computer Architecture, Network and Systems Department of Computer Architecture
Universitat Politècnica de Catalunya
Advisors: Jordi Torres and Jordi Guitart
September 2011
Acknowledgements
I would like to take these lines to give my sincere thanks to all the people who have helped me carry out this project:
I would like to express my full gratitude to Drs. Jordi Torres and Jordi Guitart for the trust they placed in me, for giving me the opportunity to be part of this magnificent team, and for their constant and invaluable help. I have learned a great deal from them, both professionally and personally.
I would also like to thank Dr. Iñigo Goiri for his constant help, his patience and the knowledge he shared with me.
I have learned a lot from all three of them, and they have been fundamental pillars in the preparation of this master thesis. Without their help I could not have achieved it. I will always keep in mind everything they have done for me.
I would also like to thank my partner Sara Serra for her support during this project, and my family for the constant help they have always offered me. Many thanks to all my work colleagues, professors (especially Dr. David Carrera), friends and relatives who have supported me and cared about me at all times; even if I do not mention them explicitly, I cannot deny them my sincere gratitude.
localhost:~$ net-create name=vlan00 bridge=vlan00 ip=192.168.1.1 ip range [start 192.168.1.2, end 192.168.1.254]
localhost:~$ net-list
Networks:
1: vlan01
localhost:~$
Figure 17 - Demonstration of creating a VLAN
4.4.2 VPN
Virtual LANs allow the creation of isolated networks. We also wanted to create secure networks with Virtual Private Networks (VPNs), so we developed the creation of virtual private networks between VLANs or within the same network.
Virtual Private Networking is a solution that supports remote access and private data communications over public networks, a cheaper alternative to leased lines. VPN clients communicate with VPN servers using a number of specialized protocols.
To provide this functionality, we implemented Java functions that create VPNs and make VPN management easy. We automated everything needed to set one up, driving the open-source tools OpenVPN (31) and PPTP (32) with bash commands from Java and with EMOTIVE Java functions. To create a VPN, we developed and automated the same configuration steps that a system administrator would perform manually: first we create the system configuration file (/etc/openvpn/openvpn.cfg or /etc/pptpd.conf) on the two nodes, then we add the certificates on the two nodes, and finally we launch the OpenVPN or PPTPD daemon on each node.
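As an illustration of this automation, the sketch below shows how such a Java helper might write an OpenVPN configuration file and start the daemon on one node. The class and method names, the paths and the simplified point-to-point configuration are ours for the example; this is not the actual EMOTIVE code, which distributes certificates as described above.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Minimal sketch: write an OpenVPN configuration file and launch the daemon on a node,
// following the steps described above. Names, paths and the static-key configuration
// are illustrative simplifications of the certificate-based setup used by EMOTIVE.
public class VpnSetupSketch {

    public static void createOpenVpnEndpoint(String localIp, String remoteIp)
            throws IOException, InterruptedException {
        // 1) Create the system configuration file (/etc/openvpn/openvpn.cfg)
        String config = "dev tun\n"
                      + "ifconfig " + localIp + " " + remoteIp + "\n"
                      + "secret /etc/openvpn/static.key\n";
        Files.write(Paths.get("/etc/openvpn/openvpn.cfg"), config.getBytes());

        // 2) Keys/certificates are assumed to be already copied to /etc/openvpn on both nodes.

        // 3) Launch the OpenVPN daemon through bash, as EMOTIVE does with its command wrappers.
        Process p = new ProcessBuilder("bash", "-c",
                "openvpn --config /etc/openvpn/openvpn.cfg --daemon").inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("OpenVPN daemon failed to start");
        }
    }
}

An analogous helper writes /etc/pptpd.conf and starts the pptpd daemon for the PPTP case.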
The protocols used are PPTP, with the PPTPD application, and SSL, with the open-source OpenVPN application. We use these two protocols in EMOTIVE Cloud because each offers different interesting features.
OpenVPN is very useful for:
- Stronger encryption: for some users, the more encryption the better; it is also fair to say that with PPTP it is possible for someone to capture your password while you are connecting.
- No dropped packets (when using TCP): if the underlying connection is lost, the tunnel recovers without throwing the client straight back onto the open Internet, which may be important for some services.
- More flexible server deployment: an OpenVPN endpoint can coexist with a PPTP endpoint on the same nodes, so the same servers can be offered over either protocol when needed.
- Port modification allowed: if the standard configuration cannot connect (for example through a restrictive firewall), OpenVPN can be configured to run over whatever port is available.
PPTP VPN advantages:
- Works on mobile devices: iPhone, Android and Windows Mobile are just a few of the devices that support PPTP. It is very easy to set up: with just a host name, login and password you are connected.
Which is the winner? To sum up, if you are looking for high security and privacy you should choose OpenVPN; if you need an easy-to-set-up VPN, PPTP is a good choice; and for mobile devices, PPTP is the only solution.
In EMOTIVE we can model a 1-N relationship to create VPNs with both the PPTP and OpenVPN protocols, and we can also create an N-N relationship (only with OpenVPN).
Cloud Computing includes mobile devices, and since this TFM focuses on Cloud Computing and on being green, it is interesting to support several kinds of devices. PPTP VPNs make it easy to create a VPN tunnel on different kinds of devices and services.
4.4.3 Networks by Software are Green
The EMOTIVE VLAN and VPN support is a software solution installed on an existing server. Although there are hardware solutions for creating VLANs and VPNs, we chose a software solution because it is easier to implement and manage in a Cloud, and its maintenance and cost are cheaper than hardware solutions.
As we know, EMOTIVE Cloud facilitates the implementation and management of a virtualization infrastructure. In this master thesis we have discussed the advantages of virtualization and its green approach; now we want to discuss the same features in relation to network virtualization by software, in this case VLAN and VPN implementation in software.
In the next paragraphs we compare software and hardware network implementations, including their green assessment.
Implementation:
Implementing a VLAN/VPN in hardware involves adding a new hardware device to the existing network infrastructure, whereas implementing it in software involves installing the software on an existing server. Software networks therefore save capital cost, because we do not need to buy a switch, router or other kind of network appliance, and they save power because there is no additional appliance to plug in; we only need the same computer with the appropriate software. There is no need to cascade virtual switches or to guard against bad virtual switch connections, and because virtual switches do not share physical Ethernet adapters, leaks between switches do not occur: a single switch is simply turned into multiple virtual switches (VLANs). In this sense it simplifies network topologies.
Maintenance:
Maintaining a VLAN/VPN implemented in hardware usually requires an ongoing contract with the vendor, who offers comprehensive support for the VLAN/VPN device. Furthermore, hardware VLAN/VPN often requires additional training for the in-house staff to enable them to manage the day-to-day operations.
Networks implemented in software are easy to manage and give more flexibility in network administration. An on-site administrator is not necessary, and it is possible to indirectly save energy and cost because administrators can work remotely, avoiding trips to the datacenter. A virtual environment is easy to manage, but in datacenters with many different networks, different servers, storage hardware and so on, it is hard to consolidate this software; in that case it is easier to manage these networks with physical hardware, as we can see today in most data centers.
Cost:
A hardware VPN solution is generally more expensive up front. VPN hardware can also carry a cost in terms of training, as it can be significantly more complex to implement and support.
Virtual switches do not require a spanning tree protocol (energy efficient), no real switch has to process these network communications (energy efficient), there is a reduction of routing and broadcast traffic on the network (energy efficient), and VLANs remove the physical boundary (energy efficient).
VLAN implementation in software is very useful for creating virtual networks on demand in a short period of time. This saves money in hardware purchase and installation: a VLAN or VPN can be set up quickly with a couple of clicks, without buying a switch, contracting a system administrator, or spending on power, cooling and space. This dynamic configuration is green because it provides additional temporary communication paths and the network topology can be modified easily.
To create VPNs/VLANs between Clouds, it is necessary to find new possible interfaces for interoperability.
Performance:
Performance of either solution is limited by the available hardware and network resources. Often a VPN software package is installed on an existing server alongside other applications, restricting the performance of all applications to the server's available resources. In contrast, VPN hardware is a dedicated device limited only by its own hardware.
Security:
VPN hardware devices are generally considered more secure than VPN software solutions, largely because the VPN hardware device is dedicated to the sole purpose of providing the VPN and is already equipped to handle the unsecure outside network. VPN software, on the other hand, often shares a server with other applications. As a result, those applications and the server's operating system are vulnerable and must be "hardened", that is, secured in the face of the open public network.
Conclusions:
An advantage of VLANs with respect to physical LANs is that a VLAN behaves like a physical Ethernet LAN: the upper communication layers and the software that runs on the network do not distinguish whether the underlying LAN is physical or virtual.
In conclusion, we now have other ways to create LANs, which implies new network features and possibilities. Probably one solution is better in some cases and the other in other cases. Today, hardware LANs are the most used solution in physical datacenters, but with the introduction of Cloud Computing, managing networks in software becomes an interesting possibility, and in that setting implementing the network in hardware is very hard to do. Moreover, if the network is not yet implemented, creating it in software is an attractive choice because it is very easy, quick and cheap. Software networks are much more flexible but less secure than hardware ones. In an interoperable Cloud, a software network solution would be better.
If we analyze the power consumption, a hardware network is a physical chip, an appliance or some other physical device that consumes electricity, whereas a software network needs additional processing in the server, which produces a small increase in the server's power: more processes on the server mean more load and more consumption. However, we would also have to evaluate the carbon footprint cost of both solutions. We have not been able to extend these results because we do not have enough machinery and resources, but it is probable that software networks produce fewer carbon emissions.
4.5 EMOTIVE Interoperability
4.5.1 API OCCI and Web Services
The problem of interoperability between Cloud providers is well known. As shown in Figure 18, different Cloud providers use their own independent interfaces, which makes it difficult to communicate with and federate multiple providers (33). Recently, the OCCI API has been proposed as a common standard to overcome this problem. OCCI is a Cloud interaction layer that uses HTTP methods (GET, POST, PUT, DELETE) with an XML format. This interface uses multiple data structures (i.e. Compute, Network, Storage) to describe the different resources, and using these structures it can operate on the virtual resources (i.e. create, list, show, update, delete). [Figure 18]
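To illustrate this interaction style, the sketch below issues an OCCI-like request from Java: a plain HTTP verb against a resource URL. The host, port, resource path and XML rendering are placeholders for the example, not real EMOTIVE endpoints or the exact OCCI rendering.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of the OCCI interaction style: HTTP verbs against resource URLs.
// The URL below is a placeholder, not a real EMOTIVE endpoint.
public class OcciGetCompute {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://cloud.example.org:8080/compute/vm-42");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");                           // show a Compute resource
        conn.setRequestProperty("Accept", "application/xml");   // XML format, as described above

        System.out.println("HTTP " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);                        // the returned resource description
            }
        }
    }
}

A DELETE on the same URL would remove the resource, and POST or PUT requests create or update it, following the verb semantics described above.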
EMOTIVE was originally designed using a distributed SOAP architecture but now it uses RESTful
Web Services. This architecture allows the usage of only some parts of EMOTIVE and supports agile
and dynamic construction of new Cloud environments. Its REST interface makes EMOTIVE highly
interoperable with other Cloud solutions. Furthermore, popular Cloud solutions such as OpenNebula have adopted OCCI to define their
interfaces. Aiming at interoperability with other Cloud solutions, EMOTIVE also implements an OCCI
interface. Notice, however, that the standard OCCI interface does not support all the original
EMOTIVE functionality. For this reason, there are some methods for jobs and clusters management
that EMOTIVE supports using its original REST interface. According to this, EMOTIVE Cloud
currently supports two interfaces: EMOTIVE REST API and the standard OCCI (34). In the following
lines, we describe briefly these two interfaces.
Figure 18 - Interfaces of different Clouds
4.5.2 REST vs. SOAP
REST (Representational State Transfer) basically means that each unique URL is a representation of some object. You can get the contents of that object using an HTTP GET, remove it with DELETE, and use POST or PUT to create or modify the object.
The main goal of migrating from SOAP Web Services to RESTful Web Services is to make EMOTIVE easy to extend and interoperable. Moreover, many new Web services nowadays are implemented using a REST architecture rather than a SOAP one.
The main advantages of REST Web services are:
- It is lightweight, because there is not a lot of extra XML markup
- Human readable results
- Easy to build, because no toolkits are required
SOAP also has some advantages:
- Sometimes it is easier to consume
- Rigid: type checking, adheres to a contract
- Development tools
For consuming web services, it is sometimes a toss-up between which is easier. For instance, Google's AdWords web service is really hard to consume: it uses SOAP headers and a number of other things that make it somewhat difficult. Conversely, Amazon's REST web service can sometimes be tricky to parse because it can be highly nested, and the result schema can vary quite a bit depending on what you search for. Whichever architecture you choose, make sure it is easy for developers to access and well documented.
EMOTIVE Cloud improves with the new RESTful Web Services architecture because:
- EMOTIVE is now modular
- It is easy to extend and adapt
- Development is easier
- An OCCI adaptation is straightforward
- Results are human readable, and data parsing and processing are easy
4.5.3 API OCCI in EMOTIVE
OCCI describes five methods for Compute and four each for Network and Storage. EMOTIVE supports four of the Compute methods and the four Network methods, but it does not support the Storage ones.
The methods comprising the EMOTIVE REST interface are described in Table 1; the methods with a correspondence in the OCCI interface are shown in boldface. Our interfaces basically allow:
● Compute: create, get, list and cancel Virtual Machines (supporting CIM and OVF).
● Network: similar to the Compute methods but used to describe virtual networks.
● Jobs: used to submit jobs to Virtual Machines (we use the JSDL format to describe them).
● Nodes: describes the system topology (used for EMOTIVE internals).
In addition, EMOTIVE supports the Job Submission Description Language (JSDL) to submit jobs using the methods submitActivity(JSDL) and createEnvironmentAndJob(Compute, JSDL). JSDL is an extensible XML specification for describing the requirements of computational jobs. It was initially focused on Grid but it is not restricted to that environment. JSDL describes: job name, description, resource requirements (RAM, swap, CPU, number of CPUs, operating system, etc.), execution limits, file staging, command to execute… This is an example of an ANSYS CFX simulation JSDL:

<?xml version="1.0" encoding="UTF-8"?>
<jsdl:JobDefinition xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
                    xmlns:jsdl-hpcpa="http://schemas.ggf.org/jsdl/2006/07/jsdl-hpcpa">
  <jsdl:JobDescription>
    <jsdl:JobIdentification>
      <jsdl:JobName>AnsysDemo</jsdl:JobName>
    </jsdl:JobIdentification>
    <jsdl:Application>
      <jsdl:ApplicationName>AnsysCfx</jsdl:ApplicationName>
      <jsdl:ApplicationVersion>PM26</jsdl:ApplicationVersion>
      <jsdl-hpcpa:HPCProfileApplication>
        <jsdl-hpcpa:Argument>-cpu_load=1.0</jsdl-hpcpa:Argument>
        <jsdl-hpcpa:Argument>-threads_num=2</jsdl-hpcpa:Argument>
      </jsdl-hpcpa:HPCProfileApplication>
    </jsdl:Application>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
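A client could submit such a JSDL document to the REST interface along the following lines; the endpoint URL is an assumption for the example, since the concrete path layout of the EMOTIVE REST API is not reproduced here.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch of submitting a JSDL job description over HTTP.
// The URL is a placeholder; the real EMOTIVE REST paths may differ.
public class SubmitJsdlJob {
    public static void main(String[] args) throws Exception {
        byte[] jsdl = Files.readAllBytes(Paths.get("ansys-demo.jsdl"));   // the XML shown above

        URL url = new URL("http://cloud.example.org:8080/jobs");          // assumed submission resource
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/xml");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(jsdl);                                              // send the JSDL document
        }
        System.out.println("Job submitted, HTTP " + conn.getResponseCode());
    }
}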
4.6 EMOTIVE for Green Computing
Following the green approach of this master thesis, we want to evaluate the energy impact of EMOTIVE. We use three benchmarks focused on power consumption and performance. Generally, many computer benchmarks compare features that are implicitly or explicitly linked to performance, but it is now important to also detail power consumption, because it has become an important variable to consider. New needs are emerging and power consumption is becoming more expensive, even more expensive than the hardware: many companies spend more money on power consumption than on hardware. Therefore we want to do a green evaluation.
We are therefore researching the possibility of finding new green approaches. In the next subchapters we present the results of the benchmarks. These tests are three: a Green Hypervisor comparison, an Atom-Xeon-Hybrid architecture comparison and a Middleware scheduling comparison.
All tests have been made with two virtualized servers and a middleware that manages them. The workload in these benchmarks consists of introducing virtual machines with running tasks into the Cloud; we then study the behavior and performance of the servers and measure the power consumption with a physical power meter called 'WattsUp Pro' (35).
4.6.1 Green Hypervisor Comparison
Introduction
The first benchmark compares the power consumption behavior of three different hypervisors under EMOTIVE. As commented before, this is possible thanks to the new Libvirt engine used in EMOTIVE. Each hypervisor has different features, so we compare the KVM hypervisor (based on full virtualization), Xen (based on paravirtualization) and VirtualBox (emulation). We want to see the difference in power consumption (in Watts) between these hypervisors. We do not extend the comparison to other aspects because many hypervisor comparisons (at the performance level and others) can already be found in the literature.
We also think that it is not necessary to compare the behavior of these hypervisors under other Cloud middlewares, because the hypervisor's power and performance are independent of the middleware on which it runs. We take the same approach with the computer architecture, so all tests run on the same machine and operating system; only the hypervisor changes. We use Xen version 3.4.0, KVM with Linux kernel version 2.6.28.1-kvm and VirtualBox version 3.2. The three hypervisors use Libvirt 0.8.4, Debian GNU/Linux 5.0 and an Intel Xeon E5440 2.83 GHz CPU.
Workload
In this comparison we use a workload that creates 6 virtual machines on a single server in the following order: at second 50 we create the first VM, 50 seconds later we create another VM, at second 150 we create 2 VMs, and later 2 more virtual machines are created (one at second 200 and the last at second 300). Inside each virtual machine runs a job that executes a loop with N iterations; each iteration performs several types of arithmetic operations, so this benchmark stresses the CPU. The job takes between 10 and 30 seconds to finish, depending on the hypervisor and the benchmark. The performance is closely linked to the power consumption results, because power consumption is strongly tied to CPU consumption.
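The job run inside each VM is, in essence, a tight arithmetic loop. The sketch below shows what such a stress job might look like; the iteration count N and the mix of operations are illustrative, not the exact benchmark code.

// Minimal sketch of the CPU-stress job run inside each VM: a loop of N iterations,
// each performing several arithmetic operations. N and the operation mix are illustrative.
public class CpuStressJob {
    public static void main(String[] args) {
        long n = args.length > 0 ? Long.parseLong(args[0]) : 200_000_000L;
        double acc = 1.0;
        for (long i = 1; i <= n; i++) {
            acc += Math.sqrt(i) * 1.000001;     // mix of multiplications, additions and roots
            acc -= (i % 7) / 3.0;
            if (acc > 1e12) acc = 1.0;          // keep the value bounded
        }
        System.out.println("done, acc=" + acc); // keep the result observable
    }
}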
Results
As you can see in the results of the graph [Figure 19], KVM and Xen obtain similar results. VirtualBox, in contrast, loses a lot of performance when it has to scale; it therefore has higher power consumption and worse performance than Xen and KVM, so the battle is between Xen and KVM. VirtualBox could currently be a very useful virtualization environment on desktop machines, but not in datacenters: with only one virtual machine its performance and power consumption are similar to the other two hypervisors, but it does not scale well with more than two virtual machines. On the other hand, it makes it possible to create a test Cloud environment on a desktop quickly and easily, which is very useful for EMOTIVE testers and developers, who can avoid using a big infrastructure to test new research environments. Maybe in the future Oracle will improve its hypervisor (after the acquisition of Sun); after we finished this comparison, Oracle published VirtualBox version 4. But, at this moment, the best open-source or free solutions are Xen and KVM.
In the next table [Table 3] we show the average power consumption in watts for all the hypervisors. We can see that KVM is the greenest hypervisor, but Xen is very close and the difference is minimal. We also observed a smoother behavior with Xen, whereas KVM shows bigger peaks, so the decision between KVM and Xen depends on the kind of workload. Overall, KVM consumes slightly less power than Xen. In conclusion, the choice between the Xen and KVM hypervisors depends on the environment, the kind of workload and its uses.
Figure 19 - Power hypervisor comparison
Power average with EMOTIVE:
XEN          290,2 W
KVM          289,2 W
VirtualBox   293,2 W
Table 3 - Power average
XEN or KVM?
Xen is a hypervisor that supports x86, x86_64, Itanium and ARM architectures, and can run Linux, Windows, Solaris and BSDs as guests on their supported CPU architectures. Xen can do full virtualization on systems that support virtualization extensions, but it can also work as a hypervisor on machines that do not have them, for example Atom and ARM (which are interesting low-power processors) and older CPUs. If you want to run a Xen host, you also need a supported kernel.
KVM is a hypervisor that is in the mainline Linux kernel. Your host OS has to be Linux, obviously, but it supports Linux, Windows, Solaris and BSD guests. It runs on x86 and x86-64 systems with hardware supporting virtualization extensions. This means that KVM is not an option on older CPUs made before the virtualization extensions were developed, and it rules out newer CPUs (like Intel's Atom CPUs) that do not include virtualization extensions. For the most part, that is not a problem for data centers that tend to replace hardware every few years anyway, but it means that KVM is not an option on some niche systems like the SM10000 that are trying to use Atom CPUs in the data center.
Xen is running on quite a lot of servers, from low-cost Virtual Private Server providers like Linode to big players like Amazon with EC2. Xen has been around a bit longer, so it has had more time to mature than KVM. You will find some features in Xen that have not yet appeared in KVM, although the KVM project has a lengthy to-do list and KVM is going to become more prominent in the future. RedHat and Canonical have also begun supporting KVM. KVM is not yet a mature project, but its performance is improving day by day and it is growing quickly because it is part of the mainline Linux kernel.
In our test, full virtualization is marginally faster than paravirtualization. Therefore, from the results obtained it does not appear that paravirtualization exhibits greater performance than full virtualization; one reason for this may be that our CPUs support full hardware virtualization. However, our tests were not exhaustive, because we did not want to do a full hypervisor comparison: basically we ran these tests to learn the behavior of the hypervisors under EMOTIVE Cloud and to measure their power consumption. To do a full comparison we would need equivalent environments, for example using Xen with full virtualization instead of paravirtualization. With this comparison we show that paravirtualization is a great alternative when the processors lack the VT instructions required by a full-virtualization hypervisor.
It also seems logical that emulation-based virtualization (used by VirtualBox) is a poor alternative; it is only recommended for personal desktops and laptops, and it is important to know that emulation does not scale.
Xen and KVM both introduce little overhead and power consumption. It is hard to choose a winner because it depends on the environment and on each case. KVM is rapidly improving, although Xen has better management tools and Xen migrations are more robust; in this regard KVM needs to improve.
Conclusions
In conclusion, this comparison demonstrates that EMOTIVE behaves well on all hypervisors thanks to the new Libvirt API used in EMOTIVE Cloud. EMOTIVE Cloud will now evolve together with the Libvirt API: if Libvirt evolves, EMOTIVE Cloud will evolve too, so new features added to Libvirt will be indirectly added to EMOTIVE Cloud. We also need to consider the continuous evolution of KVM and Libvirt in the future; KVM in particular is gaining strength very quickly. These results will be outdated within months due to the continuous evolution of KVM and the others.
In the next comparisons we use the Xen hypervisor because we need to run on Atom platforms. This platform does not have CPU virtualization instructions, so we need to choose paravirtualization with Xen. Nowadays most Intel chips have VT instructions, but the Intel Atom and the new ARM processors do not. In any case, this kind of processor is very interesting for its low power consumption.
4.6.2 Architecture comparison (Atom-Xeon-Hybrid)
Introduction
In this second benchmark we demonstrate that EMOTIVE Cloud can work with different types of computer architectures by comparing their power consumption. The main goal of this benchmark is to find the best tradeoff between power consumption and performance.
It is important to know that the Atom processor is a CPU specialized in low power consumption but with low performance; the Xeon, on the other hand, has good performance but high power consumption. Nowadays it is important to have datacenters or Clouds with high performance and also low power consumption. EMOTIVE Cloud allows the use of hybrid architectures to achieve this; in particular, we implement a hybrid solution with Xeon and Atom processors to achieve greener computing.
This benchmark is composed of three tests, all running the EMOTIVE Cloud middleware. The first test uses Xeon servers, the second uses only Intel Atom servers and the last uses a hybrid solution with both processors. All the tests use two physical nodes simultaneously (two Atom processors, two Xeon processors, or the mixed solution with one Atom and one Xeon). [Figure 20, Figure 22]
Workload
We use a workload that creates 6 virtual machines across two servers in the following order: the first VM is created at second 100, we create 3 VMs at second 300, 2 more VMs at second 500 and, finally, the last VM at second 800. Inside each virtual machine runs a job that executes a loop with N iterations, where each iteration performs several arithmetic operations. The EMOTIVE Scheduler (which decides the placement of the VMs) used in this test is round-robin, as sketched below.
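A round-robin placement policy of this kind can be expressed in a few lines; the node representation below is hypothetical and only illustrates how new VMs alternate between the two physical nodes, it is not EMOTIVE's real scheduler API.

import java.util.List;

// Sketch of a round-robin VM placement policy over the available physical nodes.
public class RoundRobinScheduler {
    private final List<String> nodes;   // e.g. ["xeon-node", "atom-node"]
    private int next = 0;

    public RoundRobinScheduler(List<String> nodes) {
        this.nodes = nodes;
    }

    // Returns the node that should host the next VM, cycling through all nodes.
    public synchronized String placeNextVm() {
        String chosen = nodes.get(next);
        next = (next + 1) % nodes.size();
        return chosen;
    }
}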
Results
We now present the experiments and their results. The graph [Figure 20] shows the power consumption over time (including the two servers) for the three different architectures. It shows that the Xeon processors have the highest consumption while the Atom processors achieve the most efficient power usage, although the Atom architecture has lower performance than the Xeon. As expected, in the hybrid solution the power consumption lies between the other two solutions and the performance is better than using only Atom processors, so we need to look in more detail at the hybrid solution and its possibilities.
The second graph [Figure 21] is a zoom of the graph above [Figure 20], where we increase the scale to show in more detail the power consumption of the Atom solution, because in [Figure 20] it is hard to appreciate the variability of the Atom consumption in relation to the other two architectures.
Figure 22 details the performance of the same benchmark: Xeon performance, hybrid performance and Atom performance. To see the relation between power consumption and performance more clearly, you can compare Figure 20 with Figure 22. The results are logical and expected. In Figure 22 (on the X axis) you can see the time when each virtual machine was created, its duration and the time when it was destroyed, and on the Y axis the CPU utilization. The Xeon solution is faster than the Atom one; this is an expected result because the Xeon CPU is designed for performance while the Atom CPU is designed for better power consumption. Xeon is faster in the virtual machine execution and its CPU utilization is lower than Atom's. It is very interesting to play with both solutions to find the best relation between power and performance. The hybrid solution shows interesting results: three of its VMs achieve the same performance as in the Xeon solution and the other three the same performance as in the Atom solution, while the hybrid solution consumes less power than the Xeon solution.
Figure 20 - Xeon-Atom-Hybrid comparison
Figure 21 - Atom Zoom
Figure 22 - Performance of the 3 solutions (panels: Xeon performance, Hybrid performance, Atom performance)
Ratio
To have a better understanding of the relation between power consumption and performance, we calculate the performance (measured in executions of the benchmark per day) per watt ratio. This ratio, presented in Table 4, helps to find the best tuning configuration with both solutions. In Table 4 you can see the results. The ratio shows that Atom has a better ratio than the Xeon solution, but nowadays performance is still a more important feature than power consumption in the computing world. Using the hybrid solution we can approach the Xeon performance while improving the power consumption.
A Xeon machine is able to run heavy tasks and takes fewer seconds than an Atom to finish a job from the benchmark: the execution of a single VM on the Xeon node takes 50 seconds, while on the Atom node it takes 165 seconds. The consumption of a single Xeon node running this test is 268,8 Watts, while the consumption of a single Atom node is 38,7 Watts.
This benchmark is based on a general case, not a specific one: each virtual machine runs the same kind of job and it does not matter on which server it runs, so every server runs the same workload. It is hard to make a diagnosis for this general case; we would need to study the specific computing needs, which depend on the type of service, calculation, etc.
                            XEON                    ATOM                    HYBRID
Average Watts (2 nodes)     537,6 W                 77,4 W                  205,6 W
Time                        490 secs                1270 secs               880 secs
Performance (86400/Time)    176,33 executions/day   68,03 executions/day    98,18 executions/day
Ratio (Performance/Power)   0,65 exec/day per watt  1,76 exec/day per watt  0,95 exec/day per watt
Table 4 - Average ratio
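As a sanity check of how Table 4 is obtained, the small computation below reproduces its figures up to rounding; it assumes, based on the printed values, that the ratio divides the per-day performance by the per-node average power, i.e. half of the two-node figure listed in the table.

// Reproduces the Table 4 figures: Performance = 86400 / Time and
// Ratio = Performance / Power, where Power is assumed to be the per-node
// average (half of the two-node value printed in the table).
public class RatioCheck {
    static void report(String name, double wattsTwoNodes, double seconds) {
        double executionsPerDay = 86400.0 / seconds;
        double ratio = executionsPerDay / (wattsTwoNodes / 2.0);
        System.out.printf("%-7s %7.2f executions/day  %5.2f exec/day per watt%n",
                name, executionsPerDay, ratio);
    }
    public static void main(String[] args) {
        report("Xeon", 537.6, 490);     // Table 4: 176,33 executions/day, 0,65 exec/day per watt
        report("Atom", 77.4, 1270);     // Table 4: 68,03 executions/day, 1,76 exec/day per watt
        report("Hybrid", 205.6, 880);   // Table 4: 98,18 executions/day, 0,95 exec/day per watt
    }
}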
Conclusion
Using both architectures can give better power consumption than the Xeon solution and better performance than the Atom solution, but the tuning of hybrid architectures needs further study. This experiment therefore aims to demonstrate that mixing low-power systems and high-performance systems in the same data center is a good approach for saving energy; however, it depends on the workload that needs to run.
On the one hand, it is better to run HPC tasks on Xeon architectures because they obtain much better performance than Atom processors when executing this kind of task; moreover, these tasks have deadlines that cannot be met by Atom hosts. On the other hand, it is possible to use Atom in environments dominated by memory or disk access rather than CPU performance, for example web servers, databases and others. For applications with lower performance requirements, such as web applications or tasks with more relaxed SLAs, it is better to make use of Atom processors, which have much more efficient power consumption.
Finally, the experiment in (29) shows that dealing with heterogeneous resources is a big challenge; the model presented there is able to automatically balance the workload among nodes with different features, such as power consumption and performance, which allows the provider to obtain a better overall benefit.
In general, we performed this comparison to better understand the behavior of hybrid solutions and their possibilities. These benchmarks are general and use a heavy synthetic workload; they do not try to reproduce a real environment, but rather a synthetic one that helps us understand the hybrid possibilities.
In the next benchmark we use a more specific scenario and we study two Cloud solutions.
4.6.3 Middleware scheduling comparison (OpenNebula and EMOTIVE)
Introduction
In the next comparison, we compare the scheduling policies of two Cloud middlewares from an energy consumption point of view: the EMOTIVE middleware and OpenNebula. We choose OpenNebula (ONE) because it is probably the best and most used open-source middleware at the moment. As before, this test runs over different computer architectures (Xeon, Atom and hybrid), and all tests run over the Xen hypervisor. In this case, we want to evaluate the behavior of each middleware scheduler in terms of power consumption and performance, and whether we can take advantage of hybrid architectures to obtain better power efficiency without losing much performance. Therefore it is necessary to create a benchmark to evaluate the schedulers and see the differences between them.
To perform these tests we have chosen the EMOTIVE Scheduler prototype (29). This Scheduler was created to improve the power consumption of task executions in a hybrid Cloud (with two or more servers). We compare this Scheduler with one of the three Schedulers included in the OpenNebula release (36); this ONE Scheduler is called the Packing Policy and is the best suited for power-efficient computation. In addition, the scheduling policies that OpenNebula incorporates are very similar to those of most virtualization middleware products such as Citrix Xen, VMware, etc., so we can consider that we are comparing the EMOTIVE Scheduler prototype against a generic Cloud Scheduler.
We choose the OpenNebula Scheduler (as commented earlier) because OpenNebula is an excellent, mature piece of software, nowadays consolidated in the open IaaS community. Moreover, we have the advantage that we work together with its developers in the NUBA national project.
In this comparison, we use the same hybrid scenario (1 Xeon – 1 Atom) that we used in the previous
one.
Before presenting the results, we would like to explain how the Schedulers operate and how they consolidate the virtual machines. The EMOTIVE Scheduler uses a backfilling policy together with a smart algorithm to decide the placement of the virtual machines on the servers. In addition, this scheduler prototype can shut down servers when they are idle.
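The exact placement algorithm of the EMOTIVE Scheduler prototype is described in (29); purely to illustrate the idea of power-aware placement, a simplified heuristic could look like the sketch below, where the node model and the marginal power estimates are invented for the example and are not the real EMOTIVE code.

import java.util.List;

// Simplified illustration of power-aware placement: among the running nodes with enough free
// capacity, pick the one whose estimated power increase is smallest, and wake a powered-off
// node only when no running node fits. Fields and figures are invented for the example.
public class PowerAwarePlacementSketch {

    static class Node {
        String name;
        boolean poweredOn;
        int freeCpus;
        double wattsPerExtraVm;   // estimated marginal power of hosting one more VM
        Node(String name, boolean poweredOn, int freeCpus, double wattsPerExtraVm) {
            this.name = name; this.poweredOn = poweredOn;
            this.freeCpus = freeCpus; this.wattsPerExtraVm = wattsPerExtraVm;
        }
    }

    static Node choose(List<Node> nodes, int cpusNeeded) {
        Node best = null;
        for (Node n : nodes) {
            if (n.poweredOn && n.freeCpus >= cpusNeeded
                    && (best == null || n.wattsPerExtraVm < best.wattsPerExtraVm)) {
                best = n;                 // cheapest running node that still fits the VM
            }
        }
        if (best == null) {
            for (Node n : nodes) {
                if (!n.poweredOn && n.freeCpus >= cpusNeeded) {
                    return n;             // no running node fits: wake an idle node
                }
            }
        }
        return best;                      // null means the request waits in the backfilling queue
    }
}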
The OpenNebula Scheduler is very similar, but it does not have a smart algorithm to choose the best server on which to place the virtual machines and it cannot shut down idle nodes. It is therefore important to note that the behavior of the two schedulers can be identical, because the OpenNebula Scheduler may happen to pick the correct node to run the tasks by chance. We only have two nodes in the test environment, so the OpenNebula Scheduler has a 50% probability of choosing the correct node; with more than two nodes, its probability of success would be lower.
Accordingly, in order to have a fair comparison, we compare the worst case of both schedulers. For the OpenNebula Scheduler, the worst case is when it chooses the wrong server node; the EMOTIVE Scheduler algorithm always chooses the best server to run the tasks in order to reduce power consumption. It is also important to consider that the feature to shut down idle servers is not yet in production in EMOTIVE (it is a preliminary version), and OpenNebula does not have this feature in the current version either, although it will be incorporated via a plugin extension. We therefore compare and simulate both middlewares with this feature, because we expect it to become a generic requirement in the future.
Therefore, in the graphs of Figure 23 and Figure 24 we can see the results for the worst case of both schedulers.
Workload
The workload in this benchmark is similar to the previous ones. Here we launch a set of 9 virtual machines, each executing a job that performs N arithmetic operations. The creation and execution order of these virtual machines is as follows: initially we launch the first VM and, when it has finished, the second VM starts. The third VM starts when the second has finished, but some time later the fourth VM runs together with the third. When these VMs have finished, we run the last 2 VMs simultaneously. Later we repeat the same VM workload sequence, but launching only the initial three VMs; normally these last three VMs run on the other node. Notice that we have limited the Xeon RAM to be equal to that of the Atom server, to have more comparable environments; we configured the Xeon Domain-0 RAM to achieve this.
This benchmark uses 2 servers. When the benchmark starts, virtual machines run on one of the servers, and each virtual machine runs a job that spawns N threads, each one running a loop with arithmetic calculations; the work is split into 5 threads (each with a job) in order to fully stress the available CPUs. All virtual machines start sequentially, in serial mode. Once the jobs finish, the virtual machines are not destroyed; in this way we simulate the situation of virtual machines offering some service over a long period of time. These virtual machines consume RAM memory, and when the VMs have exhausted the memory of one server, the next VM runs on the next free node. The workload is composed of 9 virtual machines: the first 6 virtual machines fill one node, and the remaining 3 virtual machines run on the other node.
Given the better performance of the Xeon processors, the job that runs inside each virtual machine finishes in less than 50 seconds on the Xeon machine, while it lasts 165 seconds on the Atom one. We need to find a balanced benchmark for comparing both processor architectures, because if we run a heavy workload the Atom processor saturates very quickly. In this benchmark it is the Atom architecture that defines the maximum load limit, since Xeon processors have a higher performance capacity than Atom ones. But we are focusing this benchmark on improving green capacity, not on obtaining the highest performance.
Results
In the first graph we have the ONE Scheduler worst-case result and in the second graph the EMOTIVE Scheduler worst case. It is important to note that in the best case of both schedulers the results are similar, but with more than two servers there is a higher probability that the results will be worse with OpenNebula. This happens because both EMOTIVE and ONE use a backfilling scheduler, but only EMOTIVE has a smart management algorithm to choose the best node on which to place the VMs.
Looking at the OpenNebula results [Figure 24], there is an average power consumption of 288 Watts with a benchmark processing time of about 890 seconds. EMOTIVE [Figure 23], in contrast, has a much better power consumption of only 81 Watts, but the benchmark processing time increases to about 1250 seconds.
In the case of using only one architecture (Xeon-Xeon or Atom-Atom), we get the same results with both middlewares, because both use the same scheduling policy and the smart algorithm of the EMOTIVE Scheduler is specialized for hybrid architectures, having no effect on homogeneous ones. With the Xeon-Xeon solution we get the best performance, 670 secs, half of what EMOTIVE needs in the hybrid system; with the Atom-Atom solution we get a low performance of 1865 secs. In conclusion, the intermediate Xeon-Atom solution decreases the performance a little but has good power consumption. We go further into this by evaluating the performance per watt ratio for all these possibilities.
The hybrid solution could be a good option for real systems such as web servers, databases, etc., where the most important feature is memory and disk access rather than a large processing capacity.
Figure 23 - EMOTIVE Scheduler
Figure 24 - OPENNEBULA Scheduler
Ratio
We can see in Table 6 the performance per watt ratio, calculated in the same way as in the previous comparison. The best ratio is obtained by the Atom-Atom solution, but the EMOTIVE hybrid gets a ratio very close to it, so it is a good ratio. In contrast, the green ratio is harmed in the OpenNebula case with the hybrid architecture: the ONE ratio is even worse than the Xeon solution. This demonstrates that mixing both computer architectures can produce worse results than using a single architecture if the scheduling is not good; therefore, smart management is necessary in a hybrid solution in order to take advantage of it. Power is nothing without control. There is a huge space for research in these topics. In this case we show that homogeneous architectures can be better than hybrid solutions in power and performance, because Xeon consumes much more than Atom but finishes its jobs much more quickly.
It is also important to mention that OpenNebula is an open-source project where many researchers are working to improve its green capabilities, so we should expect these results to improve in the near future.
             XEON-XEON   ATOM-ATOM   HYBRID (best-case)   HYBRID (worst-case)
EMOTIVE      361,9 W     50,9 W      81,2 W               81,2 W
OpenNebula   361,9 W     50,9 W      81,2 W               288 W
Table 5 - Power
                            XEON-XEON               ATOM-ATOM               EMOTIVE hybrid          OpenNebula hybrid
Secs                        670 secs                1865 secs               1250 secs               890 secs
Performance (86400/Time)    128,95 executions/day   46,33 executions/day    69,12 executions/day    97,08 executions/day
Ratio (Performance/Power)   0,36 exec/day per watt  0,91 exec/day per watt  0,85 exec/day per watt  0,34 exec/day per watt
Table 6 - Performance in time - RATIO (*) more is better
4.6.4 Middlewares qualitative comparison
Tools compared: Eucalyptus, OpenNebula, EMOTIVE Cloud, OpenStack

Main feature:
  Eucalyptus: implements cloud semantics
  OpenNebula: virtualization control framework
  EMOTIVE Cloud: virtualization control framework
  OpenStack: simple to implement and massively scalable

Highlights:
  Eucalyptus: similar to Amazon EC2
  OpenNebula: full framework
  EMOTIVE Cloud: schedulers research
  OpenStack: hypervisors, virtual networks and filesystems, with the computing engine orchestrating all of that

Provisioning model:
  Eucalyptus: immediate
  OpenNebula: best-effort
  EMOTIVE Cloud: best-effort
  OpenStack: best-effort

Interfaces:
  Eucalyptus: EC2 SOAP WS API and S3, Elastic Block Store (EBS)
  OpenNebula: EC2, Sunstone, vCloud, OCCI API (storage, virtualization, network)
  EMOTIVE Cloud: WS REST / OCCI API (virtualization, network)
  OpenStack: S3 and EC2

Support for hybrid Cloud:
  Eucalyptus: no
  OpenNebula: Amazon EC2 and ElasticHosts
  EMOTIVE Cloud: Amazon EC2
  OpenStack: S3 and EC2 this year

Initial placement:
  Eucalyptus: -
  OpenNebula: based on requirement/rank policies to prioritize the resources most suitable for the VM using dynamic information, and dynamic placement to consolidate servers
  EMOTIVE Cloud: simple scheduling and high-availability scheduling
  OpenStack: several to choose from (simple, chance, etc.), with nova-scheduler evolving for future releases

Configurable placement policies:
  Eucalyptus: no
  OpenNebula: support for any static/dynamic placement policy
  EMOTIVE Cloud: easy RESTful interface to extend with some development
  OpenStack: an area of hot development for future releases of OpenStack Nova

OVF support:
  Eucalyptus: no
  OpenNebula: yes
  EMOTIVE Cloud: yes (alpha)
  OpenStack: -

Administration interface:
  Eucalyptus: only EC2 can be used (i.e. no suspend or migration of any kind)
  OpenNebula: a superior administration interface (migrate, suspend VM, ...)
  EMOTIVE Cloud: a superior administration interface (migrate, suspend VM, ...)
  OpenStack: yes

Advanced contextualization:
  Eucalyptus: no
  OpenNebula: complete
  EMOTIVE Cloud: basic
  OpenStack: basic

Powerful API to extend:
  Eucalyptus: basic (EC2 calls)
  OpenNebula: yes
  EMOTIVE Cloud: yes
  OpenStack: http://www.virtualizationtimes.com/does-openstack-change-cloud-game

Users management / Authorization & Authentication:
  Eucalyptus: yes
  OpenNebula: yes
  EMOTIVE Cloud: no
  OpenStack: Amazon API, VMware's vCloud, Eucalyptus, OpenNebula and others

MySQL support:
  Eucalyptus: no
  OpenNebula: sqlite and MySQL
  EMOTIVE Cloud: no
  OpenStack: [BETA] sqlite3, MySQL and PostgreSQL
5 Conclusions
5.1 Summary
Computer Science is a discipline that evolves very fast and is usually focused on performance growth. Its technological impact has greatly influenced our society, and it is important to consider the power consumption of computer science and cloud computing. Improving this parameter is harder than improving others, since research on green computing has to play with physical laws (as in most engineering and other disciplines) (37). Nowadays, therefore, more effort is being devoted to improving the ecological aspects.
In our tests we have seen a first approach to using hybrid architectures to improve power consumption while trying not to lose performance. In this project we also contributed new features to improve interoperability between Clouds, support for new hypervisors, and other features such as the EMOTIVE modular architecture, which makes it easy to bring in new schedulers, new interfaces, new developments and adaptations.
In general, this project gives a global vision of one type of IaaS project. It focused on evolving this middleware to achieve new features and new visions to improve it, always in connection with the research conducted at UPC and BSC. It should be clear that EMOTIVE does not aim to compete with products such as OpenNebula, OpenStack and others: EMOTIVE is a tool for testing and research, so all environments created with EMOTIVE are pre-production environments. Basically, this framework is used by BSC and UPC to do research in Cloud Computing (21) (38) (39) (40) (41) (42) (43), mainly in Infrastructure as a Service environments.
5.2 Publications
This section details a list of publications related to this master thesis:
Book chapter: EMOTIVE Cloud: The BSC's IaaS Open Source Solution for Cloud Computing. Àlex Vaqué, Iñigo Goiri, Jordi Guitart and Jordi Torres. Universitat Politècnica de Catalunya (UPC) and Barcelona Supercomputing Center (BSC), April 2011.
Presentation: (OGF30) Open Grid Forum 30 2010 (Brussels) – November 2010. Open Cloud
Computing Interface presentations (OCCI-WG) - Toward Interoperable Clouds: the EMOTIVE
Experience with OCCI. Alexandre Vaqué.
Technical report: F. Julià, J. Roldan, R. Nou, O. Fitó, A. Vaqué, I. Goiri, J. Berral. "EEFSim: Energy Efficiency Simulator". Research Report number: UPC-DAC-RR-CAP-2010-15, June 2010.
5.3 Suggestions for future work
EMOTIVE needs to improve some features, for example its OVF compatibility: EMOTIVE's OVF support is currently an unstable alpha version. If we want full interoperability, we need more compatibility with the most popular Cloud interfaces such as the OCCI API, d-Cloud, vCloud and EC2. In contrast, OpenNebula has many interface compatibilities and can adapt to many Cloud environments because it supports OCCI, EC2, vCloud and Sunstone. While there are no defined standards, supporting many interfaces is a good solution, but it has a cost in development time, so it will be necessary to pay attention to standardization. OCCI and OVF are well positioned to become the leading open standards. KVM may also become a future open-source standard hypervisor because it is evolving inside the Linux kernel; in contrast, Xen is losing share while KVM is gaining strength, although Xen support is now being integrated into Linux kernel 3.0.
Coming back to EMOTIVE features, it needs RESTful web service communication with user and password authentication, in order to have secure communications and to manage users.
Another aspect is that EMOTIVE, in comparison with other middlewares, has no explicit storage management. It would be interesting to add this feature using some open-source tool or developing it from scratch; with it, EMOTIVE could be 100% adapted to the OCCI API, since it already supports the compute and network parts. Virtual network management is still very basic, however, because the Libvirt API has limited network management; a good way to improve this could be to use the open-source Open vSwitch tool (44) or the Linux ebtables/iptables system, as ONE does. But before developing this, we need to do more research on virtual networks if we want to progress in this line.
Libvirt has a Windows installation package in development; it is currently an experimental version. It is also worth mentioning that EMOTIVE Cloud is developed in Java, which is multi-platform and can be used on Windows. Since EMOTIVE Cloud uses the Libvirt API, it is interesting to follow the evolution of Libvirt on Windows, because in the future we might be able to create EMOTIVE Cloud environments on the Windows operating system.
New middlewares and big communities are emerging, and OpenStack is now gaining strength. When we began this master thesis OpenStack did not exist, and during its development OpenStack started to emerge. OpenStack promises a lot and is an important rival for OpenNebula. For now, however, we think OpenNebula is better positioned to become a standard and is the best open-source solution: it is strong, experienced and ready to demonstrate that it is currently a better solution than OpenStack, Eucalyptus and others. But OpenStack is supported and funded by big international companies.
KVM, OpenNebula, OpenStack and others evolve continuously and quickly. In just the few months spent writing this project they published many new features and results; for example, OpenNebula published v1.4, later v2.0, v2.2 and now version 3.0! They show aggressive growth and continuous development. It has been demonstrated that Cloud Computing is not the future but the present.
Talking about green hardware: Intel's historical evolution focused only on performance, but now they are also beginning to improve CPU power consumption (45). On the other side, ARM-based processors dominate the mobile chipset market and are now starting to make a small entry into the enterprise server space, where Intel owns the majority of the market; these chips run on lower power consumption than Intel's. It would be interesting to extend our benchmarks to this kind of CPU architecture to improve the green features, because ARM has an interesting green architecture. Performance is no longer the only variable; power consumption is a new important variable to consider, and Intel needs to improve in this respect.
Finishing this master thesis, we read interesting news about SNIA/CDMI (46), an important Cloud Storage Initiative that collaborates with OCCI to improve the storage interface. Other news that must be mentioned is OpenCompute (47): the Open Compute Project is a Facebook rollout, a new effort to create open industry standards for data center hardware and design based on Facebook's work at its new Oregon data center. This project invites sharing open information in order to create an open community around the datacenter (CPD) ecosystem and to improve the PUE