Monitoring VirtualBox Performance

Siyuan Jiang and Haipeng Cai

Department of Computer Science and Engineering, University of Notre Dame
Email: [email protected], [email protected]

Abstract—Virtualizers built on Type II VMMs have been popular among non-commercial users due to their easier installation and use, along with lower cost, than those on Type I VMMs. However, the overall performance of these virtualizers has mostly been found worse than that of the latter from the VM user's point of view, and the reasons remain to be fully investigated. In this report, we present a quantitative study of VMM performance in VirtualBox in order to examine the performance bottleneck in this representative Type II virtualizer. Primary performance metrics for the VMM and the VMs are separately collected and analyzed, and implications of the results are discussed, with the monitoring overhead quantified through well-known CPU, memory and I/O benchmarks. Results show that the VMM takes only a marginal portion of resources within the whole virtualizer in the case of VirtualBox, and that our monitoring incurs merely a negligible performance overhead and perturbation to the virtualizer.

1 INTRODUCTION

A virtual machine monitor (VMM) is a system that provides virtual environments in which other programs can run in the same manner as they would run directly in the real environment. A virtual machine (VM) represents the virtual environment that a VMM provides. The software running upon a VM is called guest software; the operating system running upon a VM is called the guest operating system (guest OS). VMMs are categorized into two types [1]. Type I VMMs run directly on hardware, which means they have specific hardware requirements. Type II VMMs run upon operating systems, which means they behave like other programs and do not require extra effort in installation and use.

Although convenient and widely used by common users, Type II VMMs suffer from significant performance issues. As King et al. [2] showed, VMs running on a Type II VMM (UMLinux) take more than 250 times longer than those running on a hybrid between Type I and Type II (VMware Workstation [3]) to execute a null system call on average. In that work, the performance of VMMs is estimated by running benchmarks or particular system calls upon VMs and comparing their running times under different VMMs.

In contrast to this approach, we focus on investigating the performance bottleneck caused by unbalanced resource usage. We aim to monitor performance metrics of the VMM and the VMs separately, because we believe a better understanding of the overhead of Type II VMMs can lead to practical and effective improvements in VMM design.

For this study, firstly, we choose VirtualBox¹ as our object because it is a professional, open-source VMM project with a large user base. Secondly, we implement several performance collectors inside VirtualBox to record performance metrics, such as memory usage, of

1. VirtualBox is open source software released under the terms of the GNU General Public License (GPL) version 2.

Fig. 1: Interactions between our project and VirtualBox. The figure shows the Performance Collectors inside the instrumented VMM feeding the Performance Monitor, with Virtual Machine 1 and Virtual Machine 2 running on the VMM atop the host OS.

the VMM itself and of the VMs running on it. Thirdly, we implement a performance monitor to organize and aggregate the data collected from the performance collectors. By comparing the resource usage of the VMM with that of the VMs, we investigate how much the VMM costs relative to the total cost.

Figure 1 shows the overall architecture of our project. We inspect the internal running state of the VirtualBox VMM by instrumenting performance monitoring agents in the VMM source code. Pertinent information collected by those monitoring agents is gathered by the Performance Collectors, which then send it to our Performance Monitor, where designated performance metrics are calculated at runtime.
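The collector-to-monitor data flow described above can be sketched as follows. This is an illustrative sketch only: `MetricSample` and `PerfMonitor` are names made up for this report, not VirtualBox APIs, and the derived metric computed here is simply the running mean.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Illustrative sketch: a raw sample tagged with its origin and metric name.
struct MetricSample {
    std::string source;  // e.g. "VMM" or "Guest-XP"
    std::string metric;  // e.g. "mem.used.kb"
    double value;
};

class PerfMonitor {
    // One time series per source/metric pair, as received from collectors.
    std::map<std::string, std::vector<double>> series_;
public:
    // A performance collector forwards each raw sample here
    // (via COM in the actual project).
    void report(const MetricSample& s) {
        series_[s.source + "/" + s.metric].push_back(s.value);
    }
    // A designated metric derived at runtime; here, simply the mean.
    double mean(const std::string& key) const {
        const std::vector<double>& v = series_.at(key);
        double sum = 0.0;
        for (double x : v) sum += x;
        return sum / v.size();
    }
};
```

In this shape, the collectors stay dumb producers of raw values, and all metric derivation is centralized in the monitor, which matches the division of labor described above.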

The experimentation includes three parts. The first is running one or two VMs on the instrumented VMM; monitored metrics of the VMM and of the VMs are collected respectively and examined along with the running situations of the VMs at the time, e.g., the startup phase of the operating system. The second part is to see how the overhead of the VMM may increase as a VM uses more resources. Lastly, the overhead introduced by the instrumentation is gauged roughly by comparing major performance indicators attributed to VirtualBox processes, as reported by the legacy system monitor running on the host OS, before and after the instrumentation is applied.


2 RELATED WORK

We address two categories of previous work related to our project: work on VMM performance, which is the main theme of our project, and work on source code instrumentation, which is the primary approach in our implementation of the project proposal.

The performance characteristics of virtual machines are one of the major concerns in VMM design [1]. However, virtual machines running on Type II VMMs can suffer a great performance loss compared to running directly on standalone systems [2], to the extent that efficiency has been treated as one of the three formal requirements of any VMM [4].

In this context, the performance of various VMMs has been analyzed and compared independently, beyond the simple runtime-statistics functionality shipped with the complete virtualizer packages. To compare the performance of the VMMs of VMware and VirtualBox, Vasudevan et al. create two virtual machines with each of the two virtualizers, one running Windows and the other Ubuntu Linux. They then measure the peak floating point computing power in GFLOPS using the LINPACK benchmark, and the bandwidth in Mbps using the Iperf benchmarking tool [5]. A similar but earlier study was done by Che et al. [6], where the performance of the VMMs in Xen and KVM was contrasted using benchmarking tools also including the LINPACK package. Unlike these performance evaluations, conducted indirectly by running user-level applications at the top level of the VM system hierarchy, we directly gauge the runtime dynamics of the VMM's internals with respect to its scheduling and controlling tasks for the virtual machines running on it.

Another differentiation lies in the approach to measurement. While both works above measure through application-level (benchmarking) tools without modifying the VMM or other components of the virtualizer, we aim to probe the VMM through source code instrumentation. Among previous examples of related approaches, one instruments an operating system kernel to capture processor counters, which are used to calculate performance metrics [7]. Further, those authors embed benchmarks into Linux kernel modules to eliminate most interference from the operating system and interrupts, and thus reduce the perturbation of the instrumentation. Applied at the application level, another example of source code instrumentation maps dynamic events to transformations at the program source code level in aspect-oriented programming (AOP) [8]. By contrast, we instrument the VMM, also at the source code level, but for collecting performance and resource usage information at runtime.

In fact, this instrumentation approach has been applied in many other areas. In SubVirt [9], a virtual machine monitor was instrumented to build a rootkit for the purpose of malware simulation. The virtual machine based rootkit (VMBR) was implemented to subvert Windows XP and Linux target systems in order to study various ways of defending against real-world rootkit attacks. For a similar security research purpose, Garfinkel and Rosenblum use a virtual machine monitor to isolate an intrusion detection system (IDS) from the monitored host in their virtual machine introspection architecture [10]. We focus on performance issues of the particular Type II VMMs and adopt the instrumentation approach solely for this purpose.

Fig. 2: The architecture of our project. The figure shows VBoxSVC hosting the Performance Collectors; VM1, VM2 and VM3 each attached through a client over COM; and the Performance Monitor (thread) connected to the VMM Monitor (GUI) via COM and shared memory, all running on the host OS.

3 IMPLEMENTATION

To investigate the performance bottleneck of VirtualBox, we implement a Performance Monitor for VirtualBox to record performance metrics of the VMM and the VMs. To retrieve the relative resource usage of different parts of the VMM and the VMs, we implement Performance Collectors inside VirtualBox, which collect resource usage information and send it to the Performance Monitor. The project is developed in C++ under Fedora 17 Linux with GCC 4.4.1. The GUI is developed using Qt 4.8.3.

3.1 Architecture of VirtualBox

Our project is built upon VirtualBox, a representative Type II VMM product. The architecture of VirtualBox [11] is shown in Figure 2. As a Type II VMM, VirtualBox is software running upon a host operating system (host OS). Above the host OS there is a system service, VBoxSVC, which is the VMM of VirtualBox and maintains all VMs that are currently running. Each VM works with a VirtualBox client, which helps the VM interact with VBoxSVC.

3.2 Overall Approach

Figure 2 shows how our project is implemented inside VirtualBox and how data is transferred among the different components. Our implementation has three main parts: (1) the Performance Collectors, (2) the Performance Monitor and (3) the VMMMonitor (GUI).


Fig. 3: The instrumented VirtualBox, where the VBoxPerfMon (right-hand side) we extended works as an integral component.

First, the Performance Collectors, one for each of the three main categories of metrics, (1) CPU usage, (2) memory usage and (3) I/O traffic, were implemented inside VBoxSVC. They send raw metrics to the Performance Monitor via COM (Component Object Model). The three performance collectors were inserted into the existing COM interface (named IPerformanceCollector) provided in the original source package. More precisely, since we performed experiments on Fedora 17 Linux, we extended the IPerformanceCollector service to cover the metrics of our interest for Linux only (in main/src server/linux/PerformanceLinux). Second, the Performance Monitor, a child thread created in VBoxSVC, maintains all metrics it has received. Third, the visualizer of the performance metrics was built as an extended GUI interface (named VMMMonitor) upon the existing Virtual Machine Manager GUI (VMManager), precisely, as a non-modal child dialog of it.
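Since the Linux-only collectors ultimately read host statistics such as memory usage, the kind of parsing involved can be sketched as below. The helper is hypothetical, not the actual PerformanceLinux code; it merely decodes one line in the format used by Linux's /proc/meminfo.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical helper: decode one "/proc/meminfo"-style line such as
// "MemFree:  514304 kB", returning the value in kB for the requested
// key, or -1 if the line does not carry that key.
long parseMeminfoLine(const std::string& line, const std::string& key) {
    std::istringstream in(line);
    std::string label;
    long kb = 0;
    if ((in >> label >> kb) && label == key + ":") return kb;
    return -1;
}
```

A real collector would read such lines periodically from the host and forward the decoded values over the COM interface described above.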

As regards the runtime mode of working, all performance collectors work in a single COM server to be consistent with the original framework of IPerformanceCollector, while the Performance Monitor and VMMMonitor run as a child QtGui thread created by the main thread of the original VMManager. This instrumented QtGui thread itself then hosts a renderer thread and separate worker threads, one per category of metrics, each running as a COM client of the extended IPerformanceCollector COM service; the renderer and workers communicate through the legacy Qt4 mechanism of signals and slots.
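The renderer/worker hand-off can be sketched as follows, with Qt's signal and slot delivery replaced by a mutex-guarded queue so the example stays self-contained. `SampleBus` is our illustrative name, not part of VirtualBox or Qt.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Workers post raw readings (the "signal" side); the renderer drains
// them (the "slot" side). The real code uses Qt4 signals and slots.
class SampleBus {
    std::queue<double> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
public:
    void post(double v) {  // called from a worker thread
        std::lock_guard<std::mutex> lk(m_);
        q_.push(v);
        cv_.notify_one();
    }
    void finish() {        // no more samples will arrive
        std::lock_guard<std::mutex> lk(m_);
        done_ = true;
        cv_.notify_one();
    }
    std::vector<double> drain() {  // called from the renderer thread
        std::vector<double> out;
        std::unique_lock<std::mutex> lk(m_);
        while (!done_ || !q_.empty()) {
            cv_.wait(lk, [this] { return done_ || !q_.empty(); });
            while (!q_.empty()) {
                out.push_back(q_.front());
                q_.pop();
            }
        }
        return out;
    }
};
```

A worker would be a `std::thread` calling `post` once per collected metric; the queued hand-off decouples collection cadence from rendering cadence, which is the same design motive as the cross-thread queued delivery in Qt.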

4 EVALUATION

We have implemented the source code instrumentation approach to performance monitoring for the VirtualBox VMM. With the current configuration of the platform (see Section 4.3) on which we develop and run all experiments, a complete build of the source package takes about 15 minutes, with no noticeable extra overhead introduced by our work in this regard. Figure 3 shows a screenshot of the running instrumented VirtualBox.

4.1 Metrics of Measurement

Currently two major categories of results have been collected and analyzed: (1) performance measurements of both the VMM and running VMs; and (2) the performance overhead and VMM perturbation of our instrumentation approach.

For the first category, the primary metrics, CPU and memory usage, were monitored over a period of time. With a user-defined interval t, the measurement of these metrics was updated at runtime every t seconds by retrieving the related dynamic records received from the instrumented IPerformanceCollector service, and the results were pushed to the VBoxPerfMon frontend, which hosts simple time-varying visualizations. These metrics were chosen because they are well-recognized, strong indicators of the overall performance of the holistic virtualizer that common users can directly experience. Therefore, exploring these metrics, in particular those attributed to the VMM, is what answering our motivating questions requires.
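The interval-driven refresh just described can be sketched minimally as below, assuming an arbitrary `probe` callback in place of the actual query to the instrumented IPerformanceCollector service.

```cpp
#include <cassert>
#include <chrono>
#include <functional>
#include <thread>
#include <vector>

// Poll `probe` once per interval, `ticks` times, collecting the
// readings. In the monitor described above the interval t is
// user-defined and the probe queries the collector service; here it is
// an arbitrary callback so the sketch stays self-contained.
std::vector<double> sampleEvery(std::chrono::milliseconds interval,
                                int ticks,
                                const std::function<double()>& probe) {
    std::vector<double> readings;
    for (int i = 0; i < ticks; ++i) {
        readings.push_back(probe());
        std::this_thread::sleep_for(interval);
    }
    return readings;
}
```

Each batch of readings would then be handed to the frontend to extend its time-varying plot by one time slot.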

For the second category of metrics, we were concerned with the aggregate instrumentation overhead and perturbation to the VMM, including those of the performance collector interface extension and the VBoxPerfMon frontend. This was measured by running the original VirtualBox and the instrumented one on a given set of VM workloads separately, and then comparing the CPU and memory statistics provided by top on the host OS and the benchmark scores obtained on the guest OSes. These tests were included in our experimental design because it is important to know whether our work causes too much overall performance penalty, and how acceptable our approach is in terms of the extraneous costs that concern both system analysts and end users. More importantly, a heavy overhead of our work itself would even affect the accuracy of the performance metrics we obtain for the VMM and the VMs, in addition to those for the whole virtualizer.

4.2 Experimental Design

Since our goal is to investigate possible reasons for the unsatisfactory performance of a Type II virtualizer like VirtualBox, which we supposed to lie in the core VMM, we separately measured the performance metrics associated with the running VMs and those purely dedicated to the core VMM alone. To do this, we tested the fluctuations of the related metrics in response to a gradual increase in the number of running VMs from 0 up to 2 (our tests were limited to 2 VMs running concurrently due to the processor and physical memory limitations of our test platform). During the tests, we observed the metric changes for the VMM against those of the whole virtualizer at different loads, including hosting a VM without applications running inside (i.e., on the guest OS) and with benchmarks running inside. We used SciMark2 [12] as the CPU benchmark, and h2 and fop from the DaCapo benchmark suite [13] as the memory and disk I/O benchmarks, respectively, for our experiments.


When comparing the performance of the original and the instrumented virtualizer on the same set of tasks in order to measure the instrumentation overhead and perturbation, we ran two VMs concurrently on the virtualizer, one running a Windows XP SP3 guest OS and the other Fedora 17 Linux, with both executing the same benchmark in each test, totalling 3 groups of tests, one per benchmark described above. In each test, we collected both the aggregate CPU and memory usage statistics associated with all VirtualBox processes from the host OS's point of view, and the time spent finishing the benchmark in both VMs as reported by the corresponding benchmark program. Due to the limited test platform resources, we ran each benchmark 10 times and took the average as the quantity for analysis.

4.3 Experimental Setup

The VirtualBox source code was instrumented and then rebuilt using the build scripts shipped with the source package. During the experimentation, the host machine was a portable HP COMPAQ Presario CQ60 notebook running Fedora 17, mounting a single-core Intel Celeron 2.20 GHz processor with a 1024 KB cache and 2 GB of DDR2 physical memory. The Windows XP SP3 VM was assigned 512 MB main memory, 16 MB VRAM and a 10 GB IDE virtual HDD. For the Linux VM, we configured 768 MB main memory along with 12 MB VRAM and a 10 GB IDE virtual HDD.

4.4 Results and Analysis

To demonstrate the different resource usage patterns, we monitored VirtualBox in four situations: running VirtualBox with no VM started, running VirtualBox starting one VM, running VirtualBox starting two VMs, and running VirtualBox with two VMs starting one benchmark, scimark2. Figure 4 and Figure 5 exhibit the results of monitoring memory usage and CPU usage in the four situations. In Figures 4a, 4b, 5a and 5b, the time slots represented on the x-axes span a period of 67 seconds, while in Figures 4c, 4d, 5c and 5d they span a period of 111 seconds. In Figure 4, the y-axes are the memory costs of the corresponding series. In the legends of Figure 4, VMM is the core VMM in VirtualBox; VirtualBox total is the entire VirtualBox, which includes the VMM and all other components, such as the VMs and frontends; Guest-XP+VMM is the sum of the memory cost of the VM (with the Windows XP operating system) and that of the core VMM; Guest-Fedora+XP+VMM is the sum of the memory costs of the two VMs (one with the XP operating system and one with the Fedora operating system) and that of the core VMM. In Figure 5, the y-axes are the CPU usage percentages of the corresponding series. In the legends of Figure 5, VMM again is the core VMM in VirtualBox; User-level includes all the components in VirtualBox that are neither VMs nor the core VMM; Guest-XP is the VM with the XP operating system; Guest-Fedora is the VM with the Fedora operating system.

Comparing the four panels of Figure 4, we can see that there is always an approximately 0.4 GB gap between the total memory cost of VirtualBox and the memory cost of the VMM and VMs. The gap shrinks slightly after two VMs are launched in VirtualBox, which is understandable because the VMs use some of the memory that VirtualBox has already allocated. The main observation in Figure 4 is the steadily low memory cost of the core VMM except when a VM is starting. When there is one VM to start, the VMM uses less than 0.1 GB of memory for 10 seconds, while it uses almost the same amount of memory, but for more than 60 seconds, to start two VMs at the same time. Overall, the memory cost of the VMM is almost negligible compared to the other components.

For CPU usage, different from memory usage, we can see in Figure 5a that the VMM is the major CPU resource user in VirtualBox, which is reasonable because with no VM started, VirtualBox has only started the VMM service underneath, while the other, higher-level components are not launched. On the other hand, in the other three panels of Figure 5, we can see that the CPU usage percentages of the VMs and the VMM stay low all the time, while the total CPU usage of VirtualBox fluctuates and is much higher. This leads to the conclusion that the CPU cost of the VMM is low in most situations.

The second evaluation was conducted to estimate the overhead of our monitoring. We ran three benchmarks on the two VMs and recorded the finish time of each benchmark, as shown in Figure 6. There are six columns in Figure 6, each representing the finish time of one benchmark in one VM. The solid black area represents the amount of time by which our monitoring has increased the finish time. The two VMs run the benchmarks at the same time under the same VirtualBox, so the overhead of our monitoring shown in Figure 6 is larger than the overhead when the VMs do not run concurrently. The proportion of the overhead in running the fop benchmark is larger than for the other benchmarks, because its finish time is relatively short while there is a certain fixed overhead in our method, such as the initialization of metrics collection. Overall, the overhead of monitoring is between 1.6% and 39.0%.
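The overhead percentages quoted above follow from comparing finish times with and without instrumentation. A sketch of the calculation (the function name is ours, not from the paper's code):

```cpp
#include <cassert>
#include <cmath>

// Relative monitoring overhead from two finish times of the same
// benchmark (in seconds): the percentage by which the instrumented run
// exceeds the uninstrumented baseline.
double overheadPercent(double instrumented, double baseline) {
    return (instrumented - baseline) / baseline * 100.0;
}
```

For example, a hypothetical run lengthened from 100 s to 139 s would give 39.0%, the upper end of the range we observed.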

Additionally, we also logged the entire resource usage of VirtualBox, monitored and unmonitored respectively, under four circumstances: (1) two VMs running fop, (2) two VMs running h2, (3) two VMs running scimark2 and (4) two VMs doing nothing. Table 1 and Table 2 show the average usage of the corresponding resources, comparing the situations with our monitoring to those without it. The average usage is increased by around 5%.
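One way to summarize Tables 1 and 2 is the mean monitored-minus-unmonitored difference across the four workloads; a sketch (the helper is ours), using the CPU rows of Table 1 as input:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Mean difference, in percentage points, across a table of
// (monitored, not monitored) usage rows.
double averageIncrease(const std::vector<std::pair<double, double>>& rows) {
    double total = 0.0;
    for (const std::pair<double, double>& r : rows)
        total += r.first - r.second;
    return total / rows.size();
}
```

Applied the same way to the memory rows of Table 2, the helper yields the corresponding mean increase for memory usage.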

5 CONCLUSION

We have presented a preliminary quantitative study of the performance-wise dynamics in the VMM component of the open source virtualizer VirtualBox, as a representative of the Type II VMMs that have been reported


Fig. 4: Memory usage monitoring in four situations: (a) no VM running; (b) one VM running; (c) two VMs running; (d) two VMs running with benchmarks. Each panel plots memory cost (GB) against time (sec.) for the series VMM, VirtualBox total, Guest-XP+VMM and Guest-Fedora+XP+VMM.

TABLE 1: Average total CPU usage percentage of VirtualBox

Benchmark   Monitored (%)   Not monitored (%)
fop         19.91           11.38
h2          12.85           12.10
scimark2    12.66           10.15
none        10.23            7.37

TABLE 2: Average total memory usage percentage of VirtualBox

Benchmark   Monitored (%)   Not monitored (%)
fop         73.50           72.89
h2          73.29           72.90
scimark2    72.25           68.26
none        71.91           67.71

to have performance issues in practical application. To do this, we developed a runtime performance monitor of the VirtualBox VMM by instrumenting the source code of the VMM, inserting performance inspectors that communicate with the core VMM module via COM interfaces. The monitor itself was designed as a separate module running as a child thread created by the main VirtualBox thread (VBoxSVC), and it launches performance monitoring along with the start of the VirtualBox Virtual Machine Manager, the typical bootstrapping interface used by common users.

Fig. 6: The finish time of running benchmarks in different VMs. Six columns, one per benchmark/guest-OS pair, show the finish time of the uninstrumented VM, with a solid black area marking the additional time due to instrumentation.

We have measured the primary performance metrics, including memory usage and CPU usage, collected on the basis of the resource usage solely consumed by the VMM compared to that of the whole virtualizer. Based on the data retrieved, we have presented an analysis that is expected to inform the answers to the research questions that motivated this project at the beginning. Our results imply that the VMM should not be the real culprit behind the overall unsatisfactory performance of the Type II virtualizer.

In addition, our measurement of the overhead incurred by the instrumentation approach we implemented provides evidence of a negligible cost, and hence of the promising practicality of the present work.

REFERENCES

[1] R. Goldberg, "Survey of virtual machine research," IEEE Computer, vol. 7, no. 6, pp. 34–45, 1974.
[2] S. T. King, G. W. Dunlap, and P. M. Chen, "Operating system support for virtual machines," in Proceedings of the USENIX Annual Technical Conference, Berkeley, CA, USA, 2003, pp. 71–84.
[3] J. Sugerman, G. Venkitachalam, and B. Lim, "Virtualizing I/O devices on VMware Workstation's hosted virtual machine monitor," in USENIX Annual Technical Conference, 2001, pp. 1–14.
[4] G. Popek and R. Goldberg, "Formal requirements for virtualizable third generation architectures," Communications of the ACM, vol. 17, no. 7, pp. 412–421, 1974.


Fig. 5: CPU usage percentage monitoring in four situations: (a) no VM running; (b) one VM running; (c) two VMs running; (d) two VMs running with benchmarks. Each panel plots CPU usage (%) against time (sec.) for the series VMM, User-level, Guest-XP and Guest-Fedora.

[5] V. M. S., B. R. Mohan, and D. K. Damodaran, "Performance measuring and comparison of VirtualBox and VMware," in International Conference on Information and Computer Networks, vol. 27, 2012, pp. 42–47.
[6] J. Che, Q. He, Q. Gao, and D. Huang, "Performance measuring and comparing of virtual machine monitors," in Proceedings of the 2008 IEEE/IFIP International Conference on Embedded and Ubiquitous Computing, vol. 2, 2008, pp. 381–386.
[7] H. Najafzadeh and S. Chaiken, "Source code instrumentation and its perturbation analysis in Pentium II," State University of New York at Albany, Albany, NY, USA, Tech. Rep., 2000.
[8] R. Filman and K. Havelund, "Source-code instrumentation and quantification of events," in Workshop on Foundations of Aspect-Oriented Languages, 1st International Conference on Aspect-Oriented Software Development (AOSD), Enschede, Netherlands, 2002.
[9] P. M. Chen and S. T. King, "SubVirt: Implementing malware with virtual machines," in 2006 IEEE Symposium on Security and Privacy, 2006, pp. 14–27.
[10] T. Garfinkel and M. Rosenblum, "A virtual machine introspection based architecture for intrusion detection," in Proc. Network and Distributed Systems Security Symposium, 2003.
[11] Oracle VM VirtualBox User Manual, Oracle Corporation, https://www.virtualbox.org/manual/UserManual.html, 2012.
[12] R. Pozo and B. Miller, "SciMark 2.0," http://math.nist.gov/scimark2, 2012.
[13] "DaCapo benchmark suite," The DaCapo Group, http://dacapobench.org, 2012.