Page 1: Virtualization

Virtualization

• In the early days of computing, virtualization as we know it today didn't really exist. Instead, emulation was used.

• In emulation, the behavior of a complete computer is copied to a software program. The emulation layer talks to an operating system, which in turn talks to the computer hardware. The operating system that you install in an emulation layer doesn't see that it is running in an emulated environment, so you can install it just as you would normally install your favourite operating system.

• Two popular open source emulators are QEMU (http://fabrice.bellard.free.fr/qemu/) and Bochs (http://bochs.sourceforge.net).

Page 2: Virtualization

Virtualization

• One of the most important properties of emulation is that all hardware is emulated, the CPU as well.

• This has advantages, such as the ability to run an operating system that was developed for another architecture on your own architecture. With this advantage, however, also comes the most important disadvantage: emulating a complete CPU carries a heavy performance price.

Page 3: Virtualization

Virtualization

• In the next generation, virtualization was taken to a higher level. Between the virtual machines and the hardware, no host operating system was required anymore.

• Instead, the virtual machine monitor, also known as the hypervisor, was introduced to run directly on the hardware. Because of this new architecture, virtualization became much more efficient. VMware, for example, was very successful with this approach as implemented in VMware ESX.

Page 4: Virtualization

Virtualization

• There are, however, two different approaches when virtualization is used this way. In the old approach, all instructions generated by the virtualized machine had to be translated to the appropriate format for the CPU, which involves a lot of work for the hypervisor.

• In the new approach, which is used by Xen, there is no translation between the instructions that leave the virtualized machine and the CPU that executes them.

• This can be accomplished in two ways.

• Option one is to use a CPU that understands the unmodified instructions generated by the virtualized operating system and interprets them (full virtualization).

• Option two is to modify the operating system so that it generates instructions that are optimized for use in a virtualized environment (paravirtualization).

Page 5: Virtualization

Full versus Para Virtualization

o Full virtualization is one way of handling virtualization. Using this method, the virtual machine talks to a component called the virtual machine monitor, and this virtual machine monitor talks to the hardware platform directly.

o To use full virtualization in a Xen environment, you need a CPU that understands the unmodified instructions generated by the virtualized operating system. Without this feature on the CPUs, it is not possible to use full virtualization in Xen.

o This is because in the Xen approach, not every instruction generated by the virtualized operating system is translated to a format that every CPU understands; that would be very resource intensive. Instead, the virtualization feature implemented in modern CPUs helps the virtualized operating system so that it can send out unmodified instructions.

o The main advantage of full virtualization is that an unmodified operating system is installed. This means that virtually every operating system that runs on the same architecture can be virtualized.

Page 6: Virtualization

Full versus Para Virtualization

o The most efficient approach to virtualization is paravirtualization.

o In paravirtualization, the guest operating system uses a specialized API to talk to the virtual machine monitor, which is responsible for handling the virtualization requests and passing them to the real hardware.

o Because of this special API, the virtual machine monitor no longer needs to do a resource-intensive translation of instructions before they can be passed to the hardware.

o Also, when using the paravirtualization API, the virtualized operating system is capable of generating much more efficient instructions.

o A disadvantage, however, is that you need a modified operating system that includes this specific API, and for certain operating systems (mainly Windows) this is an important disadvantage because such an API is not available.

Page 7: Virtualization

Virtualization

• What is virtualization?

• Virtualization is a broad term that refers to the abstraction of computer resources.

• Server virtualization
– Hardware: e.g., IBM pSeries and zSeries LPARs
– Software: e.g., VMware, Xen, Solaris Containers, SWsoft Virtuozzo, VirtualBox, KVM

• Storage virtualization
– Hardware: e.g., RAID, SAN
– Software: e.g., iSCSI, Veritas Storage Foundation, software RAID

Page 8: Virtualization

Virtualization

Page 9: Virtualization

Virtualization

• Virtual Machines

• Enabled by a layer that sits between the OS and the hardware
– OS instances think they are controlling the "real" machine
– The virtualization layer mediates access to hardware resources
– Permits multiple OS instances to coexist on a single server
– Even incompatible OSs can share a single server
– The "layer" is referred to as a Virtual Machine Monitor (VMM)

Page 10: Virtualization

Full Virtualization

Xen also supports full virtualization on CPUs that have been designed specifically for virtualization. (Examples include the next-generation AMD processors with AMD-V.) A fully virtualized operating system is one that has not been modified specifically to run in a virtual environment, so it is unaware that it is being virtualized. As a result, the hypervisor traps and emulates every I/O and hardware instruction that it deems privileged.

Typically, the overhead from these trapping and emulation operations would have a significant impact on performance. However, the AMD processors with AMD-V have been designed specifically for virtualization. The Xen hypervisor interacts with the virtualization extensions in the AMD processors not only to improve performance and efficiency, but also to provide hardware-based isolation between the unmodified guest operating systems running on a virtualization server.

The main benefit of full virtualization comes from its ability to host legacy operating systems that have not been paravirtualized. The ability to host these legacy operating systems in a virtualized environment is critical to a data center's server-consolidation efforts. This feature is mandatory for virtualizing proprietary operating systems, including those from Microsoft.

Page 11: Virtualization

Full Virtualization

• To run fully virtualized guests on systems with Hardware-assisted Virtual Machine (HVM) support, on Intel or AMD platforms, you must check that your CPUs have the capabilities needed to do so.

• To check if you have the CPU flags for Intel support, enter the following:

# grep vmx /proc/cpuinfo

• The output displays:

flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm

• If a vmx flag appears, you have Intel support.

• To check if you have the CPU flags for AMD support, enter the following:

# grep svm /proc/cpuinfo

• The output displays:

flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dt acpi mmx fxsr sse sse2 ss ht tm syscall nx mmxext fxsr_opt rdtscp lm 3dnowext pni cx16 lahf_lm cmp_legacy svm cr8_legacy

• If an svm flag appears, you have AMD support.
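• As a convenience, both checks can be combined in one shell test. This is a minimal sketch; the echoed messages are illustrative and not part of any Xen tool:

if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "CPU has hardware virtualization support (Intel VT-x or AMD-V)"
else
    echo "no vmx/svm flag found; only paravirtualized guests will run"
fi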

Page 12: Virtualization

Paravirtualization

Xen's unique performance benefits accrue from its use of paravirtualization. With paravirtualization, the operating system running inside a virtual machine (known as a guest operating system) is modified to run on top of a hypervisor.

The virtualized operating system instance is aware that it is running in a virtualized state and has been fine-tuned for optimal performance in that environment.

Paravirtualization allows the hypervisor to avoid hard-to-virtualize processor instructions by replacing them with procedure calls that provide the same functionality. A paravirtualized operating system loads and runs virtual drivers that are capable of interacting with Xen to access resources on the host virtual server. In other words, it does not require complete emulation of computer devices.

Page 13: Virtualization

Full & Paravirtualization Overview

Page 14: Virtualization

• A 32-bit host runs only 32-bit paravirtualized guests, and a 64-bit host runs only 64-bit paravirtualized guests. A 64-bit full-virtualization host runs 32-bit, 32-bit PAE, or 64-bit guests; a 32-bit full-virtualization host runs both PAE and non-PAE fully virtualized guests.

Page 15: Virtualization

Xen Architecture

Page 16: Virtualization

Xen Terminology

• guest operating system: An operating system that can run within the Xen environment.

• hypervisor: Code running at a higher privilege level than the supervisor code of its guest operating systems. The hypervisor is Xen itself. It sits between the hardware and the operating systems of the various domains. The hypervisor is responsible for checking page tables, allocating resources for new domains, and scheduling domains. It presents the domains with a virtual machine that looks similar, but not identical, to the native architecture. It is also responsible for booting the machine far enough that it can start dom0.

• Just as applications can interact with an OS by issuing syscalls, domains interact with the hypervisor by issuing hypercalls. The hypervisor responds by sending the domain an event, which fulfils the same function as an IRQ on real hardware.

• virtual machine monitor ("VMM"): In this context, the hypervisor.

• domain: A running virtual machine within which a guest OS executes.

• domain0 ("dom0"): The first domain, automatically started at boot time. Dom0 has permission to control all hardware on the system, and is used to manage the hypervisor and the other domains.

Page 17: Virtualization

Xen Terminology

• unprivileged domain ("domU"): A domain with no special hardware access.

• full virtualization: An approach to virtualization which requires no modifications to the hosted operating system, providing the illusion of a complete system of real hardware devices.

• paravirtualization: An approach to virtualization which requires modifications to the operating system in order to run in a virtual machine. Xen uses paravirtualization but preserves binary compatibility for user-space applications.

• HVM: Hardware Virtual Machine, the full-virtualization mode supported by Xen. This mode requires hardware support, e.g. Intel's Virtualization Technology (VT) or AMD's Pacifica technology.

• SVM: full-virtualization support on AMD's Pacifica-enabled processors.

• VT-x: full-virtualization support on Intel's x86 VT-enabled processors.

• VT-i: full-virtualization support on Intel's IA-64 VT-enabled processors.

Page 18: Virtualization

Xen Terminology

• backend: one half of a communication endpoint; interdomain communication is implemented using a frontend and backend device model interacting via event channels.

• frontend: the device as presented to the guest; the other half of the communication endpoint.

• vif: virtual interface; the name of the network backend device connected by an event channel to a network frontend on the guest.

• vethN: local networking frontend on dom0; renamed to ethN by the Xen network scripts in bridging mode.

• pethN: the real physical device (after renaming).

• live migration: A technique for moving a running virtual machine to another physical host, without stopping it or the services running on it.

Page 19: Virtualization

hypervisor

• Hypervisors are currently classified into two types:

• A Type 1 (or native, or bare-metal) hypervisor is software that runs directly on a given hardware platform (as an operating system control program). A guest operating system thus runs at the second level above the hardware.

• The classic Type 1 hypervisor was CP/CMS, developed at IBM in the 1960s, the ancestor of IBM's current z/VM. More recent examples are Xen, Oracle VM, VMware's ESX Server, L4 microkernels, TRANGO, IBM's LPAR hypervisor (PR/SM), Microsoft's Hyper-V (currently in beta), and Sun's Logical Domains Hypervisor (released in 2005). A variation of this is embedding the hypervisor in the firmware of the platform, as is done with Hitachi's Virtage hypervisor. KVM, which turns a complete Linux kernel into a hypervisor, is also Type 1.

Page 20: Virtualization

hypervisor

• A Type 2 (or hosted) hypervisor is software that runs within an operating system environment. A "guest" operating system thus runs at the third level above the hardware. Examples include VMware Server (formerly known as GSX), VMware Workstation, VMware Fusion, the open source QEMU, Microsoft's Virtual PC and Microsoft Virtual Server products, InnoTek's VirtualBox, as well as SWsoft's Parallels Workstation and Parallels Desktop.

• The term hypervisor apparently originated in IBM's CP-370 reimplementation of CP-67 for the System/370, released in 1972 as VM/370. The term hypervisor call, or hypercall, referred to the paravirtualization interface by which a "guest" operating system could access services directly from the (higher-level) control program – analogous to making a "supervisor call" to the (same-level) operating system. (The term "supervisor" refers to the operating system kernel, which on IBM mainframes runs in supervisor state.)

Page 21: Virtualization

Virtualization Technologies (cont.)

• Classification by interface

– Fully emulated hardware (Xen full virtualization)
  > Uses an unmodified OS
  > Privileged instructions must be trapped (performance relevant)

– Semi-emulated hardware (Xen paravirtualization)
  > Requires a modified guest OS
  > API for operations and processor instructions

Page 22: Virtualization

Prerequisites

• The following is a full list of prerequisites:

o A working Linux distribution using the GRUB bootloader and running on a P6-class (or newer) CPU.
o The iproute2 package.
o The Linux bridge-utils (e.g., /sbin/brctl).
o An installation of Twisted v1.3 or above. There may be a binary package available for your distribution; alternatively, it can be installed by running `make install-twisted' in the root of the Xen source tree.

Page 23: Virtualization

Prerequisites

o Build tools (gcc v3.2.x or v3.3.x, binutils, GNU make).
o Development installation of libcurl (e.g., libcurl-devel).
o Development installation of zlib (e.g., zlib-dev).
o Development installation of Python v2.2 or later (e.g., python-dev).

• Once you have satisfied the relevant prerequisites, you can install either a binary or source distribution of Xen.
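• The exact package names differ per distribution. As a hypothetical example, on a Debian-style system the list above might be satisfied with something like:

# apt-get install gcc binutils make bridge-utils iproute libcurl4-openssl-dev zlib1g-dev python-dev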

Page 24: Virtualization

Installing from Binary Tarball

• Pre-built tarballs are available for download from the Xen download page: http://xen.sf.net

• Once you've downloaded the tarball, simply unpack and install:

# tar zxvf xen-2.0-install.tgz
# cd xen-2.0-install
# sh ./install.sh

Page 25: Virtualization

GRUB Configuration

• An entry should be added to grub.conf (often found under /boot/ or /boot/grub/) to allow Xen / XenLinux to boot. This file is sometimes called menu.lst, depending on your distribution. The entry should look something like the following:

title Xen 2.0 / XenLinux 2.6
kernel /boot/xen-2.0.gz dom0_mem=131072
module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro console=tty0

• The kernel line tells GRUB where to find Xen itself and what boot parameters should be passed to it (in this case, setting domain 0's memory allocation in kilobytes and the settings for the serial port).

Page 26: Virtualization

GRUB Configuration

• Serial Console (optional)

• In order to configure Xen serial console output, it is necessary to add a boot option to your GRUB config; e.g. replace the above kernel line with:

kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1

• This configures Xen to output on COM1 at 115,200 baud, 8 data bits, 1 stop bit and no parity. Modify these parameters for your setup.

• One can also configure XenLinux to share the serial console; to achieve this, append `console=ttyS0' to your module line.

• If you wish to be able to log in over the XenLinux serial console, it is necessary to add a line to /etc/inittab, just as for regular Linux. Simply add the line:

c:2345:respawn:/sbin/mingetty ttyS0

and you should be able to log in. Note that logging in as root over the serial line requires adding ttyS0 to /etc/securetty in most modern distributions.
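• A minimal sketch of that last step (the file path is standard, but check your distribution):

# echo ttyS0 >> /etc/securetty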

Page 27: Virtualization

Xen Boot Options

• These options are used to configure Xen's behaviour at runtime. They should be appended to Xen's command line, either manually or by editing grub.conf.

• noreboot: Don't reboot the machine automatically on errors. This is useful to catch debug output if you aren't catching console messages via the serial line.

• nosmp: Disable SMP support. This option is implied by `ignorebiostables'.

• watchdog: Enable the NMI watchdog, which can report certain failures.

• noirqbalance: Disable software IRQ balancing and affinity. This can be used on systems such as Dell 1850/2850 that have workarounds in hardware for IRQ-routing issues.

• badpage=<page number>,<page number>,...: Specify a list of pages not to be allocated for use because they contain bad bytes. For example, if your memory tester says that byte 0x12345678 is bad, you would place `badpage=0x12345' on Xen's command line.

Page 28: Virtualization

• com1=<baud>,DPS,<io_base>,<irq> com2=<baud>,DPS,<io_base>,<irq>: Xen supports up to two 16550-compatible serial ports. For example: `com1=9600,8n1,0x408,5' maps COM1 to a 9600-baud port, 8 data bits, no parity, 1 stop bit, I/O port base 0x408, IRQ 5. If some configuration options are standard (e.g., I/O base and IRQ), then only a prefix of the full configuration string need be specified. If the baud rate is pre-configured (e.g., by the bootloader), then you can specify `auto' in place of a numeric baud rate.

• console=<specifier list>: Specify the destination for Xen console I/O. This is a comma-separated list of specifiers, for example:

• vga: Use the VGA console and allow keyboard input.
• com1: Use serial port com1.
• com2H: Use serial port com2. Transmitted chars will have the MSB set; received chars must have the MSB set.
• com2L: Use serial port com2. Transmitted chars will have the MSB cleared; received chars must have the MSB cleared.

Page 29: Virtualization

• sync_console: Force synchronous console output. This is useful if your system fails unexpectedly before it has sent all available output to the console. In most cases Xen will automatically enter synchronous mode when an exceptional event occurs, but this option provides a manual fallback.

• conswitch=<switch-char><auto-switch-char>: Specify how to switch serial-console input between Xen and DOM0. The required sequence is CTRL-<switch-char> pressed three times. Specifying the backtick character disables switching. The <auto-switch-char> specifies whether Xen should auto-switch input to DOM0 when it boots: if it is `x' then auto-switching is disabled. Any other value, or omitting the character, enables auto-switching. [NB. The default switch-char is `a'.]

• nmi=xxx: Specify what to do with an NMI parity or I/O error. `nmi=fatal': Xen prints a diagnostic and then hangs. `nmi=dom0': Inform DOM0 of the NMI. `nmi=ignore': Ignore the NMI.

Page 30: Virtualization

• mem=xxx: Set the physical RAM address limit. Any RAM appearing beyond this physical address in the memory map will be ignored. This parameter may be specified with a B, K, M or G suffix, representing bytes, kilobytes, megabytes and gigabytes respectively. The default unit, if no suffix is specified, is kilobytes.

• dom0_mem=xxx: Set the amount of memory to be allocated to domain0. In Xen 3.x the parameter may be specified with a B, K, M or G suffix, representing bytes, kilobytes, megabytes and gigabytes respectively; if no suffix is specified, the parameter defaults to kilobytes. In previous versions of Xen, suffixes were not supported and the value was always interpreted as kilobytes.

• tbuf_size=xxx: Set the size of the per-cpu trace buffers, in pages (default 1). Note that the trace buffers are only enabled in debug builds. Most users can ignore this feature completely.
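• As an illustration of how these options combine, a kernel line in grub.conf might look like the following sketch (the values are examples only):

kernel /boot/xen.gz dom0_mem=262144 noreboot console=com1,vga com1=115200,8n1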

Page 31: Virtualization

• sched=xxx: Select the CPU scheduler Xen should use. The current possibilities are `sedf' (default) and `bvt'.

• apic_verbosity=debug,verbose: Print more detailed information about local APIC and IOAPIC configuration.

• lapic: Force use of the local APIC even when left disabled by a uniprocessor BIOS.

• nolapic: Ignore the local APIC in a uniprocessor system, even if enabled by the BIOS.

• apic=bigsmp,default,es7000,summit: Specify the NUMA platform. This can usually be probed automatically.

• In addition, the following options may be specified on the Xen command line. Since domain 0 shares responsibility for booting the platform, Xen will automatically propagate these options to its command line. These options are taken from Linux's command-line syntax with unchanged semantics.

Page 32: Virtualization

• acpi=off,force,strict,ht,noirq,...: Modify how Xen (and domain 0) parses the BIOS ACPI tables.

• acpi_skip_timer_override: Instruct Xen (and domain 0) to ignore timer-interrupt override instructions specified by the BIOS ACPI tables.

• noapic: Instruct Xen (and domain 0) to ignore any IOAPICs that are present in the system, and instead continue to use the legacy PIC.

• xencons=xxx: Specify the device node to which the Xen virtual console driver is attached. The following options are supported:

`xencons=off': disable the virtual console
`xencons=tty': attach the console to /dev/tty1 (tty0 at boot time)
`xencons=ttyS': attach the console to /dev/ttyS0

• The default is ttyS for dom0 and tty for all other domains.

Page 33: Virtualization

XEN-NETWORKING

• Virtual Ethernet interfaces

• Xen creates, by default, seven pairs of "connected virtual Ethernet interfaces" for use by dom0. Think of each pair as two Ethernet interfaces connected by an internal crossover Ethernet cable: veth0 is connected to vif0.0, veth1 is connected to vif0.1, and so on, up to veth7 -> vif0.7. You can use them by configuring IP and MAC addresses on the veth# end, then attaching the vif0.# end to a bridge, as in the sketch below.

Page 34: Virtualization

XEN-NETWORKING

Page 35: Virtualization

XEN-NETWORKING

• Every time you create a running domU instance, it is assigned a new domain ID number.

• For each new domU, Xen creates a new pair of "connected virtual Ethernet interfaces", with one end of the pair inside the domU and the other end inside dom0. For Linux domUs, the device name seen inside the guest is eth0.

• The other end of that virtual Ethernet interface pair exists within dom0 as the interface vif<id#>.0.

• For example, domU #5's eth0 is attached to vif5.0.

• If you create multiple network interfaces for a domU, its ends will be eth0, eth1, etc., whereas the dom0 ends will be vif<id#>.0, vif<id#>.1, etc.

Page 36: Virtualization

Logical network cards connected between dom0 and dom1:

Page 37: Virtualization

Bridging

Page 38: Virtualization

• network-bridge

• When xend starts up, it runs the network-bridge script, which:
o creates a new bridge named xenbr0
o brings down the "real" Ethernet interface eth0
o copies the IP and MAC addresses of eth0 to the virtual network interface veth0
o renames the real interface eth0 to peth0
o renames the virtual interface veth0 to eth0
o attaches peth0 and vif0.0 to the bridge xenbr0
o brings up the bridge, peth0, eth0 and vif0.0

• It is good to have the physical interface and the dom0 interface separated; you can, for example, set up a firewall on dom0 that does not affect the traffic to the domUs (just for protecting dom0 alone).

• vif-bridge

• When a domU starts up, xend (running in dom0) runs the vif-bridge script, which:
o attaches vif<id#>.0 to xenbr0
o brings up vif<id#>.0
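• After network-bridge has run and one domU (domain ID 1) has started, brctl should show something along these lines (the bridge id value is illustrative):

# brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.feffffffffff       no              peth0
                                                        vif0.0
                                                        vif1.0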

Page 39: Virtualization

• You can change the bridge name from xenbr0 by putting:

(network-script 'network-bridge bridge=mybridge')

in xend-config.sxp and rebooting or restarting xend.

• You can create multiple network interfaces and attach them to different bridges using:

vif=[ 'mac=00:16:3e:70:01:01,bridge=br0', 'mac=00:16:3e:70:02:01,bridge=br1' ]

Page 40: Virtualization

Domain Management Tools

• Command-line Management

• Command-line management tasks are performed using the xm tool. For online help for the available commands, type:

# xm help

• You can also type xm help <command> for more information on a given command.

Page 41: Virtualization

Starting/Stopping a Domain at Boot Time

• You can start or stop running domains at any time. Domain0 waits for all running domains to shut down before restarting.

• You must place the configuration files of the domains you wish to shut down in the /etc/xen/ directory.

• All the domains that you want to start at boot time must be symlinked into /etc/xen/auto, as in the example below.

• chkconfig xendomains on: The chkconfig xendomains on command does not automatically start domains; instead, it will start the domains on the next boot.

• chkconfig xendomains off: Terminates all running Red Hat Virtualization domains. The chkconfig xendomains off command shuts down the domains on the next boot.
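• For example, assuming a guest config file named /etc/xen/myVM (a hypothetical name), auto-starting it comes down to:

# ln -s /etc/xen/myVM /etc/xen/auto/myVM
# chkconfig xendomains on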

Page 42: Virtualization

Xend

o The Xend node control daemon performs system management functions related to virtual machines. It forms a central point of control of virtualized resources, and must be running in order to start and manage virtual machines. Xend must be run as root because it needs access to privileged system management functions.

• Xend can be started on the command line as well, and supports the following set of parameters:

# xend start (start xend, if not already running)
# xend stop (stop xend if already running)
# xend restart (restart xend if running, otherwise start it)
# xend status (indicates xend status by its return code)

Page 43: Virtualization

Logging

• As xend runs, events will be logged to /var/log/xend.log and (less frequently) to /var/log/xend-debug.log. These, along with the standard syslog files, are useful when troubleshooting problems.
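• For example, to follow the xend log while reproducing a problem:

# tail -f /var/log/xend.log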

Page 44: Virtualization

Configuring Xend

• Xend is written in Python. At startup, it reads its configuration information from the file /etc/xen/xend-config.sxp. The Xen installation places an example xend-config.sxp file in the /etc/xen subdirectory which should work for most installations.

Page 45: Virtualization

• An HTTP interface and a Unix domain socket API are available to communicate with Xend. This allows remote users to pass commands to the daemon.

• By default, Xend does not start an HTTP server. It does start a Unix domain socket management server, as the low level utility xm requires it. For support of cross-machine migration, Xend can start a relocation server. This support is not enabled by default for security reasons.

• From the file:

#(xend-http-server no)
(xend-http-server yes)
#(xend-unix-server yes)
#(xend-relocation-server no)
(xend-relocation-server yes)

• Comment or uncomment lines in that file to disable or enable the features you require.

• Connections from remote hosts are disabled by default:

# Address xend should listen on for HTTP connections, if xend-http-server is
# set.
# Specifying 'localhost' prevents remote connections.
# Specifying the empty string '' (the default) allows all connections.
#(xend-address '')
(xend-address localhost)

• It is recommended that if migration support is not needed, the xend-relocation-server parameter value be changed to `no' or commented out.

Page 46: Virtualization

Xm

• The xm tool is the primary tool for managing Xen from the console. The general format of an xm command line is:

# xm command [switches] [arguments] [variables]

# xm help

• This will list the most commonly used commands. The full list can be obtained using xm help --long. You can also type xm help <command> for more information on a given command.

Page 47: Virtualization

• One useful command is:

# xm list

which lists all running domains in rows of the following format:

name domid memory vcpus state cputime

• The meaning of each field is as follows:

• name: The descriptive name of the virtual machine.
• domid: The number of the domain ID this virtual machine is running in.
• memory: Memory size in megabytes.
• vcpus: The number of virtual CPUs this domain has.
• state: The domain state consists of 5 fields: r (running), b (blocked), p (paused), s (shutdown), c (crashed).
• cputime: How much CPU time (in seconds) the domain has used so far.

• The xm list command also supports a long output format when the -l switch is used. This outputs the full details of the running domains in xend's SXP configuration format.
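• For illustration, the default output looks roughly like this (the names, IDs and times here are made up):

# xm list
Name         ID  Mem(MiB)  VCPUs  State   Time(s)
Domain-0      0       256      1  r-----    108.3
myVM          3       128      1  -b----     12.7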

• You can access the console of a particular domain using the xm console command (e.g. # xm console myVM).

Page 48: Virtualization

Configuration Files

• Xen configuration files contain the following standard variables. Unless otherwise stated, configuration items should be enclosed in quotes: see the configuration scripts in /etc/xen/ for concrete examples.

• kernel: Path to the kernel image.
• ramdisk: Path to a ramdisk image (optional).
• memory: Memory size in megabytes.
• vcpus: The number of virtual CPUs.
• console: Port to export the domain console on (default 9600 + domain ID).
• vif: Network interface configuration, e.g. vif = [ 'mac=00:16:3E:00:00:11, bridge=xen-br0', 'bridge=xen-br1' ]
• disk: List of block devices to export to the domain, e.g. disk = [ 'phy:hda1,sda1,r' ]
• dhcp: Set to `dhcp' if you want to use DHCP to configure networking.
• netmask: Manually configured IP netmask.
• gateway: Manually configured IP gateway.
• hostname: Set the hostname for the virtual machine.
• root: Specify the root device parameter on the kernel command line.
• nfs_server: IP address of the NFS server (if any).
• nfs_root: Path of the root filesystem on the NFS server (if any).
• extra: Extra string to append to the kernel command line (if any).
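• Putting several of these variables together, a hypothetical paravirtualized guest config /etc/xen/myVM might look like the following sketch (all names, paths, devices and addresses are examples only):

kernel = "/boot/vmlinuz-2.6-xenU"
memory = 128
name = "myVM"
vcpus = 1
vif = [ 'mac=00:16:3E:00:00:11, bridge=xenbr0' ]
disk = [ 'phy:hda2,sda1,w' ]
root = "/dev/sda1 ro"
extra = "xencons=tty"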

Page 49: Virtualization

Domain Save and Restore

• The administrator of a Xen system may suspend a virtual machine's current state into a disk file in domain 0, allowing it to be resumed at a later time.

• For example, you can suspend a domain called "VM1" to disk using the command:

# xm save VM1 VM1.chk

• This will stop the domain named "VM1" and save its current state into the file VM1.chk.

• To resume execution of this domain, use the xm restore command:

# xm restore VM1.chk

• This will restore the state of the domain and resume its execution.