
VMware vSphere 5.1

Section 1: Introduction to Virtualization Technology

1 Definition of virtualization:

Virtualization is a technology that abstracts physical hardware into software. It allows you to run multiple operating systems as virtual machines on a single physical computer.

2 Why do we virtualize?

Virtualization is implemented primarily to make optimal use of compute resources (i.e., memory and processor), which are usually under-utilized in a traditional physical infrastructure.

3 What are the benefits of virtualization?

Benefits of virtualization are:

● Maximize the utilization of hardware resources (primarily memory, processor, etc.)
● Ability to pool resources from multiple servers into one pool
● Consolidate servers, which helps reduce space and power utilization
● Saves capital and operational costs
● Zero-downtime maintenance
● Reduces administration overhead
● Saves time on repetitive administration tasks
● Dynamic upward scalability of resources and servers as per business needs
● Provides high availability for applications and servers
● Provides security at the hypervisor level and at the server/application level
● Easy backup and restore of virtual machines/data
● Supports a wide range of current and legacy operating systems

4 Components that can be virtualized:
● Memory
● Processor
● Network
● Storage


5 Leading providers of virtualization technology:

● VMware (vSphere, VMware View, ThinApp)
● Microsoft (Hyper-V)
● Citrix (XenServer, XenDesktop, XenApp)

6 Types of virtualization available currently:

a Server virtualization:

● Virtualization of physical servers running enterprise applications (such as SQL Server, Oracle, SAP) into virtual machines that are easy to administer, back up, provision, and keep highly available, while providing performance equal to or better than that obtained in a physical environment.

● Technologies that can be used for server virtualization are: VMware vSphere, Microsoft Hyper-V, Citrix XenServer

b Desktop virtualization:

● Virtualization of end-user physical desktops into virtual desktops, which provide the same user experience as a physical environment and can easily be deployed on demand

● Technologies used for desktop virtualization: VMware View, Citrix XenDesktop

c Application virtualization

● Virtualization of an application makes the application independent of the operating system.
● Helps in running applications on older/legacy operating systems, faster deployment to end-user desktops, easier manageability, and less interaction with the operating system, ensuring the underlying operating system is not affected by changes or updates to the application

● Technologies which enable application virtualization are: VMware ThinApp, Citrix XenApp

d Cloud computing

● Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet)


7 Types of Cloud computing:

● Public Cloud

a In this model, the service provider delivers applications, storage, etc. as a service to the end user for a price.
b Key providers are: Amazon, Microsoft, Google

● Private Cloud

a A private cloud is cloud infrastructure set up and operated solely by and for a single organization, whether managed internally or by a third party and hosted internally or externally.

● Hybrid Cloud

a A hybrid cloud is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models.

b Hybrid clouds lack the flexibility, security and certainty of in-house applications

● Community Cloud

a Community cloud shares infrastructure between several organizations from a specific community with common concerns

8 Types of virtualization products based on installation:

a Bare-metal: The hypervisor (virtualization OS) is installed directly on the hardware

b Hosted: Virtualization application is installed over an existing operating system


Section 2: Introduction to VMware Organization

History of VMware:

● Founded in 1998 by Diane Greene, Mendel Rosenblum, Scott Devine, Edward Wang and Edouard Bugnion. Greene and Rosenblum, who are married, first met while at the University of California, Berkeley.

● Headquarters: Palo Alto, CA
● VMware operated throughout 1998 with roughly 20 employees by the end of that year. The company was launched officially in February 1999.
● First product launched: VMware Workstation in 1999
● In 2001, the company entered server virtualization with GSX Server (hosted platform) and VMware ESX Server (bare-metal)
● Virtual Center launched in 2003
● 64-bit OS support in 2004
● Revenue: $4.61 billion
● Acquired by EMC in 2004; now operates as a separate software subsidiary

Few products in offer from VMware:

○ VMware Workstation/Fusion: Hosted version, installed on top of an existing operating system.
○ VMware ESX/ESXi: Bare-metal installation, installed directly on hardware like other operating systems.
○ vCenter Server: Management application for ESX/ESXi hosts.
○ vCloud Director: Cloud solution from VMware.
○ VMware View: Desktop virtualization solution from VMware.
○ VMware Site Recovery Manager: Disaster recovery solution.
○ VMware vSphere Data Protection: Backup solution.
○ VMware Capacity Planner: Estimates consolidation ratios when moving existing physical infrastructure to virtual.

Section 3: Differences between server virtualization products:


1 Hypervisor comparisons:

2 Management solution comparison:


3 Business continuity options and storage comparison:


4 Network solution comparison:


Section 4: Career opportunities and certifications in VMware:


Certifications:

1 Entry level certification:

● VCP5-DV (VMware certified professional in datacenter virtualization)
● VCP5-DT (VMware certified professional in desktop virtualization)
● VCP-Cloud (VMware certified professional in cloud computing)

2 Mid-level certification:

Administration and design level certifications:

● VCAP5-DCA (VMware certified advanced professional in vSphere administration)
● VCAP5-DT (VMware certified advanced professional in desktop virtualization)
● VCAP5-CID (VMware certified advanced professional in cloud computing)
● VCAP5-DCD (VMware certified advanced professional in datacenter design)
● VCAP-CID (VMware certified advanced professional in cloud infrastructure design)
● VCAP-DTD (VMware certified advanced professional in desktop design)

3 Expert-level certifications:
● VCDX5-DV (VMware certified design expert in datacenter virtualization)
● VCDX-DT (VMware certified design expert in desktop virtualization)
● VCDX-Cloud (VMware certified design expert in cloud computing)

Career paths:

1 VMware support professionals
2 VMware administrators (vSphere administrators, View administrators, cloud administrators)
3 VMware consultants (vSphere consultants, View consultants, cloud consultants)
4 VMware architects
5 VMware certified instructors

Section 5: Installing ESXi


Requirements to install ESXi

1 Processor – 64-bit x86 CPU:
● Requires at least two cores
● ESXi supports a broad range of x64 multicore processors

2 Memory – 2GB RAM minimum
3 Ethernet controllers (one or more):

● Gigabit and 10 Gigabit Ethernet controllers are supported.
● For best performance and security, use separate Ethernet controllers for the management network and the virtual machine networks.

4 Disk storage:

● A SCSI adapter, Fibre Channel adapter, converged network adapter, iSCSI adapter, or internal RAID controller

● A SCSI disk or Serial Attached SCSI (SAS) disk
● Fibre Channel or iSCSI LUNs

Types of ESXi available:

1 ESXi Installable – Version available for download from the VMware website
2 ESXi Embedded – Version provided by OEMs (HP/Dell/IBM), preinstalled on their servers

Types of installation supported:

1 Local installation
2 Boot from SAN

Installation Methods:

1 Using CD/DVD/ISO
2 Scripted installation
3 Auto Deploy


ESXi Architecture:

Basic ESXi configuration:

● Accessing the DCUI (Direct Console User Interface): Press F2
● Logging into the ESXi shell: Alt + F1
● To check live logging: Alt + F12
● Default user account: root
● To enable shell access and SSH: From the DCUI, select Troubleshooting Options
● The ESXi shell looks similar to a Linux/Unix shell and file system
● Important file locations in ESXi:

1 Configuration files for VMware: /etc/vmware
2 Log file location: /var/log
3 Boot files: /bootbank and /altbootbank
4 ESXi administrative commands location: /sbin
5 Location used to save virtual machines: /vmfs
6 ESXi library files: /lib

● Editor used to edit files: vi
● Main agents for host connectivity:

1 hostd – Used to connect to the host directly with the vSphere Client and perform tasks on the host
2 vpxa – Used to connect to the host through vCenter and perform all vCenter-related tasks

● Important log files (example commands follow this list):
1 vmkernel.log: Saves information about the changes done at the kernel level
2 hostd.log: Saves information about the hostd agent and the tasks initiated and performed by it
3 vmksummary: Stores information on the top 3 processes running on the host; updated every hour
4 vpxa.log: Log for the vpxa agent
5 sysboot.log: Information on all events that occur during boot of the ESXi host
6 shell.log: Stores all the commands typed at the ESXi shell prompt

● Scratch partition: Used to persist core dump files and log files across reboots of the ESXi server
● From the ESXi host we can see only real-time performance charts; for historical charts (weekly, monthly, yearly) we need vCenter
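
Once the shell is enabled, the agents and log files listed above can be checked directly from the command line. A minimal illustrative sketch (standard ESXi 5.x commands; output formats vary by build):

# check that the host management agents are running
/etc/init.d/hostd status
/etc/init.d/vpxa status

# follow the kernel and hostd logs mentioned above
tail -f /var/log/vmkernel.log
tail -f /var/log/hostd.log

# list the virtual machines registered on this host
vim-cmd vmsvc/getallvms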


Section 6: Installing Virtual Center

Requirements for installing vCenter Server on Windows:

Hardware requirements (can be installed on a physical or virtual machine):

● Number of CPUs – Two 64-bit CPUs or one 64-bit dual-core processor

● Processor – 2.0GHz or higher Intel or AMD processor*

● Memory – 4GB RAM minimum*

● Disk storage – 4GB minimum*

● Networking – Gigabit connection recommended

* Higher if the database, SSO, and Inventory Service run on the same machine

Software requirements:

● 64-bit operating system is required.

● Database ( SQL/Oracle/IBM DB2)

● The virtual or physical machine must be part of a domain and its time needs to be in sync

● Always use static IP address

Supported databases:

● Microsoft SQL Server 2005 (SP3 required, SP4 recommended)

● Microsoft SQL Server 2008

● Microsoft SQL Server 2008 R2 Express (included with the vCenter Server installer)

● Oracle 10g R2 and 11g

● IBM DB2 9.5 and 9.7 (included with the vCenter Server virtual appliance)

Methods to install Virtual center:

1 Using the Windows-based installer (VIM setup file)


2 Deploying the vCenter Server Appliance (a preconfigured VM with vCenter Server and a database already installed)

Virtual center architecture:

Virtual center communication with ESXi hosts:

Components/Dependencies for virtual center to be installed:

1 Single Sign-On: Single Sign-On is used to enable one-time login to all the plug-ins/components that interact with vCenter once we log in to vCenter. It can be installed in standalone mode or multi-node mode for redundancy.

2 Inventory service: Inventory service is used to maintain the virtual center inventory information and perform search operations

3 Database: The database is used to store information about all the vCenter inventory objects (virtual machines, datacenters, clusters, performance charts, etc.).

Ways to install virtual center server components:

1 Simple install: All the mandatory components (Single Sign-On, Inventory Service, database, and vCenter Server) are installed automatically.


2 Individual component install: We install the components manually, one by one, in order (Single Sign-On, Inventory Service, database, vCenter Server).

Types of distributed component install:

1 Install all components on a single virtual or physical machine
2 Install each component on an individual virtual or physical machine

Modes of operation of virtual center server:

1 Standalone: Used for a single instance of vCenter Server
2 Linked mode: Used to link multiple vCenter Server instances so that all of them can be managed from a single screen

Virtual Center Virtual Appliance:

1 It is a preconfigured SUSE Linux Enterprise Server 11 virtual machine that includes the pre-packaged 64-bit vCenter Server application along with the other mandatory vCenter Server components.

2 It includes a default database (IBM DB2)

3 The only external database supported is Oracle

Requirements for deploying virtual center appliance:

1 Disk space: 7GB minimum, 82GB maximum
2 Processors: 2 vCPUs
3 Network card

Default plug-ins in virtual center are:


1 vCenter Hardware Status – Used to monitor the hardware of the server on which ESXi is installed
2 vCenter Service Status – Used to monitor services related to vCenter
3 VMware vCenter Storage Monitoring – Used to monitor storage presented to the ESXi hosts

A few optional plug-ins are:

1 Update Manager
2 Site Recovery Manager
3 vSphere Data Protection

Single Sign-on

Single Sign-On enables users to sign in to vCenter once and automatically be logged in to the other applications/plug-ins that interact with vCenter Server, i.e., once you are logged in to vCenter you are by default logged in to the other plug-ins as well.

Single Sign-On can get user and group details from the following identity sources:

1 Active Directory
2 OpenLDAP v2.4 and later
3 Local users
4 NIS

We can configure multiple identity sources for a single Single Sign-On installation.

New Tabs seen on the host when it is connected to virtual center:

1 Alarms
2 Maps
3 Storage Views
4 Hardware Status


Section 7: Storage

The primary file system that the VMkernel uses is the VMware Virtual Machine File System (VMFS). VMFS is a cluster file system designed and optimized to support large files such as virtual disks and swap files. The VMkernel also supports the storage of virtual disks on NFS file systems.

Types of Storage supported in ESXi

1 Local
2 Fibre Channel
3 iSCSI
4 FCoE

Storage technologies, protocols and interfaces supported:

Features supported by various storages:


Protocol used in Fibre channel:

FCP: Fibre Channel Protocol. It encapsulates SCSI commands in Fibre Channel frames and sends them over the Fibre Channel cables.

Fibre channel components:

1 FC HBA (Host Bus Adapter – used on the ESXi host end)
2 FC switch (special switch used for Fibre Channel connectivity)
3 Storage adapters (present on the storage array end)
4 Fibre Channel cables (front-end connectivity)

Fibre channel naming convention:

1 WWNN: World Wide Node Name (unique name assigned to each FC HBA)
2 WWPN: World Wide Port Name (unique name assigned to each port on a storage processor)

Zoning: Done on the FC switch to make sure that only specified HBAs can connect to the specified storage processors.

Masking: Done on the storage array end to make sure that only certain LUNs are visible to the HBAs.

Fibre channel Storage array types:

1 Active-Active: LUNs are accessible through all the ports on all the storage processors at any given point in time
2 Active-Passive: LUNs are visible through the ports of only one SP at a given point in time
3 Asymmetric storage system: Access is provided on a per-port basis; some paths are primary and some are secondary
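
As an illustration, the FC HBAs visible to an ESXi 5.x host and the paths they present can be listed from the ESXi shell; the commands below are standard esxcli namespaces, shown here only as a sketch:

# list storage adapters; FC HBAs appear as vmhbaN with their driver and description
esxcli storage core adapter list

# list every path from the host to its LUNs, including adapter and target identifiers
esxcli storage core path list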

How virtual machines are accessed on FC datastores:


ESXi stores a virtual machine's disk files within a VMFS datastore that resides on a SAN storage device. When virtual machine guest operating systems issue SCSI commands to their virtual disks, the SCSI virtualization layer translates these commands to VMFS file operations.

When a virtual machine interacts with its virtual disk stored on a SAN, the following process takes place:

1 When the guest operating system in a virtual machine reads or writes to a SCSI disk, it issues SCSI commands to the virtual disk.
2 Device drivers in the virtual machine's operating system communicate with the virtual SCSI controllers.
3 The virtual SCSI controller forwards the command to the VMkernel.
4 The VMkernel performs the following tasks:
a Locates the file in the VMFS volume that corresponds to the guest virtual machine disk.
b Maps the requests for the blocks on the virtual disk to blocks on the appropriate physical device.
c Sends the modified I/O request from the device driver in the VMkernel to the physical HBA.
5 The physical HBA performs the following tasks:
a Packages the I/O request according to the rules of the FC protocol.
b Transmits the request to the SAN.
6 Depending on the port the HBA uses to connect to the fabric, one of the SAN switches receives the request and routes it to the storage device that the host wants to access.

iSCSI storage:

An iSCSI storage system uses Ethernet connections between the hosts and the storage array. It sends SCSI commands encapsulated in TCP/IP packets over Ethernet.

Components of iSCSI

1 iSCSI HBA / network card that supports iSCSI offload
2 Network switches/routers
3 Ethernet cables
4 Storage processors on the storage array end

iSCSI naming convention:

1 IQN or EUI: iSCSI Qualified Name / Extended Unique Identifier

Format of an IQN: iqn.1998-01.com.vmware:localhost

1998-01: year and month when the naming authority was established
com.vmware: naming authority (the reverse of the domain name)
localhost: a unique string, typically the host name
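
For reference, the software iSCSI adapter can be enabled and its IQN checked from the ESXi shell. This is only a sketch; target discovery addresses and CHAP settings are environment-specific and omitted here:

# enable the software iSCSI adapter built into the VMkernel
esxcli iscsi software set --enabled=true

# list iSCSI adapters on the host along with their IQNs
esxcli iscsi adapter list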

Types of iSCSI initiators:


1 Hardware-based iSCSI
a Independent: e.g., QLogic cards that have full iSCSI offload capability
b Dependent: e.g., Broadcom NICs that offload iSCSI processing but depend on the VMkernel for networking

2 Software-based iSCSI: a software adapter built into the VMkernel

iSCSI storage system types:

1 Active-Active
2 Active-Passive
3 Asymmetric

How virtual machines are accessed on iSCSI datastores:

When a virtual machine interacts with its virtual disk stored on a SAN, the following process takes place:

1 When the guest operating system in a virtual machine reads or writes to a SCSI disk, it issues SCSI commands to the virtual disk.
2 Device drivers in the virtual machine's operating system communicate with the virtual SCSI controllers.
3 The virtual SCSI controller forwards the command to the VMkernel.
4 The VMkernel performs the following tasks:
a Locates the file, which corresponds to the guest virtual machine disk, in the VMFS volume.
b Maps the requests for the blocks on the virtual disk to blocks on the appropriate physical device.
c Sends the modified I/O request from the device driver in the VMkernel to the iSCSI initiator (hardware or software).

5 If the iSCSI initiator is a hardware iSCSI adapter (either independent or dependent), the adapter performs the following tasks:
a Encapsulates I/O requests into iSCSI Protocol Data Units (PDUs).
b Encapsulates iSCSI PDUs into TCP/IP packets.
c Sends IP packets over Ethernet to the iSCSI storage system.

6 If the iSCSI initiator is a software iSCSI adapter, the following takes place:
a The iSCSI initiator encapsulates I/O requests into iSCSI PDUs.
b The initiator sends iSCSI PDUs through TCP/IP connections.
c The VMkernel TCP/IP stack relays TCP/IP packets to a physical NIC.
d The physical NIC sends IP packets over Ethernet to the iSCSI storage system.

7 Depending on which port the iSCSI initiator uses to connect to the network, Ethernet switches and routers carry the request to the storage device that the host wants to access.


How VMFS5 Differs from VMFS3

VMFS5 provides many improvements in scalability and performance over the previous version.

VMFS5 has the following improvements:

● Greater than 2TB storage devices for each VMFS extent.
● Increased resource limits such as file descriptors.
● Standard 1MB file system block size with support for 2TB virtual disks.
● Greater than 2TB disk size for RDMs in physical compatibility mode.
● Support for small files of 1KB.
● With ESXi 5.1, any file located on a VMFS5 datastore, new or upgraded from VMFS3, can be opened in shared mode by a maximum of 32 hosts. VMFS3 continues to support 8 hosts or fewer for file sharing. This affects VMware products that use linked clones, such as View Manager.

1 Storage devices that your host supports can use either the master boot record (MBR) format or the GUID partition table (GPT) format.

2 With ESXi 5.0 and later, if you create a new VMFS5 datastore, the device is formatted with GPT. The GPT format enables you to create datastores larger than 2TB and up to 64TB for a single extent.

3 With VMFS5, you can have up to 256 VMFS datastores per host, each with a maximum size of 64TB. The required minimum size for a VMFS datastore is 1.3GB; however, the recommended minimum size is 2GB.
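
The VMFS version, block size, and capacity of a datastore can be confirmed from the ESXi shell. In the sketch below, datastore1 is a placeholder datastore name:

# list mounted file systems with their type (VMFS-3/VMFS-5), UUID, and size
esxcli storage filesystem list

# detailed view of a single datastore: VMFS version, block size, capacity, free space
vmkfstools -Ph /vmfs/volumes/datastore1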

VMFS Locking Mechanism

1 SCSI reservations: Used on storage that does not support hardware acceleration. The lock is on the entire datastore.
2 Atomic test and set (ATS): Used on storage that supports hardware acceleration. The lock is only on the specific virtual machine files.
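
Whether a device supports the ATS primitive (hardware acceleration) can be checked per device from the ESXi shell; the device identifier below is a placeholder:

# show VAAI primitive support (ATS, Clone, Zero, Delete) for one device
esxcli storage core device vaai status get --device naa.600xxxxxxxxxxxxx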

Upgrading the datastore format from VMFS3 to VMFS5:

1 The upgrade from VMFS3 to VMFS5 can be done on the fly while virtual machines are powered on
2 It is a one-way process; no downgrade is available
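
Assuming the volume label below is a placeholder, the in-place upgrade can be started from the ESXi shell if your build provides the esxcli storage vmfs upgrade namespace (the same operation is also available from the vSphere Client on the datastore's configuration tab):

# upgrade a mounted VMFS3 volume in place to VMFS5 (one-way; VMs can stay powered on)
esxcli storage vmfs upgrade --volume-label=datastore1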

Differences between a newly created VMFS5 datastore and one upgraded from VMFS3 to VMFS5:


RAW Device Mapping:

Raw device mapping (RDM) provides a mechanism for a virtual machine to have direct access to a LUN on the physical storage subsystem (Fibre Channel or iSCSI only).

An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. The RDM contains metadata for managing and redirecting disk access to the physical device.


The file gives you some of the advantages of direct access to a physical device while keeping some advantages of a virtual disk in VMFS. As a result, it merges VMFS manageability with raw device access.

Two compatibility modes are available for RDMs:

Virtual compatibility mode allows an RDM to act exactly like a virtual disk file, including the use of snapshots.

Physical compatibility mode allows direct access to the SCSI device for those applications that need lower-level control.

RDM Limitations:

If you are using the RDM in physical compatibility mode, you cannot use a snapshot with the disk. Physical compatibility mode allows the virtual machine to manage its own storage-based snapshot or mirroring operations.

Virtual machine snapshots are available for RDMs with virtual compatibility mode.

You cannot map to a disk partition. RDMs require the mapped device to be a whole LUN.

If you use vMotion to migrate virtual machines with RDMs, make sure to maintain consistent LUN IDs for RDMs across all participating ESXi hosts.
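
RDM mapping files can also be created from the ESXi shell with vmkfstools; the device identifier and paths below are placeholders:

# virtual compatibility mode RDM (snapshots supported)
vmkfstools -r /vmfs/devices/disks/naa.600xxxxxxxxxxxxx /vmfs/volumes/datastore1/vm1/vm1_rdm.vmdk

# physical (pass-through) compatibility mode RDM (no snapshots)
vmkfstools -z /vmfs/devices/disks/naa.600xxxxxxxxxxxxx /vmfs/volumes/datastore1/vm1/vm1_rdmp.vmdk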

Multipathing:

Multipathing is primarily used to maintain consistent connectivity between the storage and the ESXi server, and it also provides load balancing.

To manage storage multipathing, ESXi uses a collection of Storage APIs, also called the Pluggable Storage Architecture (PSA).


The VMkernel multipathing plug-in that ESXi provides by default is the VMware Native Multipathing Plug-In (NMP). The NMP is an extensible module that manages sub plug-ins. There are two types of NMP sub plug-ins: Storage Array Type Plug-Ins (SATPs) and Path Selection Plug-Ins (PSPs). SATPs and PSPs can be built in and provided by VMware, or can be provided by a third party.

When coordinating the VMware NMP and any installed third-party MPPs, the PSA performs the following tasks:

● Loads and unloads multipathing plug-ins.
● Hides virtual machine specifics from a particular plug-in.
● Routes I/O requests for a specific logical device to the MPP managing that device.
● Handles I/O queueing to the logical devices.
● Implements logical device bandwidth sharing between virtual machines.
● Handles I/O queueing to the physical storage HBAs.
● Handles physical path discovery and removal.
● Provides logical device and physical path I/O statistics.
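
The built-in SATPs and PSPs managed by the NMP can be listed from the ESXi shell, for example:

# list Storage Array Type Plug-Ins and the default PSP associated with each
esxcli storage nmp satp list

# list the available Path Selection Plug-Ins
esxcli storage nmp psp list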

The multipathing modules perform the following operations:

● Manage physical path claiming and unclaiming.
● Manage creation, registration, and deregistration of logical devices.
● Associate physical paths with logical devices.
● Support path failure detection and remediation.
● Process I/O requests to logical devices:
  Select an optimal physical path for the request.
  Depending on the storage device, perform specific actions necessary to handle path failures and I/O command retries.
● Support management tasks, such as reset of logical devices.

Path policies:


Most Recently Used (MRU): The host selects the path that it used most recently. When the path becomes unavailable, the host selects an alternative path. The host does not revert to the original path when that path becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for most active-passive storage devices.

Fixed: The host uses the designated preferred path, if it has been configured. Otherwise, it selects the first working path discovered at system boot time. If you want the host to use a particular preferred path, specify it manually. Fixed is the default policy for most active-active storage devices.

Round Robin: The host uses an automatic path selection algorithm rotating through all active paths when connecting to active-passive arrays, or through all available paths when connecting to active-active arrays
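
The policy in use per device, and a change to Round Robin for a single device, can be done from the ESXi shell as sketched below (the naa identifier is a placeholder; array vendors publish their own recommended policies):

# show the SATP and current path selection policy for every device
esxcli storage nmp device list

# switch one device to the Round Robin policy
esxcli storage nmp device set --device naa.600xxxxxxxxxxxxx --psp VMW_PSP_RR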

Types of virtual disks:

Thick Provision Lazy Zeroed:

Creates a virtual disk in a default thick format. Space required for the virtual disk is allocated when the disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine. Virtual machines do not read stale data from the physical device

Thick Provision Eager Zeroed:

A type of thick virtual disk that supports clustering features such as Fault Tolerance. Space required for the virtual disk is allocated at creation time. In contrast to the thick provision lazy zeroed format, the data remaining on the physical device is zeroed out when the virtual disk is created.

Thin Provision:

Use this format to save storage space. For the thin disk, you provision as much datastore space as the disk would require based on the value that you enter for the virtual disk size. However, the thin disk starts small and at first, uses only as much datastore space as the disk needs for its initial operations.

If physical storage space is exhausted and the thin provisioned disk cannot grow, the virtual machine becomes unusable.
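
For illustration, virtual disks of each format can be created from the ESXi shell with vmkfstools (paths and sizes are placeholders; disks are normally created through the vSphere Client when adding hardware to a VM):

# thin provisioned 20GB virtual disk
vmkfstools -c 20G -d thin /vmfs/volumes/datastore1/vm1/vm1_data.vmdk

# thick provision lazy zeroed (the default "zeroedthick" format)
vmkfstools -c 20G -d zeroedthick /vmfs/volumes/datastore1/vm1/vm1_data2.vmdk

# thick provision eager zeroed (required for features such as Fault Tolerance)
vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1_quorum.vmdk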


Section 8: High Availability

vSphere HA is a high availability solution from VMware that reduces downtime for virtual machines. vSphere HA provides availability at the virtual machine, guest operating system, and application levels.

High availability is achieved in different ways for various components of virtual infrastructure:

1 Networking: NIC teaming, multiple VMkernel port groups
2 Storage: Multipathing
3 Virtual machines: vMotion, HA, FT

vSphere HA leverages multiple ESXi hosts configured as a cluster to provide rapid recovery from outages and cost-effective high availability for applications running in virtual machines.

vSphere HA protects application availability in the following ways:

● It protects against a server failure by restarting the virtual machines on other hosts within the cluster.
● It protects against application failure by continuously monitoring a virtual machine and resetting it in the event that a failure is detected.

Advantages of using VMware HA over traditional Failover solutions:

● Minimal setup
● Reduced hardware costs
● Increased application availability
● DRS and vMotion integration

Requirements for enabling HA:

1 vCenter Server
2 Creation of a cluster
3 An IP address (usually a switch/router) for the isolation check
4 Minimum of two hosts with static IPs
5 Required license
6 At least one management network for sending the HA heartbeats
7 Shared storage and correct network configuration on all hosts
8 For virtual machine monitoring, VMware Tools needs to be installed on all the virtual machines


Types of heartbeating used with HA:

1 Network (VMkernel port group)
2 Datastore (minimum 2 shared datastores)

Working of HA:

Once we enable VMware HA on the cluster, the hosts that are part of the cluster and the virtual machines on them are protected by HA. When the hosts are added, one of them is elected as the master host and the rest become slaves. The master host monitors the slave hosts and the virtual machines; in the case of a host failure or a guest operating system failure, the affected virtual machines are restarted on another host or reset, respectively.

The host with access to the highest number of datastores is elected as master. The master host has the roles and responsibilities below:

● Monitoring the state of slave hosts. If a slave host fails or becomes unreachable, the master host identifies which virtual machines need to be restarted.

● Monitoring the power state of all protected virtual machines. If one virtual machine fails, the master host ensures that it is restarted. Using a local placement engine, the master host also determines where the restart should be done.

● Managing the lists of cluster hosts and protected virtual machines

● Acting as vCenter Server management interface to the cluster and reporting the cluster health state.

● Orchestrate restarts of protected virtual machines


Once HA is enabled, an auto-generated file with the list of protected virtual machines (the powered-on virtual machines in the HA-enabled cluster) is created on the selected heartbeat datastores, so that the master host knows which virtual machines need to be restarted when HA triggers.

Types of host failures:

There are 3 types of host failures. They are:

1 Host stops functioning (freeze/hung state)
2 Host becomes network isolated
3 Host loses network connectivity with the master host


The master host monitors the liveness of the slave hosts in the cluster. This communication is done through the exchange of network heartbeats every second. When the master host stops receiving these heartbeats from a slave host, it checks for host liveness before declaring the host to have failed. The liveness check that the master host performs is to determine whether the slave host is exchanging heartbeats with one of the datastores. Also, the master host checks whether the host responds to ICMP pings sent to its management IP addresses.

If the master host is unable to communicate directly with the agent on a slave host, the slave host does not respond to ICMP pings, and the agent is not issuing heartbeats, the slave host is considered to have failed. The host's virtual machines are restarted on alternate hosts. If such a slave host is exchanging heartbeats with a datastore, the master host assumes that it is in a network partition or is network isolated, and so continues to monitor the host and its virtual machines.

Host network isolation occurs when a host is still running, but it can no longer observe traffic from vSphere HA agents on the management network. If a host stops observing this traffic, it attempts to ping the cluster isolation addresses. If this also fails, the host declares itself as isolated from the network.

The master host monitors the virtual machines that are running on an isolated host; if it observes that they power off, and it is responsible for those virtual machines, it restarts them.

Determining Responses to Host Issues

If a host fails and its virtual machines need to be restarted, you can control the order in which this is done with the VM restart priority setting. You can also configure how vSphere HA responds if hosts lose management network connectivity with other hosts by using the host isolation response setting.

VM Restart Priority: This setting can be set in order to make sure that HA restarts the machines based on the priority assigned to them and in a specific order.

The values for this setting are: Disabled, Low, Medium (the default), and High. If you select Disabled, vSphere HA is disabled for the virtual machine, which means that it is not restarted on other ESXi hosts if its host fails.

Host Isolation Response

Host isolation response determines what happens when a host in a vSphere HA cluster loses its management network connections but continues to run. You can use the isolation response to have vSphere HA power off virtual machines that are running on an isolated host and restart them on a non-isolated host. Host isolation responses require that Host Monitoring Status is enabled.

The three isolation responses are:

1 Leave Powered On – no response at all, leave the VMs powered on when there’s a network isolation

2 Shutdown VM – guest initiated shutdown, clean shutdown

3 Power Off VM – hard stop, equivalent to power cord being pulled out

When to use “Leave Powered On”

This is the default option and more than likely the one that fits your organization best, as it will work in most scenarios. When you have a network isolation event but retain access to your datastores, HA will not respond and your virtual machines will keep running. If both your network and storage environments are isolated, HA will recognize this and power off the VMs when it recognizes that the locks on the VMDKs of those VMs have been acquired by another host, to avoid a split-brain scenario. Please note that in order to recognize that the lock has been acquired by another host, the “isolated” host will need to be able to access the device again. (The power-off won't happen before the storage has returned!)

When to use “Shutdown VM”

It is recommended to use this option if it is likely that a host will retain access to the VM datastores when it becomes isolated and you wish HA to restart the VM when the isolation occurs. In this scenario, using shutdown allows the guest OS to shut down in an orderly manner. Further, since datastore connectivity is likely retained during the isolation, it is unlikely that HA will shut down the VM unless there is a master available to restart it. Note that there is a timeout period of 5 minutes by default. If the VM has not been gracefully shut down after 5 minutes, a “Power Off” will be initiated.

When to use “Power Off VM”

It is recommended to use this option if it is likely that a host will lose access to the VM datastores when it becomes isolated and you want HA to immediately restart the VM when this condition occurs. This is a hard stop, in contrast to “Shutdown VM”, which is a guest-initiated shutdown and could take up to 5 minutes.

As stated, Leave Powered On is the default and fits most organizations as it prevents unnecessary responses to a Network Isolation but still takes action when the connection to your storage environment is lost at the same time.

VM and Application Monitoring

VM Monitoring restarts individual virtual machines if their VMware Tools heartbeats are not received within a set time. Similarly, Application Monitoring can restart a virtual machine if the heartbeats for an application it is running are not received. You can enable these features and configure the sensitivity with which vSphere HA monitors non-responsiveness.

When you enable VM Monitoring, the VM Monitoring service (using VMware Tools) evaluates whether each virtual machine in the cluster is running by checking for regular heartbeats and I/O activity from the VMware Tools process running inside the guest. If no heartbeats or I/O activity are received, this is most likely because the guest operating system has failed or VMware Tools is not being allocated any time to complete tasks. In such a case, the VM Monitoring service determines that the virtual machine has failed and the virtual machine is rebooted to restore service.

Network Partitions

When a management network failure occurs for a vSphere HA cluster, a subset of the cluster's hosts might be unable to communicate over the management network with the other hosts. Multiple partitions can occur in a cluster.

A partitioned cluster leads to degraded virtual machine protection and cluster management functionality. Correct the partitioned cluster as soon as possible.

Datastore Heartbeating

When the master host in a vSphere HA cluster cannot communicate with a slave host over the management network, the master host uses datastore heartbeating to determine whether the slave host has failed, is in a network partition, or is network isolated. If the slave host has stopped datastore heartbeating, it is considered to have failed and its virtual machines are restarted elsewhere.

Does the datastore heartbeat mechanism play any role in the process of declaring a host isolated? No; from the perspective of the host that is isolated, it does not. The datastore heartbeat mechanism is used by the master to determine the state of the unresponsive host: it allows the master to determine whether the host that stopped sending network heartbeats is isolated or has failed completely. Depending on the determined state, the master will take the appropriate action.


To summarize, the datastore heartbeat mechanism has been introduced to allow the master to identify the state of hosts; it is not used by the “isolated host” to prevent isolation.

Port used for HA agent-to-agent communication: 8182 (TCP and UDP)

Log location for HA agent:

For ESXi 5.x hosts, vSphere HA writes to syslog only by default, so logs are placed where syslog is configured to put them. The log file names for vSphere HA are prepended with fdm, fault domain manager, which is a service of vSphere HA.
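
On an ESXi 5.x host, the FDM agent's log and its listening port can be checked from the shell; a small sketch (output formats vary by build):

# follow the vSphere HA (fdm) agent log
tail -f /var/log/fdm.log

# confirm the agent is listening on the HA agent-to-agent port (8182)
esxcli network ip connection list | grep 8182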

vSphere HA Admission Control

vCenter Server uses admission control to ensure that sufficient resources are available in a cluster to provide failover protection and to ensure that virtual machine resource reservations are respected.

Three types of admission control are available.

Host: Ensures that a host has sufficient resources to satisfy the reservations of all virtual machines running on it.

Resource Pool: Ensures that a resource pool has sufficient resources to satisfy the reservations, shares, and limits of all virtual machines associated with it.

vSphere HA: Ensures that sufficient resources in the cluster are reserved for virtual machine recovery in the event of host failure.

Slot Size Calculation

Slot size is comprised of two components, CPU and memory.

vSphere HA calculates the CPU component by obtaining the CPU reservation of each powered-on virtual machine and selecting the largest value. If you have not specified a CPU reservation for a virtual machine, it is assigned a default value of 32MHz. You can change this value by using the das.vmcpuminmhz advanced attribute.

vSphere HA calculates the memory component by obtaining the memory reservation, plus memory overhead, of each powered-on virtual machine and selecting the largest value. There is no default value for the memory reservation.

If your cluster contains any virtual machines that have much larger reservations than the others, they will distort slot size calculation. To avoid this, you can specify an upper bound for the CPU or memory component of the slot size by using the das.slotcpuinmhz or das.slotmeminmb advanced attributes, respectively
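
As a worked illustration of the slot-size calculation (the figures below are assumed, not from the source): if the largest CPU reservation among powered-on virtual machines is 2GHz and the largest memory reservation plus overhead is 2GB, the slot size is 2GHz / 2GB. A host with 12GHz and 30GB of available capacity then provides min(12GHz/2GHz, 30GB/2GB) = 6 slots, and vSphere HA uses the per-host slot counts to decide whether the configured number of host failures can be tolerated.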

Time Duration of each task of HA:

● T0 – Isolation of the host (slave)

● T10s – Slave enters “election state”

● T25s – Slave elects itself as master

● T25s – Slave pings “isolation addresses”

● T30s – Slave declares itself isolated


● T60s – Slave “triggers” isolation response

Isolation times of Master and Slave HA hosts:

Isolation of a slave

● T0 – Isolation of the host (slave)

● T10s – Slave enters “election state”

● T25s – Slave elects itself as master

● T25s – Slave pings “isolation addresses”

● T30s – Slave declares itself isolated and “triggers” isolation response

Isolation of a master

● T0 – Isolation of the host (master)

● T0 – Master pings “isolation addresses”

● T5s – Master declares itself isolated and “triggers” isolation response

Fault Tolerance:

Fault Tolerance is built on the ESXi host platform (using the VMware vLockstep technology), and it provides continuous availability by having identical virtual machines run in virtual lockstep on separate hosts.

Cluster requirements for Fault Tolerance:

1 At least two FT-certified hosts running the same Fault Tolerance version or host build number
2 ESXi hosts with access to the same virtual machine datastores and networks
3 Fault Tolerance logging and vMotion networking configured
4 vSphere HA cluster created and enabled

Host requirements for Fault Tolerance:

1 Hosts must have processors from the FT-compatible processor group
2 Hosts must be licensed for Fault Tolerance
3 Hosts must be certified for Fault Tolerance
4 The configuration for each host must have Hardware Virtualization (HV) enabled in the BIOS

vSphere features which are not supported with Fault Tolerance:

1 Snapshots
2 Storage vMotion
3 Linked clones
4 Virtual machines


Features and devices not supported on FT enabled virtual machines:

1 SMP
2 RDM
3 CD-ROM and floppy drives backed by a physical or remote device
4 Paravirtualized guests
5 NPIV
6 NIC passthrough
7 Virtual disks in thin provisioned or thick lazy zeroed format
8 Serial or parallel ports
9 Virtual EFI BIOS
10 Hot-pluggable devices

Working of Fault Tolerance:

vSphere Fault Tolerance provides continuous availability for virtual machines by creating and maintaining a Secondary VM that is identical to, and continuously available to replace, the Primary VM in the event of a failover situation.

You can enable Fault Tolerance for most mission critical virtual machines. A duplicate virtual machine, called the Secondary VM, is created and runs in virtual lockstep with the Primary VM. VMware vLockstep captures inputs and events that occur on the Primary VM and sends them to the Secondary VM, which is running on another host. Using this information, the Secondary VM's execution is identical to that of the Primary VM. Because the Secondary VM is in virtual lockstep with the Primary VM, it can take over execution at any point without interruption, thereby providing fault tolerant protection.

The Primary and Secondary VMs continuously exchange heartbeats. This exchange allows the virtual machine pair to monitor the status of one another to ensure that Fault Tolerance is continually maintained. A transparent failover occurs if the host running the Primary VM fails, in which case the Secondary VM is immediately activated to replace the Primary VM. A new Secondary VM is started and Fault Tolerance redundancy is reestablished within a few seconds. If the host running the Secondary VM fails, it is also immediately replaced. In either case, users experience no interruption in service and no loss of data.

The option to turn on Fault Tolerance is unavailable (dimmed) for virtual machines if any of these conditions apply:

● The virtual machine resides on a host that does not have a license for the feature.
● The virtual machine resides on a host that is in maintenance mode or standby mode.
● The virtual machine is disconnected or orphaned (its .vmx file cannot be accessed).
● The user does not have permission to turn the feature on.

Validation Checks for Turning On Fault Tolerance

● SSL certificate checking must be enabled in the vCenter Server settings.
● The host must be in a vSphere HA cluster or a mixed vSphere HA and DRS cluster.
● The host must have ESX/ESXi 4.0 or greater installed.
● The virtual machine must not have multiple vCPUs.
● The virtual machine must not have snapshots.


● The virtual machine must not be a template.
● The virtual machine must not have vSphere HA disabled.
● The virtual machine must not have a video device with 3D enabled.

Section 9: Resource Monitoring

Resource types:

Resources include CPU, memory, power, storage, and network resources

Resources providers:

Hosts and clusters, including datastore clusters, are providers of physical resources

Resource consumers:

Virtual machines are resource consumers.

Understanding resource shares, allocations, reservations, and limits:

Resource shares: Shares specify the relative importance of a virtual machine (or resource pool). If a virtual machine has twice as many shares of a resource as another virtual machine, it is entitled to consume twice as much of that resource when these two virtual machines are competing for resources

Specifying shares makes sense only with regard to sibling virtual machines or resource pools, that is, virtual machines or resource pools with the same parent in the resource pool hierarchy. Siblings share resources according to their relative share values, bounded by the reservation and limit.

For example, an SMP virtual machine with two virtual CPUs and 1GB RAM with CPU and memory shares set to Normal has 2x1000=2000 shares of CPU and 10x1024=10240 shares of memory.

Note: Virtual machines with more than one virtual CPU are called SMP (symmetric multiprocessing) virtual machines. ESXi supports up to 64 virtual CPUs per virtual machine.

Resource Reservation:

A reservation specifies the guaranteed minimum allocation for a virtual machine. The server guarantees that amount even when the physical server is heavily loaded. You can specify a reservation if you need to guarantee that the minimum required amounts of CPU or memory are always available for the virtual machine

Resource Limits:

Limit specifies an upper bound for CPU, memory, or storage I/O resources that can be allocated to a virtual machine.


Benefits — assigning a limit is useful if you start with a small number of virtual machines and want to manage user expectations. Performance deteriorates as you add more virtual machines. You can simulate having fewer resources available by specifying a limit.

Drawbacks — you might waste idle resources if you specify a limit. The system does not allow virtual machines to use more resources than the limit, even when the system is underutilized and idle resources are available. Specify the limit only if you have good reasons for doing so.

CPU Virtualization:

When CPU resources are overcommitted, the ESXi host time-slices the physical processors across all virtual machines so each virtual machine runs as if it has its specified number of virtual processors. When an ESXi host runs multiple virtual machines, it allocates to each virtual machine a share of the physical resources.

Software-Based CPU Virtualization

With software-based CPU virtualization, the guest application code runs directly on the processor, while the guest privileged code is translated and the translated code executes on the processor.

Hardware-Assisted CPU Virtualization

Certain processors provide hardware assistance for CPU virtualization. When using this assistance, the guest can use a separate mode of execution called guest mode. The guest code, whether application code or privileged code, runs in the guest mode. When you use hardware assistance for virtualization, there is no need to translate the code. As a result, system calls or trap-intensive workloads run very close to native speed

An application is CPU-bound if it spends most of its time executing instructions rather than waiting for external events such as user interaction, device input, or data retrieval. For such applications, the CPU virtualization overhead includes the additional instructions that must be executed. This overhead takes CPU processing time that the application itself can use. CPU virtualization overhead usually translates into a reduction in overall performance.

VMware uses the term socket to describe a single package which can have one or more processor cores with one or more logical processors in each core. Each logical processor of each processor core can be used independently by the ESXi CPU scheduler to execute virtual machines, providing capabilities similar to SMP systems

The ESXi CPU scheduler can interpret processor topology, including the relationship between sockets, cores, and logical processors. The scheduler uses topology information to optimize the placement of virtual CPUs onto different sockets to maximize overall cache utilization, and to improve cache affinity by minimizing virtual CPU migrations.

To avoid confusion between logical and physical processors, Intel refers to a physical processor as a socket

Memory Management:


The VMkernel manages all machine memory. The VMkernel dedicates part of this managed machine memory for its own use. The rest is available for use by virtual machines. Virtual machines use machine memory for two purposes: each virtual machine requires its own memory and the virtual machine monitor (VMM) requires some memory and a dynamic overhead memory for its code and data.

Each virtual machine consumes memory based on its configured size, plus additional overhead memory for virtualization.

Shares: Specify the relative priority for a virtual machine if more than the reservation is available.

Reservation: A guaranteed lower bound on the amount of physical memory that the host reserves for the virtual machine, even when memory is overcommitted. Set the reservation to a level that ensures the virtual machine has sufficient memory to run efficiently, without excessive paging. After a virtual machine has accessed its full reservation, it is allowed to retain that amount of memory and this memory is not reclaimed, even if the virtual machine becomes idle.

Limit: An upper bound on the amount of physical memory that the host can allocate to the virtual machine. The virtual machine's memory allocation is also implicitly limited by its configured size. Overhead memory includes space reserved for the virtual machine frame buffer and various virtualization data structures.

Software based memory virtualization:

The VMM for each virtual machine maintains a mapping from the guest operating system's physical memory pages to the physical memory pages on the underlying machine. (VMware refers to the underlying host physical pages as “machine” pages and the guest operating system’s physical pages as “physical” pages.)

The VMM intercepts virtual machine instructions that manipulate guest operating system memory management structures so that the actual memory management unit (MMU) on the processor is not updated directly by the virtual machine.

The ESXi host maintains the virtual-to-machine page mappings in a shadow page table that is kept up to date with the physical-to-machine mappings (maintained by the VMM).

The shadow page tables are used directly by the processor's paging hardware

Hardware-Assisted Memory Virtualization

Some CPUs, such as AMD SVM-V and the Intel Xeon 5500 series, provide hardware support for memory virtualization by using two layers of page tables.

The first layer of page tables stores guest virtual-to-physical translations, while the second layer of page tables stores guest physical-to-machine translations. The TLB (translation look-aside buffer) is a cache of translations maintained by the processor's memory management unit (MMU) hardware. A TLB miss is a miss in this cache, and the hardware needs to go to memory (possibly many times) to find the required translation. For a TLB miss on a certain guest virtual address, the hardware looks at both page tables to translate the guest virtual address to the host physical address.

Memory Reclamation Techniques:

1 Transparent page sharing: Allows memory pages with identical contents to be stored only once, i.e., if multiple virtual machines run the same operating system, only one copy of the identical pages is kept in machine memory and all of those virtual machines share it, rather than each keeping its own copy.

2 Memory ballooning: Ballooning occurs when the host is short of memory, i.e., virtual machines require more memory than the host can supply. The host reclaims unused memory from inside virtual machines using the vmmemctl (balloon) driver, which is installed as part of the VMware Tools installation.

3 Memory compression: When a host's memory becomes overcommitted, ESXi compresses virtual pages and stores them in memory.

Because accessing compressed memory is faster than accessing memory that is swapped to disk, memory compression in ESXi allows you to overcommit memory without significantly hindering performance. When a virtual page needs to be swapped, ESXi first attempts to compress the page. Pages that can be compressed to 2 KB or smaller are stored in the virtual machine's compression cache, increasing the capacity of the host.

4 Host-level SSD swapping: Datastores that are created on solid state drives (SSD) can be used to allocate space for host cache. The host reserves a certain amount of space for swapping to host cache.

The host cache is made up of files on a low-latency disk that ESXi uses as a write back cache for virtual machine swap files. The cache is shared by all virtual machines running on the host. Host-level swapping of virtual machine pages makes the best use of potentially limited SSD space.

5 Paging virtual machine memory out to disk: Used as a last resort if the above techniques are not sufficient. The host swaps virtual machine memory pages out to the virtual machine's swap (.vswp) file on disk.
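
Balloon, compression, and swap activity can be observed per virtual machine with esxtop from the ESXi shell; a brief sketch (counter names as seen in the 5.x memory view):

# start esxtop and press m for the memory view
esxtop
# MCTLSZ shows the balloon size per VM, ZIP/s and UNZIP/s show compression activity,
# and SWCUR shows the amount of swapped memory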

Memory reliability:

Memory reliability, also known as error isolation, allows ESXi to stop using parts of memory when it determines that a failure might occur, as well as when a failure did occur.

When enough corrected errors are reported at a particular address, ESXi stops using this address to prevent the corrected error from becoming an uncorrected error.

Memory reliability provides a better VMkernel reliability despite corrected and uncorrected errors in RAM. It also enables the system to avoid using memory pages that might contain errors.


Storage IO control (SIOC):

vSphere Storage I/O Control provides cluster-wide storage I/O prioritization, which allows better workload consolidation and helps reduce the extra costs associated with overprovisioning.

You can control the amount of storage I/O that is allocated to virtual machines during periods of I/O congestion, which ensures that more important virtual machines get preference over less important virtual machines for I/O resource allocation.

When you enable Storage I/O Control on a datastore, ESXi begins to monitor the device latency that hosts observe when communicating with that datastore. When device latency exceeds a threshold, the datastore is considered to be congested and each virtual machine that accesses that datastore is allocated I/O resources in proportion to its shares.

Configuring Storage I/O Control is a two-step process:

1 Enable Storage I/O Control for the datastore.
2 Set the number of storage I/O shares and the upper limit of I/O operations per second (IOPS) allowed for each virtual machine.
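As a rough illustration, both steps can also be done from PowerCLI. The sketch below assumes an existing vCenter connection; the datastore and VM names are placeholders, and parameter availability may vary by PowerCLI version.

# Step 1: enable Storage I/O Control on the datastore
Get-Datastore -Name "Shared-DS01" | Set-Datastore -StorageIOControlEnabled $true

# Step 2: set disk shares and an IOPS limit for one virtual machine
$vm = Get-VM -Name "App01"
Get-VMResourceConfiguration -VM $vm |
    Set-VMResourceConfiguration -Disk (Get-HardDisk -VM $vm) `
        -DiskSharesLevel High -DiskLimitIOPerSecond 1000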

By default, all virtual machine shares are set to Normal (1000) with unlimited IOPS.

Requirements and Limitations of using SIOC:

Storage I/O Control has several requirements and limitations.

● Datastores that are Storage I/O Control-enabled must be managed by a single vCenter Server system.

● Storage I/O Control is supported on Fibre Channel-connected, iSCSI-connected, and NFS-connected storage. Raw Device Mapping (RDM) is not supported.

● Storage I/O Control does not support datastores with multiple extents.

● Before using Storage I/O Control on datastores that are backed by arrays with automated storage tiering capabilities, verify that the array has been certified as compatible with Storage I/O Control.

Resource Pools:

A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped into hierarchies and used to hierarchically partition available CPU and memory resources.

Users can create child resource pools of the root resource pool or of any user-created child resource pool. Each child resource pool owns some of the parent’s resources and can, in turn, have a hierarchy of child resource pools to represent successively smaller units of computational capability.

Resource pools allow you to delegate control over resources of a host (or a cluster), but the benefits are evident when you use resource pools to compartmentalize all resources in a cluster.

When you move a virtual machine to a new resource pool:

● The virtual machine's reservation and limit do not change.
● If the virtual machine's shares are high, medium, or low, %Shares adjusts to reflect the total number of shares in use in the new resource pool.
● If the virtual machine has custom shares assigned, the share value is maintained.

Page 37: V mwarev sphere5.1notes-v2

Types of reservation:

1 Fixed: The amount of resources available to the resource pool is limited to the values configured on the resource pool.

2 Expandable: When a child resource pool needs more resources than it has reserved, it can borrow unreserved resources from its parent pool.
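A minimal PowerCLI sketch of both reservation types (the cluster, pool, and VM names are illustrative): it creates a child resource pool with a fixed CPU reservation and an expandable memory reservation, then moves a virtual machine into it.

$cluster = Get-Cluster -Name "Prod-Cluster"

# Fixed CPU reservation, expandable memory reservation
New-ResourcePool -Location $cluster -Name "Tier1-Apps" `
    -CpuReservationMhz 4000 -CpuExpandableReservation $false `
    -MemReservationMB 8192 -MemExpandableReservation $true `
    -CpuSharesLevel High -MemSharesLevel High

# Moving a VM into the pool keeps its reservation, limit and custom shares
Move-VM -VM (Get-VM -Name "App01") -Destination (Get-ResourcePool -Name "Tier1-Apps")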

Distributed Resource Scheduling

A cluster is a collection of ESXi hosts and associated virtual machines with shared resources and a shared management interface.

When DRS is enabled, admission control and initial placement of virtual machines into the cluster are enabled as well. This means that when you create a virtual machine at the cluster level, DRS recommends a host on which to place the virtual machine if DRS is set to manual, and automatically selects a host if DRS is set to fully automated.

DRS uses vMotion to automatically load-balance virtual machines when it determines that the load on any host in the cluster is too high.

DRS migration threshold:

The DRS migration threshold allows you to specify which recommendations are generated and then applied (when the virtual machines involved in the recommendation are in fully automated mode) or shown (if in manual mode). This threshold is also a measure of how much cluster imbalance across host (CPU and memory) loads is acceptable.

Migration Recommendations:

If a cluster is created in the default manual mode or in partially automated mode, vCenter Server displays migration recommendations on the DRS Recommendations page.

The reasons for which recommendations to move virtual machines are generated include:

● Balance average CPU loads or reservations.
● Balance average memory loads or reservations.
● Satisfy resource pool reservations.
● Satisfy an affinity rule.
● Host is entering maintenance mode or standby mode.
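In manual or partially automated mode, the same recommendations can also be listed and applied from PowerCLI. A small sketch, assuming the cluster name shown is a placeholder:

# List the pending DRS recommendations for the cluster
$recs = Get-DrsRecommendation -Cluster "Prod-Cluster" -Refresh
$recs

# Apply them (equivalent to clicking Apply on the DRS Recommendations page)
$recs | Apply-DrsRecommendation -Confirm:$false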

Requirements to enable DRS:

1 Shared storage
2 The same processor family on all hosts in the DRS cluster, to enable vMotion
3 Raw disks and virtual machines with MSCS clustering enabled are not supported for vMotion

Automation levels on DRS cluster:

Manual: Placement and migration recommendations are displayed, but do not run until you manually apply the recommendation.

Partially Automated: Initial placement is performed automatically. Migration recommendations are displayed, but do not run.

Fully automated: Placement and migration recommendations run automatically.

Disabled: vCenter Server does not migrate the virtual machine or provide migration recommendations for it.
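The cluster-wide automation level can be set when enabling DRS. A minimal PowerCLI sketch (the cluster name is a placeholder):

# Enable DRS on the cluster and run placement/migration recommendations automatically
Set-Cluster -Cluster "Prod-Cluster" -DrsEnabled $true `
    -DrsAutomationLevel FullyAutomated -Confirm:$false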

Disabling DRS:

When DRS is disabled, the cluster’s resource pool hierarchy and affinity rules are not reestablished when DRS is turned back on. If you disable DRS, the resource pools are removed from the cluster. To avoid losing the resource pools, save a snapshot of the resource pool tree on your local machine. You can use the snapshot to restore the resource pool when you enable DRS.

Things to consider before moving hosts out of DRS cluster:

When a host is removed from a DRS cluster, it affects the availability of resources in that cluster and, in turn, affects resource pool allocations.

Consider the following affected objects before removing a host from a DRS cluster:

Resource Pool Hierarchies – When you remove a host from a cluster, the host retains only the root resource pool, even if you used a DRS cluster and decided to graft the host resource pool when you added the host to the cluster. In that case, the hierarchy remains with the cluster. You can create a host-specific resource pool hierarchy.

Virtual Machines – A host must be in maintenance mode before you can remove it from the cluster and for a host to enter maintenance mode all powered-on virtual machines must be migrated off that host.

Invalid Clusters – When you remove a host from a cluster, the resources available for the cluster decrease. If the cluster has enough resources to satisfy the reservations of all virtual machines and resource pools in the cluster, the cluster adjusts resource allocation to reflect the reduced amount of resources. If the cluster does not have enough resources to satisfy the reservations of all resource pools, but there are enough resources to satisfy the reservations for all virtual machines, an alarm is issued and the cluster is marked yellow. DRS continues to run.

Validity of DRS cluster:


DRS clusters become overcommitted or invalid for several reasons.

● A cluster might become overcommitted if a host fails.
● A cluster becomes invalid if vCenter Server is unavailable and you power on virtual machines using a vSphere Client connected directly to a host.
● A cluster becomes invalid if the user reduces the reservation on a parent resource pool while a virtual machine is in the process of failing over.
● If changes are made to hosts or virtual machines using a vSphere Client connected to a host while vCenter Server is unavailable, those changes take effect. When vCenter Server becomes available again, you might find that clusters have turned red or yellow because cluster requirements are no longer met.

Distributed Power Management:

The vSphere Distributed Power Management (DPM) feature allows a DRS cluster to reduce its power consumption by powering hosts on and off based on cluster resource utilization.

vSphere DPM monitors the cumulative demand of all virtual machines in the cluster for memory and CPU resources and compares this to the total available resource capacity of all hosts in the cluster. If sufficient excess capacity is found, vSphere DPM places one or more hosts in standby mode and powers them off after migrating their virtual machines to other hosts. Conversely, when capacity is deemed to be inadequate, DRS brings hosts out of standby mode (powers them on) and uses vMotion to migrate virtual machines to them. When making these calculations, vSphere DPM considers not only current demand, but it also honors any user-specified virtual machine resource reservations.

vSphere DPM can use one of three power management protocols to bring a host out of standby mode:

● Intelligent Platform Management Interface (IPMI)
● Hewlett-Packard Integrated Lights-Out (iLO)
● Wake-On-LAN (WOL)

Each protocol requires its own hardware support and configuration. If a host does not support any of these protocols it cannot be put into standby mode by vSphere DPM. If a host supports multiple protocols, they are used in the following order: IPMI, iLO, WOL.


Section 10: Auto Deploy

Auto Deploy is a rapid ESXi server provisioning tool from VMware.

Components of Auto Deploy:

1 Auto Deploy server
2 TFTP server
3 DHCP server
4 PowerCLI
5 vCenter Server
6 Host profiles

1 Install the Auto Deploy server on the vCenter Server machine, or on any other Windows Server machine, using the vCenter Server installer.

2 Next, download and configure a TFTP server.
3 Once TFTP is configured and the service is started, download the boot files from Auto Deploy and save them in the TFTP folder.
4 Now install PowerCLI. During installation of PowerCLI, set the execution policy to remote signed using the command below:

Set-ExecutionPolicy RemoteSigned

5 Now launch PowerCLI and connect to vCenter Server using the command below. Provide vCenter Server credentials if prompted.

Connect-VIServer [vc dns name]

6 Now create a software depot, which is a repository of ESXi images and software VIBs to be packaged and deployed, using the command below.

Add-EsxSoftwareDepot [path of esxi zip files or vibs]

7 Now create an image profile. A profile is basically an image that we build from the modules that were added to the depot.

New-EsxImageProfile -CloneProfile "ESXi-5.0.0-standard" -Name "EsxiAutoDeployFull"

8 Now add software packages to this profile.

Add-EsxSoftwarePackage -ImageProfile "EsxiAutoDeployFull" -SoftwarePackage [package name]

9 To list the packages added to the profile:

Get-EsxSoftwarePackage [package name]

10 Now create a deploy rule. A deploy rule defines which hosts the image profile is applied to.

New-DeployRule -Name [nameofrule] -Item "EsxiAutoDeployFull" -AllHosts


11 Now export the image profile to the Auto Deploy server:

Export-EsxImageProfile -ImageProfile "EsxiAutoDeployFull" -ExportToBundle -FilePath [location to save]

12 Now add the deploy rule

Add-DeployRule "[rulename]"
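As an optional check (not part of the original steps), the active rule set can be listed to confirm that the rule above was added:

Get-DeployRuleSet        # shows the rules currently in the active rule set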

For lab purposes:

a Create a new virtual machine to install ESXi using Auto Deploy, with 2 GB RAM, 2 vCPUs, an E1000 network card, and the last option selected for CPU/MMU virtualization.

b Make a note of the MAC address of the network card.

13 Now connect to the DHCP server and create a new reservation.
14 Provide an IP address and enter the MAC address that we copied from the VM that was created.
15 On the reservation that was created, configure option 66 (TFTP server IP/name) and option 67 (boot file name).
16 Now reboot the virtual machine; we should see the machine contact DHCP and then load the boot files from TFTP.

17 We have now completed the Auto Deploy provisioning of one ESXi server.

18 Now we can go ahead and configure it and then take a host profile.

19 Create a cluster and configure it.

20 Now attach the host profile and check for compliance.

21 Now go back to PowerCLI and create a new deploy rule to add additional hosts to the new cluster that we created:

New-DeployRule -Name "[name of rule]" -Item "EsxiAutoDeployFull", [host profile], [clustername] -Pattern "model=VMware Virtual Platform"

22 Now add the new deploy rule:

Add-DeployRule -DeployRule "rulename"

23 Remove the old rule that we created

Remove-DeployRule -DeployRule [rulename] -Delete

24 Now go to vCenter, create a new VM, and perform the steps to add a reservation in DHCP.


Section 11: Patch Management using Update Manager

Update Manager enables centralized, automated patch and version management for VMware vSphere® ESXi™ hosts, virtual machine hardware, VMware Tools, and virtual appliances.

Update manager capabilities:

1. Enables cross-platform upgrade from VMware ESX® to ESXi

2. Automated patch downloading:

Begins with information-only downloading

Is scheduled at regular configurable intervals

Contacts the following sources for patching ESXi hosts:

• For VMware® patches: https://hostupdate.vmware.com

• For third-party patches: URL of third-party source

3. Creation of baselines and baseline groups

4. Scanning: Inventory systems are scanned for baseline compliance.

5. Remediation: Inventory systems that are not current can be automatically patched.

6. Reduces the number of reboots required after VMware Tools updates

H/W and S/W Requirements:

H/W:

1. Processor: 2 CPUs
2. Memory: 2 GB if on a different machine from vCenter Server, 4 GB if on the same machine as vCenter Server
3. Disk space: 40 GB

S/W:
1. OS: Windows 64-bit operating system
2. Database: SQL or Oracle DB
3. DSN: 32-bit connection, even on a 64-bit Windows OS

Information needed during installation of Update Manager:

1. vCenter Server IP address/hostname
2. vCenter Server's username and password
3. Database DSN
4. Update Manager's port and proxy server settings
5. Destination folder to download and save patches


Baselines in update manager:

1. A baseline consists of one or more patches, extensions, or upgrades

Types of Baselines:

Host patch

Host extension

Host upgrade

Virtual machine upgrade for hardware or VMware Tools

Virtual appliance upgrade

Patch Recall Notification:

At regular intervals, Update Manager contacts VMware to download notifications about patch recalls, new fixes, and alerts.

Notification Check Schedule is selected by default.

On receiving patch recall notifications, Update Manager:

Generates a notification in the notification tab

No longer applies the recalled patch to any host:

• Patch is flagged as recalled in the database.

Deletes the patch binaries from its patch repository

Does not uninstall recalled patches from ESXi hosts:

• Instead, it waits for a newer patch and applies that to make a host compliant.

Remediation Enabled for DRS:

Eliminate downtime for virtual machines when patching ESXi hosts:

1. Update Manager puts host in maintenance mode.

2. VMware vSphere® Distributed Resource Scheduler™ moves virtual machines to available host.

3. Update Manager patches host and then exits maintenance mode.

4. DRS moves virtual machines back per rule.


Update Manager has two deployment models:

Internet-connected model:

The Update Manager server is connected to the VMware patch repository and third-party patch repositories (for ESX/ESXi 4.x and ESXi 5.x hosts, as well as for virtual appliances). Update Manager works with vCenter Server to scan and remediate the virtual machines, appliances, hosts, and templates.

Air-gap model:

Update Manager has no connection to the Internet and cannot download patch metadata. In this model, you can use UMDS to download and store patch metadata and patch binaries in a shared repository. To scan and remediate inventory objects, you must configure the Update Manager server to use a shared repository of UMDS data as a patch datastore.

Scanning host:

Scanning is the process in which attributes of a set of hosts, virtual machines, or virtual appliances are evaluated against the patches, extensions, and upgrades included in the attached baselines and baseline groups.

Compliance:

When hosts or virtual machines are scanned against baselines, Update Manager checks for compliance: if the patches included in the baseline are already installed on the host or virtual machine, it is said to be in compliance with the baseline and the result is displayed in green.

If the patches included in the baseline are not already installed on the host or virtual machine, it is said to be non-compliant and the result of the scan is shown in red.

When the result of a scan is non-compliant, we need to go ahead and install the patches.

Remediation:

You can remediate virtual machines, virtual appliances, and hosts using either user-initiated remediation or scheduled remediation at a time that is convenient for you.

Remediation is nothing but installing patches on the host
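Attach, scan, and remediate can also be scripted. The sketch below is a rough outline using the Update Manager PowerCLI cmdlets (host and baseline names are illustrative; cmdlet names follow the vSphere 5.x Update Manager PowerCLI and may differ in other releases).

$esx      = Get-VMHost -Name "esxi01.lab.local"                      # assumed host name
$baseline = Get-Baseline -Name "Critical Host Patches (Predefined)"  # assumed baseline name

Attach-Baseline -Baseline $baseline -Entity $esx    # attach the baseline to the host
Scan-Inventory -Entity $esx                         # scan the host against attached baselines
Get-Compliance -Entity $esx -Baseline $baseline     # compliant or non-compliant

# Remediation installs the missing patches (the host enters maintenance mode and may reboot)
Remediate-Inventory -Entity $esx -Baseline $baseline -Confirm:$false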


Section 12: VMware Converter

VMware® vCenter Converter Standalone is a scalable solution to convert virtual and physical machines to VMware virtual machines. You can also configure existing virtual machines in your vCenter Server environment.

The Converter Standalone application consists of Converter Standalone server, Converter Standalone worker, Converter Standalone client, and Converter Standalone agent.

Converter Standalone server

Enables and performs the import and export of virtual machines. The Converter Standalone server comprises two services, Converter Standalone server and Converter Standalone worker. The Converter Standalone worker service is always installed with the Converter Standalone server service.

Converter Standalone agent

The Converter Standalone server installs the agent on Windows physical machines to import them as virtual machines. You can choose to remove the Converter Standalone agent from the physical machine automatically or manually after the import is complete.

Converter Standalone client

The Converter Standalone server works with the Converter Standalone client. The client component consists of the Converter Standalone user interface, which provides access to the Conversion and the Configuration wizards, and allows you to manage the conversion and the configuration tasks.

VMware vCenter converter Boot CD

The VMware vCenter Converter Boot CD is a separate component that you can use to perform cold cloning on a physical machine. Converter Standalone 4.3 and later versions do not provide a Boot CD, but you can use previous versions of the Boot CD to perform cold cloning

In Windows conversions, the Converter Standalone agent is installed on the source machine and the source information is pushed to the destination.

In Linux conversions, no agent is deployed on the source machine. Instead, a helper virtual machine is created and deployed on the destination ESX/ESXi host. The source data is then copied from the source Linux machine to the helper virtual machine. After the conversion is complete, the helper virtual machine shuts down to become the destination virtual machine the next time you power it on.

Types of Data Cloning Operations:

Volume based: Copy volumes from the source machine to the destination machine. Volume-based cloning is relatively slow. File-level cloning is slower than block-level cloning. Dynamic disks are converted into basic volumes on the target virtual machine.

Volume-based cloning is performed at the file level or block level, depending on the destination volume size that you select.


Volume-based cloning at the file level

Performed when you select a size smaller than the original volume for NTFS volumes or you choose to resize a FAT volume. Volume-based cloning at the file level is supported only for FAT, FAT32, NTFS, ext2, ext3, ext4, and ReiserFS file systems.

Volume-based cloning at the block level

Performed when you choose to preserve the size of the source volume or when you specify a larger volume size for NTFS source volumes.

Disk based: Create copies of the source machines, for all types of basic and dynamic disks. You cannot select which data to copy. Disk-based cloning is faster than volume-based cloning.

Linked clone: Use to quickly check compatibility of non-VMware images. For certain third-party sources, the linked clone is corrupted if you power on the source machine after the conversion. Linked cloning is the fastest (but incomplete) cloning mode that Converter Standalone supports.

Supported source volumes:

Virtual machine conversion:

• Basic volumes
• All types of dynamic volumes
• Master boot record (MBR) disks

Powered-on machine conversion:

• All types of source volumes that Windows recognizes
• Linux ext2, ext3, and ReiserFS

System settings affected by conversion:

1. Preserved settings:

Operating system configuration (computer name, security ID, user accounts, profiles, preferences, and so on)

Applications and data files
Volume serial number for each disk partition

2. Settings changed during conversion:

Some hardware-dependent drivers
Mapped drive letters


Supported Operating systems:


Supported source types for conversion:


Converter limitations:

Limitations for powered on virtual machines:

Windows xp and later:

• Synchronization is supported only for volume-based cloning at the block level.
• Scheduling synchronization is supported only for managed destinations that are ESX 4.0 or later.

Linux:

• Only volume-based cloning at the file level is supported.
• Only managed destinations are supported.
• Converting multiboot virtual machines is supported only if GRUB is installed as the boot loader. LILO is not supported.

Limitations for converting virtual machines:

Windows virtual machine sources:

When you convert a virtual machine with VMware snapshots, the snapshots are not transferred to the destination virtual machine

Linux virtual machine sources

Only disk-based cloning is supported for Linux guest operating systems. Configuration or customization is not supported for Linux guest operating systems. Installing VMware Tools is not supported on Linux guest operating systems


Section 13: vSphere Data Protection

vSphere Data Protection (VDP) is a robust, simple-to-deploy, disk-based backup and recovery solution. VDP is fully integrated with VMware vCenter Server and enables centralized and efficient management of backup jobs while storing backups in a deduplicated destination storage location.

VDP has two tiers:

• vSphere Data Protection (VDP)
• vSphere Data Protection Advanced (VDP Advanced)

Features of VDP:

Virtual machines supported per VDP appliance: 100
Maximum datastore size: 2 TB
Ability to expand current datastore: No
Support for image-level backups: Yes
Support for guest-level backups for SQL Servers: No
Support for guest-level backups for Exchange Servers: No
Support for file-level recovery: Yes

Benefits of vSphere data protection:

● Provides fast and efficient data protection for all of your virtual machines, even those powered off or migrated between ESX hosts.
● Significantly reduces disk space consumed by backup data using patented variable-length deduplication across all backups.
● Reduces the cost of backing up virtual machines and minimizes the backup window using Changed Block Tracking (CBT) and VMware virtual machine snapshots.
● Allows for easy backups without the need for third-party agents installed in each virtual machine.
● Uses a simple, straightforward installation as an integrated component within vSphere, which is managed by a web portal.
● Provides direct access to VDP configuration integrated into the vSphere Web Client.
● Protects backups with checkpoint and rollback mechanisms.
● Provides simplified recovery of Windows and Linux files with end-user initiated file-level recoveries from a web-based interface.

A datastore is a virtual representation of a combination of underlying physical storage resources in the datacenter. A datastore is the storage location (for example, a physical disk, a RAID, or a SAN) for virtual machine files.

Changed Block Tracking (CBT) is a VMkernel feature that keeps track of the storage blocks of virtual machines as they change over time. The VMkernel keeps track of block changes on virtual machines, which enhances the backup process for applications that have been developed to take advantage of VMware’s vStorage APIs.
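Whether CBT is enabled for a given virtual machine can be checked quickly from PowerCLI; a small sketch (the VM name is a placeholder):

# Returns True when Changed Block Tracking is enabled in the VM's configuration
(Get-VM -Name "App01" | Get-View).Config.ChangeTrackingEnabled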


File Level Recovery (FLR) allows local administrators of protected virtual machines to browse and mount backups for the local machine. From these mounted backups, the administrator can then restore individual files. FLR is accomplished using the vSphere Data Protection Restore Client.

VMware vStorage APIs for Data Protection (VADP) enables backup software to perform centralized virtual machine backups without the disruption and overhead of running backup tasks from inside each virtual machine.

Virtual Machine Disk (VMDK) is a file or set of files that appears as a physical disk drive to a guest operating system. These files can be on the host machine or on a remote file system.

Image-level Backup and Restore:

VDP creates image-level backups, which are integrated with vStorage API for Data Protection, a feature set within vSphere to offload the backup processing overhead from the virtual machine to the VDP Appliance. The VDP Appliance communicates with the vCenter Server to make a snapshot of a virtual machine’s .vmdk files. Deduplication takes place within the appliance using a patented variable-length deduplication technology

Each VDP Appliance can back up as many as eight virtual machines simultaneously.

The advantages of VMware image-level backups are:

• Provides full image backups of virtual machines, regardless of the guest operating system
• Utilizes the efficient SCSI HotAdd transport method when available and properly licensed, which avoids copying the entire VMDK image over the network
• Provides file-level recovery from image-level backups
• Deduplicates within and across all .vmdk files protected by the VDP Appliance
• Uses CBT for faster backups and restores
• Eliminates the need to manage backup agents in each virtual machine
• Supports simultaneous backup and recovery for superior throughput

Guest-level Backup and Restore

VDP Advanced supports guest-level backups for Microsoft SQL and Exchange Servers. With guest-level backups, client agents are installed on the SQL or Exchange Servers in the same manner backup agents are typically installed on physical servers.

The advantages of VMware guest-level backups are:

• Provides a higher level of deduplication than image-level backups
• Provides additional application support for SQL or Exchange Servers inside the virtual machines
• Support for backing up and restoring entire SQL or Exchange Servers or selected databases
• Ability to support application-consistent backups
• Identical backup methods for physical and virtual machines

vSphere Data Protection (VDP) uses a vSphere Web Client and a VDP Appliance to store backups to deduplicated storage.

VDP is composed of a set of components that run on different machines

• vCenter Server 5.1
• VDP Appliance (installed on ESX/ESXi 4.1 or 5.x)
• vSphere Web Client


vSphere data protection requirements:

VDP 5.1 requires the following software:

• VMware vCenter Server
• vCenter Server Linux or Windows: version 5.1
• vSphere Web Client (see the VMware website for current vSphere 5.1 web browser support)
• Web browsers must be enabled with Adobe Flash Player 11.3 or higher to access the vSphere Web Client and VDP functionality
• VMware ESX/ESXi (the following versions are supported): ESX/ESXi 4.1, ESXi 5.0, ESXi 5.1

H/W requirements:

VDP is available in three configurations:

0.5 TB, 1 TB, and 2 TB

Processor: 2 GHz
Memory: 4 GB

Best practices for deploying vSphere Data Protection (VDP) Appliance

● Deploy the VDP Appliance on shared VMFS5 or higher to avoid block size limitations.
● Avoid deploying virtual machines with IDE virtual disks. VDP does not perform well with IDE virtual disks.
● If you are using ESXi 4.1 or 5.0, make sure the ESXi hosts are licensed for HotAdd. ESXi 5.1 includes this feature by default.
● HotAdd transport is recommended for faster backups and restores and less exposure to network routing, firewall, and SSL certificate issues.
● To support HotAdd, the VDP Appliance must be deployed on an ESXi host that has a path to the storage holding the virtual disk(s) being backed up.
● HotAdd will not work if the virtual machine(s) being backed up have any independent virtual hard disks.
● When planning for backups, make sure the disks are supported by VDP. Currently, VDP does not support the following disk types:
  - Independent
  - RDM Independent - Virtual Compatibility Mode
  - RDM Physical Compatibility Mode
● Make sure that all virtual machines are running hardware version 7 or higher in order to support Changed Block Tracking (CBT).

Installing and configuring VDP:

1 From a web browser, access the vSphere Web Client: https://<IP_address_vCenter_Server>:9443/vsphere-client/
2 Log in with administrative rights.
3 Select vCenter Home > vCenter > VMs and Templates. Expand the vCenter tree and select the VDP Appliance.
4 Right-click the VDP Appliance and select Open Console.
5 After the installation files load, the Welcome screen for the VDP menu appears. Open a web browser and type: https://<IP_address_VDP_Appliance>:8543/vdp-configure/
6 From the VMware Login screen, enter the following:
a User: root
b Password: changeme
c Click Login
7 The Welcome screen appears. Click Next.
8 The Network settings dialog box appears. Specify (or confirm) the following:
a IPv4 static address
b Netmask
c Gateway
d Primary DNS
e Secondary DNS
f Host name
g Domain
9 Click Next.
10 The Time Zone dialog box appears. Select the appropriate time zone and click Next.
11 The VDP credentials dialog box displays. For VDP credentials, type in the appliance password. This will be the universal configuration password. Specify a password that contains the following: nine characters, at least one uppercase letter, at least one lowercase letter, at least one number, and no special characters.
12 Click Next.
13 The vCenter registration dialog box appears. Specify the following:
a vCenter user name (if the user belongs to a domain account, it should be entered in the format "SYSTEM-DOMAIN\admin")
b vCenter password
c vCenter host name (IP address or FQDN)
d vCenter port
e SSO host name (IP address or FQDN)
f SSO port
14 Click Test connection. A Connection success message displays. If this message does not display, troubleshoot your settings and repeat this step until a successful message displays. NOTE: If on the vCenter registration page of the wizard you receive the message "Specified user either is not a dedicated VDP user or does not have sufficient vCenter rights to administer VDP. Please update your user role and try again," go to "User Account Configuration" on page 18 for instructions on how to update the vCenter user role.
15 Click OK.
16 Click Next.
17 The Ready to Complete page displays. Click Finish. A message displays that configuration is complete. Click OK.
18 Configuration of the VDP Appliance is now complete, but you will need to return to the vSphere Web Client and reboot the appliance. Using the vSphere Web Client, right-click the appliance and select Restart Guest OS.
19 In the Confirm Restart message, click Yes. The reboot can take up to 30 minutes.

Ports used by vDP


Port number   Source          Destination   Usage
111           VDP             ESXi          rpcbind
700           VDP             LDAP          AD login manager tool
7778/7779     vCenter         VDP           VDP RMI
8509          vCenter         VDP           Tomcat
8580          vCenter         VDP           VDP downloader
9443          vCenter         VDP           VDP web service
27000         VDP             vCenter       Licensing
28001         MS app client   VDP           Client software