Page 1: vSphere

vSphere 5.0 – What’s New

Lovas Balázs, VMware instructor

Arrow ECS Kft.

Page 2: vSphere

Agenda

• Platform

• Misc

• Storage

• Network

• HA

• Data Recovery

• Auto Deploy

• SRM 5

Page 3: vSphere

PLATFORM

Page 4: vSphere

New ESXi Hardware Maximums

New for ESXi 5.0:

– 2TB host memory

– Up to 160 logical CPUs

– 512 virtual machines per host

– 2,048 virtual CPUs per host


Page 5: vSphere

ESXi Convergence

Overview

• vSphere 5.0 utilizes the ESXi hypervisor exclusively

• ESXi is the gold standard for hypervisors

Benefits

• Thin architecture

• Smaller security footprint

• Streamlined deployment and configuration

• Simplified patching and updating model

Page 6: vSphere

ESXi 5.0 Firewall Features

• ESXi 5.0 has a new firewall engine that is not based on iptables.

• The firewall is service-oriented and stateless.
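
A quick look from the ESXi shell (the sshServer ruleset is just an example; the rulesets on your host may differ):

# check whether the firewall is enabled, and its default policy
esxcli network firewall get
# list the service-oriented rulesets and their enabled state
esxcli network firewall ruleset list
# enable a specific ruleset, e.g. the SSH server
esxcli network firewall ruleset set --ruleset-id=sshServer --enabled=true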

Page 7: vSphere

DCUI over ssh
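
This lets you reach the Direct Console User Interface from a remote shell session; a minimal sketch (the host name is illustrative):

# open an SSH session to the host (SSH must be enabled)
ssh root@esx01.lab.local
# from the ESXi shell, start the console UI; Ctrl-C exits back to the shell
dcui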

Page 8: vSphere

vSphere 5.0 – Scaling Virtual Machines

Overview

• Create virtual machines with up to 32 vCPUs and 1 TB of RAM

• 4x the size of previous vSphere versions

Benefits

• Run even the largest applications in vSphere, including very large databases

• Virtualize even more applications than ever before (Tier 1 and 2)
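
A minimal PowerCLI sketch of resizing a VM to these maximums (the VM name is illustrative; the VM must be powered off and on virtual hardware version 8, and older PowerCLI releases use -MemoryMB 1048576 instead of -MemoryGB):

# give the VM the new vSphere 5.0 maximums: 32 vCPUs and 1TB of RAM
Set-VM -VM (Get-VM "BigDB01") -NumCpu 32 -MemoryGB 1024 -Confirm:$false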

Page 9: vSphere

New Virtual Machine Features

• vSphere 5.0 supports the industry's most capable virtual machines

VM Scalability

• 32 virtual CPUs per VM

• 1TB RAM per VM

• 4x previous capabilities!

Broader Device Coverage

• UI for multi-core virtual CPUs

• Client-connected USB devices

• USB 3.0 devices

• Smart Card Readers for VM console access

• Support for Mac OS X servers

Richer Desktop Experience

• 3D graphics

Other new features

• VM BIOS boot order config API and PowerCLI interface

• EFI firmware

Page 10: vSphere

Misc

Page 11: vSphere

Update Manager Features

• VM patching has been removed.

• Optimized cluster patching and upgrade:

– Based on available cluster capacity, Update Manager can remediate an optimal number of ESX/ESXi servers simultaneously without virtual machine downtime.

– For scenarios where turnaround time is more important than virtual machine uptime, you can choose to remediate all ESX servers in a cluster simultaneously.

• Less downtime for VMware Tools upgrades:

– An upgrade can be scheduled to occur at the time of the next virtual machine reboot.

• New Update Manager Utility:

– Helps users reconfigure the Update Manager setup

– Change the database password and proxy authentication

– Replace the SSL certificates for Update Manager
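
In PowerCLI terms (using the Update Manager snap-in), cluster remediation looks roughly like this; the cluster and baseline names are illustrative:

# attach a patch baseline to a cluster, scan it, then remediate;
# Update Manager decides how many hosts it can patch in parallel
$cluster = Get-Cluster "Prod"
$baseline = Get-Baseline -Name "Critical Host Patches"
Attach-Baseline -Baseline $baseline -Entity $cluster
Scan-Inventory -Entity $cluster
Remediate-Inventory -Entity $cluster -Baseline $baseline -Confirm:$false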

Page 12: vSphere

Update Manager: ESX to ESXi Migration

• Supported Paths

– Migration from ESX (“Classic”) 4.x to ESXi 5.0

– For VUM-driven migration, pre-4.x hosts will have to be upgraded to 4.x first

• It might be better simply to do a fresh install of ESXi 5.0

• Preservation of Configuration Information

– Most standard configurations will be preserved, but not all:

• Information that’s not applicable to ESXi will not be preserved, e.g.

– /etc/yp.conf (no NIS in ESXi)

– /etc/sudoers (no sudo in ESXi)

• Any additional custom configuration files will not be preserved, e.g.

– Any scripts added to /etc/rc.d


Page 13: vSphere

vSphere 5.0 – vCenter Server Appliance (Linux)

Overview

• Run vCenter Server as a Linux-based appliance

Benefits

• Simplified setup and configuration

• Enables deployment choices according to business needs or requirements

• Leverages vSphere availability features for protection of the management layer

Page 14: vSphere

vCenter Linux

• The vCenter Server Appliance (VCSA) consists of:

– A pre-packaged 64-bit application running on SLES 11

• Distributed with sparse disks

• Disk footprint: 3.6GB distribution, ~5GB minimum deployed, ~80GB maximum deployed

– A built-in enterprise-level database, with optional support for a remote Oracle/DB2 database

• Limits are the same for vCenter Server and the VCSA:

– Embedded DB: 5 hosts / 50 VMs

– External DB: <300 hosts / <3,000 VMs (64-bit)

– A web-based configuration interface

Page 15: vSphere
Page 16: vSphere

Configuration

• Complete configuration is possible through a powerful web-based interface!

Page 17: vSphere

vSphere 5.0 – Web Client

Overview

• Run and manage vSphere from any web browser, anywhere in the world

Benefits

• Platform independence

• Replaces the Web Access GUI

• Building block for cloud-based administration

Page 18: vSphere

Web Client Use Case

– VM Management

• VM Provisioning

• Edit VM, VM power ops, Snapshots, Migration

• VM Resource Management

• View all vSphere objects (hosts, clusters, datastores, folders, etc.)

– Basic Health Monitoring

– Viewing the VM console remotely

– Search through large, complex environments

– vApp Management

• vApp Provisioning, vApp Editing, vApp Power Operations

Page 19: vSphere

vSphere 5.0 – vMotion Enhancements

• Multi-NIC Support • Support up to four 10Gbps or sixteen 1Gbps NICs (ea. NIC must have it's own IP)

• Single vMotion can now scale over multiple NICs (load balance across multiple NICs)

• Faster vMotion times and allows for a higher number of concurrent vMotions

• Reduced Application Overhead • Slowdown During Page Send (SDPS) feature throttles busy VMs to reduce timeouts and improve success

• Ensures less than 1 Second switchover time in almost all cases

• Support for higher latency networks ( up to ~10ms) • Extend vMotion capabilities over slower networks
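
A hedged PowerCLI sketch of setting up two vMotion vmknics (the switch, port group names, and IPs are illustrative; in practice each port group is also pinned to a different physical NIC via teaming):

# each vMotion-enabled vmknic needs its own IP address
$vmhost = Get-VMHost "esx01.lab.local"
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch0" -PortGroup "vMotion-1" -IP 10.0.1.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch0" -PortGroup "vMotion-2" -IP 10.0.2.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true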

Page 20: vSphere

Host Profiles Enhancements

• New feature enables greater flexibility and automation

– Integration with Auto Deploy

– Host Profiles now has support for a greatly expanded set of configurations, including:

• iSCSI

• FCoE

• Native Multipathing

• Device Claiming and PSP Device Settings

• Kernel Module Settings

• And more
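
A minimal PowerCLI sketch of the capture-and-apply cycle (the host names and profile name are illustrative):

# capture a reference host's configuration as a profile
$profile = New-VMHostProfile -Name "GoldProfile" -ReferenceHost (Get-VMHost "esx01.lab.local")
# the target host must be in maintenance mode before the profile is applied
Set-VMHost -VMHost (Get-VMHost "esx02.lab.local") -State Maintenance
Apply-VMHostProfile -Entity (Get-VMHost "esx02.lab.local") -Profile $profile -Confirm:$false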

Page 21: vSphere

STORAGE

Page 22: vSphere

VMFS-5 vs VMFS-3 Feature Comparison

• 2TB+ VMFS volumes: VMFS-3 yes (using extents); VMFS-5 yes

• Support for 2TB+ physical RDMs: VMFS-3 no; VMFS-5 yes

• Unified block size (1MB): VMFS-3 no; VMFS-5 yes

• Atomic Test & Set enhancements (part of the VAAI locking mechanism): VMFS-3 no; VMFS-5 yes

• Sub-blocks for space efficiency: VMFS-3 64KB (max ~3k); VMFS-5 8KB (max ~30k)

• Small-file support: VMFS-3 no; VMFS-5 1KB

Page 23: vSphere

VMFS-3 to VMFS-5 Upgrade

• The Upgrade to VMFS-5 option is clearly displayed in the vSphere Client under the Configuration -> Storage view.

• It is also displayed in the Datastores -> Configuration view.

• Upgrades are non-disruptive.
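
The upgrade can also be driven from the ESXi shell; a hedged sketch, assuming the esxcli storage namespace as documented for 5.0 (the volume label is illustrative):

# list mounted filesystems, then upgrade a VMFS-3 volume in place while VMs keep running
esxcli storage filesystem list
esxcli storage vmfs upgrade --volume-label=Datastore01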

Page 24: vSphere

VAAI Thin Provisioning - Dead Space Reclamation

• Dead space is previously written blocks that are no longer used by the VM, for instance after a Storage vMotion

• vSphere conveys block information to storage system via VAAI & storage system reclaims the dead blocks

(Diagram: after a Storage vMotion from VMFS volume A to VMFS volume B, vSphere reports the dead blocks on volume A to the storage system for reclamation.)

Page 25: vSphere

‘Out Of Space’ User Experience

• Space exhaustion warning in the UI

• On space exhaustion, affected VMs are paused; the LUN stays online awaiting space allocation

• Resolve via Storage vMotion-based evacuation, or by adding space

Page 26: vSphere

Profile-driven Storage

Overview

• Tier storage based on performance characteristics (i.e. datastore cluster)

• Simplify initial storage placement

• Load balance based on I/O

Benefits

• Eliminate VM downtime for storage maintenance

• Reduce the time needed for storage planning/configuration

• Reduce errors in the selection and management of VM storage

• Increase storage utilization by optimizing placement

(Diagram: datastores grouped into Tier 1, Tier 2, and Tier 3 by performance characteristics, such as high I/O throughput.)

Page 27: vSphere

Selecting a Storage Profile during provisioning

By selecting a VM Storage Profile, datastores are now split into Compatible & Incompatible. The Celerra_NFS datastore is the only datastore which meets the GOLD Profile requirements – i.e. it is the only datastore that has our user-defined storage capability associated with it.

Page 28: vSphere

Storage Capabilities & VM Storage Profiles

• Storage capabilities are surfaced by VASA or user-defined

• A VM Storage Profile references storage capabilities

• The VM Storage Profile is associated with a VM, whose placement is then reported as Compliant or Not Compliant

Page 29: vSphere

Software FCoE Adapters

• A software FCoE adapter is software that performs some of the FCoE processing.

• It can be used with a number of NICs that support partial FCoE offload.

• Unlike the hardware FCoE adapter, the software adapter must be activated, similar to software iSCSI.
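
Activation from the ESXi shell might look like this (vmnic2 is illustrative and must be a NIC with partial FCoE offload):

# list FCoE-capable NICs, then activate the software FCoE adapter on one of them
esxcli fcoe nic list
esxcli fcoe nic discover --nic-name=vmnic2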

Page 30: vSphere

Storage vMotion

(Diagram: Storage vMotion architecture; a mirror driver, sitting below the guest OS at the VMM/guest level in the VMkernel, mirrors writes to both source and destination while a userworld datamover copies the disk.)

Page 31: vSphere

Storage DRS

Storage DRS provides the following:

1. Initial placement of VMs and VMDKs based on available space and I/O capacity.

2. Load balancing between datastores in a datastore cluster via Storage vMotion, based on storage space utilization.

3. Load balancing via Storage vMotion based on I/O metrics, i.e. latency.

Page 32: vSphere

Datastore Cluster

• An integral part of SDRS is to create a group of datastores called a datastore cluster.

• A datastore cluster without Storage DRS is simply a group of datastores.

• A datastore cluster with Storage DRS is a load-balancing domain, similar to a DRS cluster.

• Without SDRS, a datastore cluster is just a datastore folder; it is the functionality provided by SDRS that makes it more than a folder.

(Diagram: four 500GB datastores aggregated into a 2TB datastore cluster.)

Page 33: vSphere

Storage DRS Operations - Thresholds

Page 34: vSphere

Storage DRS Operations

VMDK affinity

• Keep a virtual machine's VMDKs together on the same datastore

• Maximizes VM availability when all disks are needed in order to run

• On by default for all VMs

VMDK anti-affinity

• Keep a VM's VMDKs on different datastores

• Useful for separating the log and data disks of database VMs

• Can select all or a subset of a VM's disks

VM anti-affinity

• Keep VMs on different datastores

• Similar to DRS anti-affinity rules

• Maximizes the availability of a set of redundant VMs

Page 35: vSphere

So what does it look like? Provisioning…

Page 36: vSphere

So what does it look like? Load Balancing

• It shows utilization “before” and “after”

• There is always the option to override the recommendations

Page 37: vSphere

vSphere Storage Appliance (VSA)

Page 38: vSphere

Introduction

• Each ESXi server has a VSA deployed to it as a virtual machine.

• The appliances use the available space on the local disk(s) of the ESXi servers and present one replicated NFS volume per ESXi server. This replication of storage makes the VSA very resilient to failures.

(Diagram: three ESXi hosts, each running a VSA that exports an NFS datastore, managed through VSA Manager in the vSphere Client.)

Page 39: vSphere

(Diagram: a VSA cluster with 2 members, managed by the VSA Manager and VSA Cluster Service in vCenter Server; host 1 serves VSA Datastore 1 from Volume 1 and holds a replica of Volume 2, while host 2 serves VSA Datastore 2 from Volume 2 and holds a replica of Volume 1.)

Page 40: vSphere

(Diagram: a VSA cluster with 3 members, managed by the VSA Manager in vCenter Server; each host serves one VSA datastore from its primary volume and holds the replica of a neighbor's volume, e.g. Volume 1 with a replica of Volume 3, Volume 2 with a replica of Volume 1, and Volume 3 with a replica of Volume 2.)

Page 41: vSphere

Simplified UI for VSA Cluster Configuration

1. Introduction

2. Datacenter selection

3. ESXi host selection

4. IP address assignment

Page 42: vSphere

VSA Cluster Recovery

• In the event of a vCenter Server loss, re-install the VSA plug-in and choose to recover the VSA cluster.

Page 43: vSphere

vSphere Storage Appliance – Licensing

Shared storage capabilities, without the cost and complexity.

Licensing

• vSphere Storage Appliance is licensed on a per-instance basis (like vCenter Server)

• Each VSA instance supports up to 3 nodes

• At least two nodes need to be part of a VSA deployment

Pricing

• vSphere Storage Appliance: $5,995 list price

• vSphere Storage Appliance is available at 40% off when purchased with vSphere Essentials Plus: $4,495 (Essentials Plus) + $3,500 (VSA at 40% off) = $7,995 list price

Page 44: vSphere

NETWORK

Page 45: vSphere

LLDP Neighbour Info – vSphere side

Sample output using LLDPD Utility

Page 46: vSphere

NetFlow

• NetFlow is a networking protocol that collects IP traffic information as records and sends them to third-party collectors such as CA NetQoS, NetScout, etc.

• The collector/analyzer reports on information such as:

– The current top flows consuming the most bandwidth

– Which flows are behaving irregularly

– The number of bytes a particular flow has sent and received in the past 24 hours

(Diagram: the host's vDS exports a NetFlow session for the traffic of VM A and VM B to the collector, through the physical switch trunk.)

Page 47: vSphere

Port Mirror

(Diagram: four port-mirroring scenarios on the vDS; the mirror flow can be taken at the ingress or egress of a source port, and the destination can be another port on the same vDS or an external system; the legend distinguishes intra-VM traffic, inter-VM traffic, and the mirror flow.)

Page 48: vSphere

Network I/O Control (NETIOC)

(Diagram: NETIOC on the vNetwork Distributed Switch; the server admin assigns each traffic type - Mgmt, NFS, iSCSI, vMotion, FT, HBR, and VM traffic, including per-tenant pools such as the "Coke" and "Pepsi" VMs - shares, an optional limit in Mbps, and an optional 802.1p tag, enforced through the teaming policy.)

Page 49: vSphere

802.1p Tag for Resource Pools

• The vSphere infrastructure does not provide QoS based on these tags.

• The vDS simply tags packets according to the resource pool setting; it is up to the physical switch to understand the tag and act on it.

Page 50: vSphere

High Availability

Page 51: vSphere

HA

The vSphere HA feature gives organizations the ability to run their critical business applications with confidence. Enhancements allow:

• A solid, scalable foundation upon which to build toward the cloud

• Ease of management

• Ease of troubleshooting

• Increased communication mechanisms

(Diagram: a resource pool of ESX/ESXi hosts; VMs from a failed server are restarted on the operating servers.)

Page 52: vSphere

vSphere HA Primary Components

• Every host runs an agent

– Referred to as 'FDM', or Fault Domain Manager

– One of the agents within the cluster is chosen to assume the role of Master

– All other agents assume the role of Slaves

• There is no more Primary/Secondary concept with vSphere HA

– There is only one Master per cluster during normal operations

Page 53: vSphere

Storage Level Communications

• One of the most exciting new features of vSphere HA is its ability to use the storage subsystem for communication.

• The datastores used for this are referred to as 'heartbeat datastores'.

• Heartbeat datastores are used as a communication channel only when the management network is lost, such as in the case of isolation or network partitioning.
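
By default vCenter picks two heartbeat datastores per host; a hedged PowerCLI sketch of raising that with the das.heartbeatDsPerHost advanced option (the cluster name is illustrative; valid values are 2-5):

# set an HA advanced option on the cluster
New-AdvancedSetting -Entity (Get-Cluster "Prod") -Type ClusterHA -Name "das.heartbeatDsPerHost" -Value 3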

Page 54: vSphere

Data Recovery

Page 55: vSphere

vDR: Deduplication Performance Improvements

Overall improvements

1. A new compression algorithm speeds up the compressing of data

2. More efficient I/O path when accessing slab files

3. Group transactions together with their parent (i.e. daily backups of the same VMs stored in the same slab file)

Integrity check improvements

1. Periodic checkpoints allow suspending and resuming the IC operation

2. Group similar transactions together so they can be processed in bulk

3. Additional tweaking of IC options via the datarecovery.ini file (for example, the day you want the full integrity check to run, and its frequency per month)

Page 56: vSphere

Email Reports – Sample

Good backup – no errors

Page 57: vSphere

Supported Environment

• VMware vCenter Server 4.1 Update 1 and later

• VMware vSphere 4.0 Update 3 and later

Page 58: vSphere

vDR: Destination Maintenance

Allows separation of the backup and maintenance windows. Some use cases:

1) Delay the start of integrity checks so backups complete as expected

2) Ensure no activity on the dedupe store so files can be safely copied off to alternate media

Page 59: vSphere

Ability To Suspend Backup Jobs

• Backup jobs can be suspended individually

• Right-click a backup job and select Suspend Future Tasks

• Currently running tasks are not affected

Page 60: vSphere

New datarecovery.ini Options

• FullIntegrityCheckInterval – the number of days between automated full integrity checks (1-30; default is 7 days)

• FullIntegrityCheckDay – the day of the week the automated full integrity check is run (1=Sunday, 2=Monday, etc.)

• SerializeHotadd – disables parallel SCSI hot-add operations and returns hot-add behavior to the VDR 1.2 level (0-1; default is 0)

• BackupUnusedData – excludes backups of Windows and Linux swap partitions (0-1; default is 0)
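
Put together in datarecovery.ini, the values above might look like this (a sketch; the [Options] section header and comment style are assumptions, so verify against the file shipped on your appliance):

[Options]
# run the automated full integrity check every 7 days, on Sundays
FullIntegrityCheckInterval=7
FullIntegrityCheckDay=1
# keep parallel hot-add enabled and include swap partitions
SerializeHotadd=0
BackupUnusedData=0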

Page 61: vSphere

Auto Deploy

Page 62: vSphere

vSphere 5.0 – Auto Deploy

Overview

• Deploy and patch vSphere hosts in minutes using a new "on the fly" model

• Coordination with vSphere Host Profiles

Benefits

• Rapid deployment/recovery/patching of hosts

• Centralized host and image management

• Reduce manual deployment and patch processes

• No boot disks

Target audience

• Customers with large vSphere deployments

• High host refresh rates

(Diagram: vCenter Server with Auto Deploy provisions vSphere hosts from Image Profiles and Host Profiles.)

Page 63: vSphere

Composition of an ESXi Image

• Core hypervisor

• Drivers

• CIM providers

• Plug-in components

Page 64: vSphere

Building an Image

(Diagram: on a Windows host with PowerCLI and the Image Builder snap-in, Image Builder pulls ESXi VIBs, driver VIBs, and OEM VIBs from depots, combines them into an Image Profile, and generates a new image - either a PXE-bootable image or an ISO image.)
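
In PowerCLI, the flow above looks roughly like this (the depot URL, profile name, and VIB name are illustrative):

# point Image Builder at a software depot and clone a base profile
Add-EsxSoftwareDepot "https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml"
$profile = New-EsxImageProfile -CloneProfile "ESXi-5.0.0-469512-standard" -Name "ESXi50-Custom"
# add a third-party driver VIB to the new profile
Add-EsxSoftwarePackage -ImageProfile $profile -SoftwarePackage "oem-net-driver"
# export either as an ISO or as an offline bundle for PXE/Auto Deploy use
Export-EsxImageProfile -ImageProfile $profile -ExportToIso -FilePath "C:\ESXi50-Custom.iso"
Export-EsxImageProfile -ImageProfile $profile -ExportToBundle -FilePath "C:\ESXi50-Custom.zip"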

Page 65: vSphere

Auto Deploy Example – Initial Boot

(Diagram: a new host is provisioned by the Auto Deploy server - a rules engine plus a "waiter" - using Image Profiles built from ESXi, driver, and OEM VIBs in depots, and Host Profiles stored in vCenter Server; PXE infrastructure supplies DHCP and TFTP.)

Page 66: vSphere

Auto Deploy Example – Initial Boot

1) PXE-boot the server: the host sends a DHCP request and downloads the gPXE image over TFTP.

Page 67: vSphere

Auto Deploy Example – Initial Boot

2) The host contacts the Auto Deploy server.

Page 68: vSphere

Auto Deploy Example – Initial Boot

3) The rules engine determines the Image Profile, Host Profile, and cluster: here Image Profile X, Host Profile 1, Cluster B.

Page 69: vSphere

Auto Deploy Example – Initial Boot

4) Push the image to the host and apply the Host Profile; the Image Profile and Host Profile are cached on the host.

Page 70: vSphere

Auto Deploy Example – Initial Boot

5) Place the host into the cluster.
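
The mapping in steps 3-5 is driven by rules; a hedged PowerCLI sketch (the image profile, host profile, cluster, and IP range are illustrative):

# match hosts by IPv4 range to an image profile, a host profile, and a cluster
New-DeployRule -Name "ClusterB-Rule" -Item "ESXi50-Custom", "Host Profile 1", "Cluster B" -Pattern "ipv4=192.168.1.10-192.168.1.50"
# activate the rule by adding it to the working rule set
Add-DeployRule -DeployRule "ClusterB-Rule"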

Page 71: vSphere

What is Auto Deploy

• No boot disk? Where does the host's state go?

– Platform composition (ESXi base, drivers, CIM providers, …) -> Image Profile

– Configuration (networking, storage, date/time, firewall, admin password, …) -> Host Profile

– Running state (VM inventory, HA state, license, DPM configuration) -> vCenter Server

– Event recording (log files, core dump) -> add-on components

Page 72: vSphere

Auto Deploy Components

Component: PXE boot infrastructure

• Sub-components: DHCP server, TFTP server

• Notes: set up independently; the gPXE file comes from vCenter; can use the Auto Deploy appliance

Component: Auto Deploy server

• Sub-components: rules engine, PowerCLI snap-in, web server

• Notes: build/manage rules; match a server to an Image Profile and Host Profile; deploy the server

Component: Image Builder

• Sub-components: Image Profiles, PowerCLI snap-in

• Notes: combine the ESXi image with 3rd-party VIBs to create custom Image Profiles

Component: vCenter Server

• Sub-components: stores rules, Host Profiles, answer files

• Notes: provides the store for rules; host configs are saved in Host Profiles; custom host settings are saved in answer files

Page 73: vSphere

Training

Page 74: vSphere

vSphere training - ARROW ECS

vSphere 5: What's New (2 days) - PROMOTIONAL registration until the end of the year:

• Two-engineer promotion: 255,000 HUF per pair of students instead of 338,000 HUF

• VCP upgrade: What's New + VCP exam voucher for 189,000 HUF

Course price: 169,000 HUF

Dates: Oct 3, Oct 27, Nov 24. Contact: [email protected]

Page 75: vSphere

vSphere training - ARROW ECS

VMware vSphere: Install, Manage, Configure [v5] (4 days). List price: 290,000 HUF.

Free VCP voucher for Webex attendees! Coupon code: webex

Course dates: Oct 17, Nov 14. Contact: [email protected]

Page 76: vSphere

Q/A

Page 77: vSphere

SRM v5

Page 78: vSphere

vSphere Replication Architecture

• Tightly integrated with SRM, vCenter, and ESX

(Diagram: a protected site and a recovery site, each running Site Recovery Manager, a vSphere Replication Management Server, and vCenter Server; a VSR agent on each ESX/ESXi host at the protected site replicates to a vSphere Replication Server at the recovery site; any storage supported by vSphere can be used at either site.)

Page 79: vSphere

Replication UI

• Select VMs to replicate from within the vSphere Client via right-click options

• This can be done on one VM, or on multiple VMs simultaneously

Page 80: vSphere

vSphere Replication 1.0 Limitations

• Focused on the virtual disks of powered-on VMs:

– ISOs and floppy images are not replicated

– Powered-off/suspended VMs are not replicated

– Non-critical files are not replicated (e.g. logs, stats, swap, dumps)

• vSR works at the virtual device layer:

– Snapshots work with vSR; the snapshot is replicated, but the VM is recovered with its snapshots collapsed

– Physical RDMs are not supported

• FT, linked clones, and VM templates are not supported with vSR

• Automated failback of vSR-protected VMs will arrive later, but will be supported in the future

• Virtual hardware version 7 or later is required in the VM

Page 81: vSphere

vSphere Replication vs Storage Replication

vSphere Replication (provider: VMware)

• Cost: low-end storage supported; no additional replication software

• Management: per-VM granularity; managed directly in vCenter

• Performance: 15-minute RPOs; scales to 500 VMs; file-level consistency; no automated failback, FT, linked clones, or physical RDMs

Storage-based replication

• Cost: higher-end replicating storage; additional replication software

• Management: LUN-to-VM layout; storage team coordination

• Performance: synchronous replication; high data volumes; application consistency possible

Page 82: vSphere

Planned Migrations = Consistency & No Data Loss

Overview

• Two workflows can be applied to recovery plans: DR failover, or planned migration

• Planned migration ensures application consistency and no data loss during migration:

– Graceful shutdown of production VMs in an application-consistent state

– Data sync to complete replication of the VMs

– Recovery of fully replicated VMs

Benefits

• Better support for planned migrations

• No loss of data during the migration process

• Recover ‘application-consistent’ VMs at the recovery site

(Diagram: planned migration from Site A to Site B - 1) shut down production VMs; 2) sync data, stop replication, and present LUNs to vSphere; 3) recover app-consistent VMs.)

Page 83: vSphere

Reprotect

After you use planned migration (or a DR event) to migrate to your recovery site, you must reprotect to enable failback.

Page 84: vSphere

Automated Failback

Overview

• Re-protect VMs from Site B to Site A

– Reverse replication

– Apply reverse resource mappings

• Automate failover from Site B to Site A

– Reverse the original recovery plan

• Restrictions:

– Does not apply if Site A has undergone major changes / been rebuilt

– Not available with vSphere Replication

Benefits

• Simplify the failback process

• Automate replication management

• Eliminate the need to set up a new recovery plan

• Streamline frequent bi-directional migrations

(Diagram: reverse replication from Site B to Site A, reversing the original recovery plan.)

Page 85: vSphere

SRM Scalability

• Protected virtual machines, total: 1,000 (not enforced)

• Protected virtual machines in a single protection group: 500 (not enforced)

• Protection groups: 250 (not enforced)

• Simultaneously running recovery plans: 30 (not enforced)

• vSphere Replication protected virtual machines: 500 (not enforced)

Page 86: vSphere

Q/A