
VSPEX Proven Infrastructure

EMC VSPEX

Abstract

This document describes the EMC VSPEX End-User Computing solution with Citrix XenDesktop and EMC VNX for up to 2,000 virtual desktops.

January 2013

EMC® VSPEX™ END-USER COMPUTING Citrix® XenDesktop™ 5.6 and VMware vSphere® 5.1 for up to 2,000 Virtual Desktops Enabled by EMC VNX® and EMC Next-Generation Backup


Copyright © 2013 EMC Corporation. All rights reserved. Published in the USA.

Published January 2013

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC Online Support website.

Citrix XenDesktop 5.6 and VMware vSphere 5.1 for up to 2,000 Virtual Desktops Enabled by EMC VNX and EMC Next-Generation Backup

Part Number H11334.1


Contents

Chapter 1 Executive Summary
    Introduction
    Target audience
    Document purpose
    Business needs

Chapter 2 Solution Overview
    Solution overview
        Desktop broker
        Virtualization
        Storage
        Network
        Compute

Chapter 3 Solution Technology Overview
    Solution technology
    Summary of key components
    Desktop broker
        Overview
        Citrix XenDesktop 5.6
        Machine Creation Services
        Citrix Personal vDisk
        Citrix Profile Manager 4.1
    Virtualization
        Overview
        VMware vSphere 5.1
        VMware vCenter
        VMware vSphere High Availability
        EMC Virtual Storage Integrator for VMware
        VNX VMware vStorage API for Array Integration support
    Compute
    Network
    Storage
        Overview
        EMC VNX series
    Backup and recovery
        Overview
        EMC Avamar
    Security
        RSA SecurID two-factor authentication
        SecurID authentication in the VSPEX End-User Computing for Citrix XenDesktop environment
        Required components
        Compute, memory and storage resources

Chapter 4 Solution Architectural Overview
    Solution overview
    Solution architecture
        Architecture for up to 500 virtual desktops
        Architecture for up to 1,000 virtual desktops
        Architecture for up to 2,000 virtual desktops
        Key components
        Hardware resources
        Software resources
        Sizing for validated configuration
    Server configuration guidelines
        Overview
        VMware vSphere memory virtualization for VSPEX
        Memory configuration guidelines
    Network configuration guidelines
        Overview
        VLAN
        Enable jumbo frames
        Link aggregation
    Storage configuration guidelines
        Overview
        VMware vSphere storage virtualization for VSPEX
        Storage layout for 500 virtual desktops
        Storage layout for 1,000 virtual desktops
        Storage layout for 2,000 virtual desktops
    High availability and failover
        Introduction
        Virtualization layer
        Compute layer
        Network layer
        Storage layer
    Validation test profile
        Profile characteristics
    Backup environment configuration guidelines
        Overview
        Backup characteristics
        Backup layout
    Sizing guidelines
    Reference workload
        Defining the reference workload
    Applying the reference workload
    Implementing the reference architectures
        Resource types
        CPU resources
        Memory resources
        Network resources
        Storage resources
        Implementation summary
    Quick assessment
        CPU requirements
        Memory requirements
        Storage performance requirements
        Storage capacity requirements
        Determining equivalent reference virtual desktops
        Fine-tuning hardware resources

Chapter 5 VSPEX Configuration Guidelines
    Overview
    Pre-deployment tasks
        Overview
        Deployment prerequisites
    Customer configuration data
    Prepare switches, connect network, and configure switches
        Overview
        Prepare network switches
        Configure infrastructure network
        Configure VLANs
        Complete network cabling
    Prepare and configure storage array
        VNX configuration
        Provision core data storage
        Provision optional storage for user data
        Provision optional storage for infrastructure virtual machines
    Install and configure VMware vSphere hosts
        Overview
        Install ESXi
        Configure ESXi networking
        Jumbo frames
        Connect VMware datastores
        Plan virtual machine memory allocations
    Install and configure SQL Server database
        Overview
        Create a virtual machine for Microsoft SQL Server
        Install Microsoft Windows on the virtual machine
        Install SQL Server
        Configure database for VMware vCenter
        Configure database for VMware Update Manager
    Install and configure VMware vCenter Server
        Overview
        Create the vCenter host virtual machine
        Install vCenter guest operating system
        Create vCenter ODBC connections
        Install vCenter Server
        Apply vSphere license keys
        Deploy the VNX VAAI for NFS plug-in (NFS variant)
        Install the EMC VSI Unified Storage Management feature
    Install and configure XenDesktop controller
        Overview
        Install server-side components of XenDesktop
        Configure a site
        Add a second controller
        Install Desktop Studio
        Prepare master virtual machine
        Provision virtual desktops
    Summary

Chapter 6 Validating the Solution
    Overview
    Post-install checklist
    Deploy and test a single virtual desktop
    Verify the redundancy of the solution components

Appendix A Bills of Materials
    Bill of materials for 500 virtual desktops
    Bill of materials for 1,000 virtual desktops
    Bill of materials for 2,000 virtual desktops

Appendix B Customer Configuration Data Sheet
    Customer configuration data sheets

Appendix C References
    References
        EMC documentation
        Other documentation

Appendix D About VSPEX
    About VSPEX

Page 10: EMC VSPEX End User Computing

Citrix XenDesktop 5.6 and VMware vSphere 5.1 for up to 2,000 Virtual Desktops Enabled by EMC VNX and EMC Next-Generation Backup

10

Figures

Figure 1. Solution components
Figure 2. Compute layer flexibility
Figure 3. Example of highly-available network design
Figure 4. Authentication control flow for XenDesktop access requests originating on an external network
Figure 5. Authentication control flow for XenDesktop requests originating on local network
Figure 6. Logical architecture: VSPEX End-User Computing for Citrix XenDesktop with RSA
Figure 7. Logical architecture for 500 virtual desktops – NFS variant
Figure 8. Logical architecture for 500 virtual desktops – FC variant
Figure 9. Logical architecture for 1,000 virtual desktops – NFS variant
Figure 10. Logical architecture for 1,000 virtual desktops – FC variant
Figure 11. Logical architecture for 2,000 virtual desktops – NFS variant
Figure 12. Logical architecture for 2,000 virtual desktops – FC variant
Figure 13. Hypervisor memory consumption
Figure 14. Required networks
Figure 15. VMware virtual disk types
Figure 16. Core storage layout for 500 virtual desktops
Figure 17. Optional storage layout for 500 virtual desktops
Figure 18. Core storage layout for 1,000 virtual desktops
Figure 19. Optional storage layout for 1,000 virtual desktops
Figure 20. Core storage layout for 2,000 virtual desktops
Figure 21. Optional storage layout for 2,000 virtual desktops
Figure 22. High availability at the virtualization layer
Figure 23. Redundant power supplies
Figure 24. Network layer high availability
Figure 25. VNX series high availability
Figure 26. Sample Ethernet network architecture for 500 and 1,000 virtual desktops
Figure 27. Sample Ethernet network architecture for 2,000 virtual desktops
Figure 28. Set Direct Writes Enabled checkbox
Figure 29. View all Data Mover parameters
Figure 30. Set nthread parameter
Figure 31. Storage System Properties dialog box
Figure 32. Create FAST Cache dialog box
Figure 33. Advanced tab in the Create Storage Pool dialog box
Figure 34. Advanced tab in the Storage Pool Properties dialog box
Figure 35. Storage Pool Properties window
Figure 36. Manage Auto-Tiering window
Figure 37. LUN Properties window
Figure 38. Virtual machine memory settings


Tables

Table 1. VNX customer benefits
Table 2. Minimum hardware resources to support SecurID
Table 3. Solution hardware
Table 4. Solution software
Table 5. Configurations that support this solution
Table 6. Server hardware
Table 7. Storage hardware
Table 8. Validated environment profile
Table 9. Backup profile characteristics
Table 10. Virtual desktop characteristics
Table 11. Blank worksheet row
Table 12. Reference virtual desktop resources
Table 13. Example worksheet row
Table 14. Example applications
Table 15. Server resource component totals
Table 16. Blank customer worksheet
Table 17. Deployment process overview
Table 18. Tasks for pre-deployment
Table 19. Deployment prerequisites checklist
Table 20. Tasks for switch and network configuration
Table 21. Tasks for storage configuration
Table 22. Tasks for server installation
Table 23. Tasks for SQL Server database setup
Table 24. Tasks for vCenter configuration
Table 25. Tasks for XenDesktop controller setup
Table 26. Tasks for testing the installation
Table 27. List of components used in the VSPEX solution for 500 virtual desktops
Table 28. List of components used in the VSPEX solution for 1,000 virtual desktops
Table 29. List of components used in the VSPEX solution for 2,000 virtual desktops
Table 30. Common server information
Table 31. ESXi server information
Table 32. Array information
Table 33. Network infrastructure information
Table 34. VLAN information
Table 35. Service accounts


Chapter 1 Executive Summary

This chapter presents the following topics:

Introduction............................................................................................... 16

Target audience ......................................................................................... 16

Document purpose .................................................................................... 16

Business needs ......................................................................................... 17


Introduction

VSPEX™ validated and modular architectures are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, compute, and networking layers. VSPEX eliminates server virtualization planning and configuration burdens. When you are embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, more choice, greater efficiency, and lower risk.

This document is intended to be a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; customers are free to select the server and networking hardware of their choice that meet or exceed the stated minimums.

Target audience

The reader of this document is expected to have the necessary training and background to install and configure an end-user computing solution based on Citrix® XenDesktop™ with VMware vSphere® as a hypervisor, EMC VNX® series storage systems, and associated infrastructure as required by this implementation. External references are provided where applicable, and EMC recommends that the reader be familiar with these documents.

Readers are also expected to be familiar with the infrastructure and database security policies of the customer installation.

Individuals focused on selling and sizing a VSPEX End-User Computing solution for Citrix XenDesktop should pay particular attention to the first four chapters of this document. Implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose

This document presents an initial introduction to the VSPEX End-User Computing architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy the system.

The VSPEX End-User Computing architecture provides the customer with a modern system capable of hosting a large number of virtual desktops at a consistent performance level. This solution runs on the VMware vSphere virtualization layer, backed by the highly available VNX storage family and the Citrix XenDesktop desktop broker. The compute and network components, while vendor-definable, are laid out to be redundant and sufficiently powerful to handle the processing and data needs of a large virtual machine environment.

The 500, 1,000, and 2,000 virtual desktop environments discussed are based on a defined desktop workload. While not every virtual desktop has the same requirements, this document contains methods and guidance to adjust your system to be cost-effective when deployed. A smaller environment of 250 virtual desktops based on the VNXe3300 is described in EMC VSPEX End-User Computing Citrix XenDesktop 5.6 with VMware vSphere 5.1 for up to 250 Virtual Desktops.

An end-user computing or virtual desktop architecture is a complex system offering. This document facilitates its setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. Validation tests are provided to ensure that your system is up and running properly after the last component has been installed. Following the guidance in this document ensures an efficient and painless desktop deployment.

Business needs

VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, server, and networking layers. VSPEX solutions accelerate your IT transformation by enabling faster deployments, choice, efficiency, and lower risk.

Business applications are moving into the consolidated compute, network, and storage environment. EMC VSPEX End-User Computing using Citrix reduces the complexity of configuring every component of a traditional deployment model. The complexity of integration management is reduced while maintaining the application design and implementation options. Administration is unified, while process separation can be adequately controlled and monitored. The following are the business needs addressed by the VSPEX End-User Computing solution for Citrix architecture:

• Provides an end-to-end virtualization solution to utilize the capabilities of the unified infrastructure components

• Provides a solution for efficiently virtualizing 500, 1,000, or 2,000 virtual desktops for varied customer use cases

• Provides a reliable, flexible, and scalable reference design


Chapter 2 Solution Overview

This chapter presents the following topic:

Solution overview


Solution overview

The EMC VSPEX End-User Computing solution for Citrix XenDesktop on VMware vSphere 5.1 provides a complete system architecture capable of supporting up to 2,000 virtual desktops with a redundant server/network topology and highly available storage. The core components that make up this particular solution are desktop broker, virtualization, storage, network, and compute.

Desktop broker

XenDesktop is the virtual desktop solution from Citrix that allows virtual desktops to be run on the VMware vSphere virtualization environment. It allows for the centralization of desktop management and provides increased control for IT organizations. XenDesktop allows end users to connect to their desktops from multiple devices across a network connection.

Virtualization

VMware vSphere is the leading virtualization platform in the industry, providing flexibility and cost savings to end users by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core VMware vSphere components are the VMware vSphere hypervisor and the VMware vCenter control server for system management.

The VMware hypervisor runs on a dedicated server and allows multiple operating systems to execute on the system simultaneously as virtual machines. These hypervisor systems can be connected to operate in a clustered configuration, which is managed as a larger resource pool through the vCenter product and allows dynamic allocation of CPU, memory, and storage across the cluster.

Features like vMotion, which allows a virtual machine to move among different servers with no disruption to the operating system, and Distributed Resource Scheduler (DRS), which performs vMotion automatically to balance load, make vSphere a solid business choice.

With the release of vSphere 5.1, a VMware virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual RAM.
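Clustering and the HA/DRS behavior described above are normally configured through the vSphere Client, but the same setup can be scripted against the vCenter API. The following is a minimal sketch using the pyVmomi Python bindings, which are not part of this solution's documented tooling; the vCenter address, credentials, datacenter name, and cluster name are illustrative assumptions.

# Minimal pyVmomi sketch: create a vSphere cluster with HA and DRS enabled.
# The vCenter address, credentials, and object names are assumptions for
# illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab shortcut; use valid certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="changeme", sslContext=context)
try:
    content = si.RetrieveContent()
    # Locate the target datacenter (assumed name "VSPEX-DC").
    datacenter = next(dc for dc in content.rootFolder.childEntity
                      if isinstance(dc, vim.Datacenter) and dc.name == "VSPEX-DC")

    spec = vim.cluster.ConfigSpecEx()
    # vSphere HA: restart virtual machines after a host failure, and reserve
    # capacity for one failed host (the N+1 approach recommended for the
    # compute layer).
    spec.dasConfig = vim.cluster.DasConfigInfo(
        enabled=True,
        admissionControlEnabled=True,
        admissionControlPolicy=vim.cluster.FailoverLevelAdmissionControlPolicy(
            failoverLevel=1))
    # DRS: balance load across the cluster with automatic vMotion.
    spec.drsConfig = vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated)

    datacenter.hostFolder.CreateClusterEx(name="EUC-Cluster", spec=spec)
finally:
    Disconnect(si)

Hosts added to a cluster created this way pool their CPU, memory, and storage, and vCenter can place and migrate desktop virtual machines across them automatically.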

Storage

The EMC VNX storage family is the number one shared storage platform in the industry. Its ability to provide both file and block access with a broad feature set makes it an ideal choice for any end-user computing implementation.

The VNX storage includes the following components, which are sized for the stated architecture workloads:

• Host adapter ports – Provide host connectivity via fabric into the array

• Data Movers – Front-end components that provide file services to hosts (optional; required only when providing CIFS/SMB or NFS services)

• Storage Processors – Compute components of the storage array, responsible for all aspects of data moving into, out of, and between arrays

• Disk Array Enclosures – Contain the actual disk drives that record the host/application data



The End-User Computing solutions for Citrix XenDesktop discussed in this document are based on the VNX5300™ (500 and 1,000 desktops) and VNX5500™ (2,000 desktops) storage arrays. The VNX5300 can support a maximum of 125 drives, while the VNX5500 can host up to 250 drives.

The EMC VNX series supports a wide range of business-class features ideal for the end-user computing environment, including:

• Fully Automated Storage Tiering for Virtual Pools (FAST VP)

• FAST Cache

• Data deduplication

• Thin provisioning

• Replication

• Snapshots/checkpoints

• File-level retention

• Quota management

Network

VSPEX allows the flexibility of designing and implementing the vendor's choice of network components. The infrastructure must conform to the following attributes:

• Redundant network links for the hosts, switches, and storage

• Support for link aggregation

• Traffic isolation based on industry-accepted best practices

Compute

VSPEX allows the flexibility of designing and implementing the vendor's choice of server components. The infrastructure must conform to the following attributes:

• Sufficient CPU cores and RAM to support the required number and types of virtual machines

• Sufficient network connections to enable redundant connectivity to the system switches

• Excess capacity to support failover after a server failure in the environment



Chapter 3 Solution Technology Overview

This chapter presents the following topics:

Solution technology

Summary of key components

Desktop broker

Virtualization

Compute

Network

Storage

Backup and recovery

Security


Solution technology

This VSPEX solution uses EMC VNX5300 (for up to 1,000 virtual desktops) or VNX5500 (for up to 2,000 virtual desktops) storage arrays and VMware vSphere 5.1 to provide the storage and compute resources for a Citrix XenDesktop 5.6 environment of Windows 7 virtual desktops provisioned by Machine Creation Services (MCS). Figure 1 shows the components of the solution.

Figure 1. Solution components

In particular, planning and designing the storage infrastructure for the Citrix XenDesktop environment is a critical step because the shared storage must be able to absorb large bursts of input/output (I/O) that occur over the course of a workday. These bursts can lead to periods of erratic and unpredictable virtual desktop performance. Users may adapt to slow performance, but unpredictable performance causes frustration and reduces efficiency.

To provide predictable performance for end-user computing, the storage system must be able to handle peak I/O load from the clients while keeping response time to a minimum. Designing for this workload involves the deployment of many disks to handle brief periods of extreme I/O pressure, which is expensive to implement. This solution uses EMC VNX FAST Cache to reduce the number of disks required.
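To make that trade-off concrete, the arithmetic below estimates drive counts for a desktop I/O storm with and without a flash tier. This is an illustrative sketch only: the per-desktop IOPS, drive rating, and cache hit rate are assumed values, not figures from this solution's validated workload.

# Illustrative sizing arithmetic. The per-desktop IOPS, drive rating, and
# FAST Cache hit rate below are assumptions, not validated VSPEX figures.
import math

desktops = 2000
peak_iops_per_desktop = 10   # assumed per-desktop load during an I/O storm
sas_drive_iops = 180         # assumed rating for one 15k rpm SAS drive
flash_hit_rate = 0.70        # assumed share of I/O absorbed by FAST Cache

peak_iops = desktops * peak_iops_per_desktop

# Without a flash tier, every peak I/O lands on rotating disks.
drives_without_cache = math.ceil(peak_iops / sas_drive_iops)

# With FAST Cache, only the cache misses reach the rotating disks.
drives_behind_cache = math.ceil(peak_iops * (1 - flash_hit_rate) / sas_drive_iops)

print(f"Peak load: {peak_iops} IOPS")
print(f"SAS drives without FAST Cache: {drives_without_cache}")  # 112
print(f"SAS drives behind FAST Cache:  {drives_behind_cache}")   # 34

Under these assumed numbers, the flash tier cuts the rotating-disk requirement by more than two-thirds, which is the effect the validated storage layouts later in this document exploit.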


EMC next-generation backup enables protection of user data and end-user recoverability. This is accomplished by leveraging EMC Avamar® and its desktop client within the desktop image.

Summary of key components

This section describes the key components of this solution.

• Desktop broker

The desktop virtualization broker manages the provisioning, allocation, maintenance, and eventual removal of the virtual desktop images that are provided to users of the system. This software is critical to enable on-demand creation of desktop images, to allow maintenance to the image without affecting user productivity, and to prevent the environment from growing in an unconstrained way.

• Virtualization

The virtualization layer allows the physical implementation of resources to be decoupled from the applications that use them. In other words, the application’s view of the resources available to it is no longer directly tied to the hardware. This enables many key features in the end-user computing concept.

• Compute

The compute layer provides memory and processing resources for the virtualization layer software as well as the needs of the applications running in the infrastructure. The VSPEX program defines the minimum amount of compute layer resource required, but allows the customer to implement the requirements using any compute hardware that meets these requirements.

• Network

The network layer connects the users of the environment to the resources they need, as well as connecting the storage layer to the compute layer. The VSPEX program defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but allows the customer to implement the requirements using any network hardware that meets these requirements.

• Storage

The storage layer is a critical resource for the implementation of the end-user computing environment. Because of the way desktops are used, the storage layer must be able to absorb large bursts of transient activity without an undue impact on the user experience. This solution uses EMC VNX FAST Cache to handle this workload efficiently.

• Backup and recovery

The optional backup and recovery component of the solution provides data protection in the event that the data in the primary system is deleted, damaged, or otherwise becomes unusable.


• Security

Security components from RSA provide customers with additional options to control access to the environment and ensure that only authorized users are permitted to use the system.

Solution architecture provides details on all the components that make up the reference architecture.

Desktop broker

Overview

Desktop virtualization encapsulates and delivers the user desktop to remote client devices, which can be thin clients, zero clients, smartphones, or tablets. It allows subscribers from different locations to access virtual desktops hosted on centralized computing resources at remote data centers.

In this solution, Citrix XenDesktop is used to provision, manage, broker, and monitor the desktop virtualization environment.

Citrix XenDesktop 5.6

Citrix XenDesktop transforms Windows desktops into an on-demand service available to any user, on any device, anywhere. XenDesktop quickly and securely delivers any type of virtual desktop, or any type of Windows, web, or SaaS application, to all the latest PCs, Macs, tablets, smartphones, laptops, and thin clients—and does so with a high-definition HDX user experience.

Citrix FlexCast delivery technology enables IT to optimize the performance, security, and cost of virtual desktops for any type of user, including task workers, mobile workers, power users, and contractors. XenDesktop helps IT rapidly adapt to business initiatives by simplifying desktop delivery and enabling user self-service. The open, scalable, and proven architecture simplifies management, support, and integration.

Machine Creation Services

Machine Creation Services (MCS) is a provisioning mechanism introduced in XenDesktop 5.0. It is integrated with the XenDesktop management interface, Desktop Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle from a centralized point of management.

MCS allows several types of machines, including dedicated and pooled machines, to be managed within a catalog in Desktop Studio. Desktop customization is persistent for dedicated machines, while a pooled machine is required if a non-persistent desktop is appropriate.

In this solution, persistent virtual desktops running Windows 7 are provisioned using MCS.

Desktops provisioned using MCS share a common base image within a catalog. Because of this, the base image typically is accessed with sufficient frequency to naturally leverage EMC VNX FAST Cache, where frequently accessed data is promoted to flash drives to provide optimal I/O response time with fewer physical disks.

Citrix Personal vDisk

The Citrix Personal vDisk feature was introduced in Citrix XenDesktop 5.6. With Personal vDisk, users can preserve customization settings and user-installed applications in a pooled desktop. This capability is accomplished by redirecting the changes from the user's pooled virtual machine to a separate disk called Personal vDisk. During runtime, the content of the Personal vDisk is blended with the content from the base virtual machine to provide a unified experience to the end user. The Personal vDisk data is preserved during reboot/refresh operations.

Citrix Profile Manager 4.1

Citrix Profile Manager 4.1 preserves user profiles and dynamically synchronizes them with a remote profile repository. Citrix Profile Manager ensures that personal settings are applied to desktops and applications regardless of the user's login location or client device.

The combination of Citrix Profile Manager and pooled desktops provides the experience of a dedicated desktop while potentially minimizing the amount of storage required in an organization.

With Citrix Profile Manager, a user’s remote profile is downloaded dynamically when the user logs in to a Citrix XenDesktop. Profile Manager downloads user profile information only when the user needs it.

Virtualization

Overview

The virtualization layer is a key component of any end-user computing solution. It allows the application resource requirements to be decoupled from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and even allowing the physical capability of the system to change without affecting the hosted applications.

VMware vSphere 5.1

VMware vSphere 5.1 is used to build the virtualization layer for this solution. VMware vSphere 5.1 transforms a computer's physical resources by virtualizing the CPU, memory, storage, and network. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications just like physical computers.

High-availability features of VMware vSphere 5.1 such as vMotion and Storage vMotion enable seamless migration of virtual machines and stored files from one vSphere server to another with minimal or no performance impact. Coupled with vSphere Distributed Resource Scheduling (DRS) and Storage DRS, virtual machines have access to the appropriate resources at any point in time through load balancing of compute and storage resources.

VMware vCenter

VMware vCenter is a centralized management platform for the VMware virtual infrastructure. It provides administrators with a single interface that can be accessed from multiple devices for all aspects of monitoring, managing, and maintaining the virtual infrastructure.

VMware vCenter is also responsible for managing some of the more advanced features of the VMware virtual infrastructure like VMware vSphere High Availability and Distributed Resource Scheduling (DRS), along with vMotion and Update Manager.

VMware vSphere High Availability

The VMware vSphere High Availability feature allows the virtualization layer to restart virtual machines in various failure conditions automatically.


• If the virtual machine operating system has an error, the virtual machine can be restarted automatically on the same hardware.

• If the physical hardware has an error, the impacted virtual machines can be restarted automatically on other servers in the cluster.

Note For VMware vSphere High Availability to restart virtual machines on different hardware, those servers must have resources available. There are specific recommendations in the Compute section to enable this functionality.

VMware vSphere High Availability allows you to configure policies to determine which machines are restarted automatically and under what conditions these operations should be attempted.

EMC Virtual Storage Integrator for VMware

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the vSphere client that provides a single interface for managing EMC storage within the vSphere environment. Features can be added and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed by using the VSI Feature Manager. VSI provides a unified user experience, which allows new features to be introduced rapidly in response to changing customer requirements.

The following VSI features were used during the validation testing:

• Storage Viewer — Extends the vSphere client to facilitate the discovery and identification of EMC VNX storage devices that are allocated to VMware vSphere hosts and virtual machines. Storage Viewer presents the underlying storage details to the virtual datacenter administrator, merging the data of several different storage mapping tools into a few seamless vSphere client views.

• Unified Storage Management — Simplifies storage administration of the EMC VNX unified storage platform. It enables VMware administrators to provision new Network File System (NFS) datastores, Virtual Machine File System (VMFS) datastores, and Raw Device Mapping (RDM) volumes seamlessly within the vSphere client.

Refer to the product guides for EMC VSI for VMware vSphere, available on the EMC Online Support website, for more information.

VNX VMware vStorage API for Array Integration support

Hardware acceleration with VMware vStorage API for Array Integration (VAAI) is a storage enhancement in vSphere 5.1 that enables vSphere to offload specific storage operations to compatible storage hardware such as the VNX series platforms. With storage hardware assistance, vSphere performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.


Compute

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For these reasons, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a given number of servers with a specific set of requirements, VSPEX documents a number of processor cores and an amount of RAM that must be provided. This can be implemented with 2 servers or with 20, and still be considered the same VSPEX solution.

For example, let us assume that the compute layer requirements for a given implementation are 25 processor cores and 200 GB of RAM. One customer might want to use white-box servers containing 16 processor cores and 64 GB of RAM, while a second customer might choose a higher-end server with 20 processor cores and 144 GB of RAM.


Figure 2. Compute layer flexibility

The first customer needs four of the servers while the second customer needs two, as shown in Figure 2.

Note To enable high availability at the compute layer, each customer will need one additional server with sufficient capacity to provide a failover platform in the event of a hardware outage.
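The server counts in this example follow from simple ceiling arithmetic over the stated minimums. The short sketch below reproduces that calculation, including the optional failover unit from the note above; the function name and structure are illustrative, not part of any VSPEX sizing tool.

# Reproduces the worked example: 25 cores and 200 GB of RAM are required,
# and the server count must satisfy both minimums simultaneously.
import math

def servers_needed(req_cores, req_ram_gb, cores_per_server, ram_gb_per_server,
                   ha_spare=False):
    # Smallest number of identical servers covering both requirements,
    # optionally adding one failover unit as the note above recommends.
    by_cores = math.ceil(req_cores / cores_per_server)
    by_ram = math.ceil(req_ram_gb / ram_gb_per_server)
    return max(by_cores, by_ram) + (1 if ha_spare else 0)

print(servers_needed(25, 200, 16, 64))                  # customer one: 4 servers
print(servers_needed(25, 200, 20, 144))                 # customer two: 2 servers
print(servers_needed(25, 200, 16, 64, ha_spare=True))   # with HA spare: 5
print(servers_needed(25, 200, 20, 144, ha_spare=True))  # with HA spare: 3

Note that RAM, not cores, is the binding constraint in both example configurations; that is typical of desktop virtualization workloads.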

The following best practices should be observed in the compute layer:

It is a best practice to use a number of identical or, at least, compatible servers. VSPEX implements hypervisor-level high-availability technologies that may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.


If you are implementing hypervisor-layer high availability, then the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Implementing the high-availability features available in the virtualization layer is recommended to ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This allows you to implement minimal-downtime upgrades and tolerate single-unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be quite flexible to meet your specific needs. The key constraint is provision of sufficient processor cores and RAM per core to meet the needs of the target environment.

Network

The infrastructure network requires redundant network links for each vSphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth, and it is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution. An example of this kind of highly available network topology is depicted in Figure 3.

Note The example is for IP-based networks, but the same underlying principles of multiple connections and elimination of single points of failure also apply to Fibre Channel-based networks.


Figure 3. Example of highly-available network design

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

EMC unified storage platforms provide network high availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost, traffic fails over to the remaining ports, and all network traffic is distributed across the active links.


Storage

Overview

The storage layer is also a key component of any cloud infrastructure solution, providing storage efficiency, management flexibility, and reduced total cost of ownership. This VSPEX solution uses the EMC VNX series to provide virtualization at the storage layer.

EMC VNX series

The EMC VNX family is optimized for virtual applications, delivering industry-leading innovation and enterprise capabilities for file, block, and object storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today’s enterprises.

The VNX series is powered by Intel® Xeon processors, for intelligent storage that automatically and efficiently scales in performance while ensuring data integrity and security. Table 1 identifies the VNX customer benefits.

Table 1. VNX customer benefits

Next-generation unified storage, optimized for virtualized applications

Capacity optimization features including compression, deduplication, thin provisioning, and application-centric copies

High availability, designed to deliver five 9s (99.999 percent) availability

Automated tiering with FAST VP (Fully Automated Storage Tiering for Virtual Pools) and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously

Simplified management with EMC Unisphere™ for a single management interface for all NAS, SAN, and replication needs

Up to three times improvement in performance with the latest Intel Xeon multicore processor technology, optimized for Flash

Software suites available

FAST Suite — Automatically optimizes for the highest system performance and the lowest storage cost simultaneously

Local Protection Suite — Practices safe data protection and repurposing

Remote Protection Suite — Protects data against localized failures, outages, and disasters

Application Protection Suite — Automates application copies and proves compliance

Security and Compliance Suite — Keeps data safe from changes, deletions, and malicious activity

Software packs available

Total Efficiency Pack — Includes all five of the preceding software suites


Total Protection Pack — Includes Local, Remote, and Application Protection Suites

VNX FAST Cache

VNX FAST Cache, a part of the VNX FAST Suite, enables Flash drives to be used as an expanded cache layer for the array.

FAST Cache is an array-wide feature available for both file and block storage. FAST Cache works by examining 64-KB chunks of data in FAST Cache-enabled objects on the array. Frequently accessed data is copied to the FAST Cache and subsequent accesses to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to flash drives. This dramatically improves the response times for the active data and reduces data hot spots that can occur within the LUN.

FAST Cache enables XenDesktop to deliver consistent performance at flash drive speeds by absorbing read-heavy activities such as boot storms and antivirus scans, and write-heavy workloads such as operating system patches and application updates. This extended read/write cache is an ideal caching mechanism for MCS in XenDesktop because the base desktop image and other active user data are so frequently accessed that the data is serviced directly from the flash drives without having to access the slower drives at the lower storage tier.
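As an illustration of the promotion behavior described above, the toy model below tracks accesses per 64 KB chunk and copies a chunk to flash once it has been read a few times. The access threshold and data structures are assumptions for illustration only; the real promotion policy is internal to the VNX.

```python
CHUNK = 64 * 1024      # FAST Cache examines data in 64 KB chunks
PROMOTE_AFTER = 3      # assumed access threshold; the real policy is internal to VNX

access_counts = {}     # chunk id -> accesses observed on spinning disk
fast_cache = set()     # chunk ids currently served from flash

def read(byte_offset):
    chunk = byte_offset // CHUNK
    if chunk in fast_cache:
        return "flash"                    # hot data served at flash-drive speed
    access_counts[chunk] = access_counts.get(chunk, 0) + 1
    if access_counts[chunk] >= PROMOTE_AFTER:
        fast_cache.add(chunk)             # copy the now-hot chunk up to flash
    return "disk"

# A boot-storm-like pattern: the same base-image chunk is read repeatedly.
print([read(0) for _ in range(4)])        # ['disk', 'disk', 'disk', 'flash']
```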

VNX FAST VP (optional)

VNX FAST VP, a part of the VNX FAST Suite, enables you to automatically tier data across multiple types of drives to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation.
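The sketch below illustrates the idea behind the scheduled rebalance: rank 1 GB slices by access frequency and fill the fastest tier first. The tier names and capacities are illustrative assumptions, not VNX internals.

```python
def rebalance(slice_heat, tier_capacities):
    """slice_heat: {slice_id: access_count}; tier_capacities: [(tier, n_slices)].
    Returns a placement that fills the fastest tiers with the hottest slices."""
    placement = {}
    ranked = sorted(slice_heat, key=slice_heat.get, reverse=True)
    for tier, capacity in tier_capacities:
        for slice_id in ranked[:capacity]:
            placement[slice_id] = tier
        ranked = ranked[capacity:]
    return placement

heat = {"s1": 900, "s2": 5, "s3": 340, "s4": 0}
print(rebalance(heat, [("flash", 1), ("sas", 2), ("nl-sas", 99)]))
# {'s1': 'flash', 's3': 'sas', 's2': 'sas', 's4': 'nl-sas'}
```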

Backup and recovery

Overview

Backup and recovery is another important component of this VSPEX solution. It provides data protection by backing up data files or volumes on a defined schedule and restoring data lost by accident or disaster.

EMC Avamar

In this VSPEX solution, EMC Avamar® software provides backup and recovery services for up to 2,000 virtual desktops.

Avamar software provides rapid backup and restoration capabilities in the virtualized environment. Performance is greatly enhanced by the Avamar software’s seamless integration of deduplication technology, which results in vastly less data traversing the network, and greatly reduced amounts of data being backed up and stored—resulting in storage and bandwidth operational savings.
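The savings come from source-side deduplication: only data segments whose fingerprints have not been seen before traverse the network. The toy sketch below uses fixed chunks and SHA-1 fingerprints for brevity, whereas Avamar actually uses variable-length segments.

```python
import hashlib

backed_up = set()   # fingerprints already stored on the backup grid (simplified)

def bytes_to_send(chunks):
    """Count the bytes that actually traverse the network: only chunks whose
    fingerprint has never been seen before are transferred."""
    new = 0
    for chunk in chunks:
        digest = hashlib.sha1(chunk).hexdigest()
        if digest not in backed_up:
            backed_up.add(digest)
            new += len(chunk)
    return new

desktop_a = [b"win7-base" * 100, b"profile-a"]
desktop_b = [b"win7-base" * 100, b"profile-b"]   # shares the base-image chunk
print(bytes_to_send(desktop_a))  # 909 bytes: everything is new
print(bytes_to_send(desktop_b))  # 9 bytes: only the unique profile chunk moves
```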

Two of the most common recovery requests made to backup administrators are the following:

File-level recovery — Object-level recoveries account for the vast majority of user support requests. Common actions requiring file-level recovery are individual users deleting files, applications requiring recoveries, and batch process-related erasures.

System recovery — Although complete system recovery requests occur less frequently than do file-level recovery requests, this bare-metal restore capability is vital to the enterprise. Common root causes for full system recovery requests include viral infestation, registry corruption, and unidentifiable unrecoverable issues.

In both of these scenarios, Avamar functionality in conjunction with VMware implementations adds new capabilities for backup and recovery. Key VMware capabilities, such as vStorage API integration and Changed Block Tracking (CBT), enable the Avamar software to protect the virtual environment more efficiently.

Leveraging CBT for both backup and recovery, together with pools of virtual proxy servers, minimizes management overhead. Coupled with Data Domain as the storage platform for image data, the solution integrates efficiently with two industry-leading next-generation backup platforms.

Security

RSA SecurID two-factor authentication

RSA SecurID two-factor authentication can provide enhanced security for the VSPEX end-user computing environment by requiring the user to authenticate with two pieces of information, collectively called a passphrase, consisting of:

Something the user knows: A PIN, which is used like any other PIN or password

Something the user has: A token code, provided by a physical or software “token,” which changes every 60 seconds
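RSA's token algorithm is proprietary, but the "changes every 60 seconds" behavior can be illustrated with a generic time-based one-time code in the style of RFC 6238. The seed, interval, and digit count below are illustrative assumptions, not SecurID internals.

```python
import hmac, hashlib, struct, time

def token_code(secret: bytes, interval: int = 60, digits: int = 6) -> str:
    """Generic time-based one-time code: a keyed hash of the current
    60-second window, truncated to a short numeric code."""
    counter = int(time.time()) // interval
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

pin = "2468"                                        # something the user knows
passphrase = pin + token_code(b"per-token-seed")    # plus something the user has
print(passphrase)
```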

SecurID authentication in the VSPEX End-User Computing for Citrix XenDesktop environment

The typical use case deploys SecurID to authenticate users accessing protected resources from an external or public network. Access requests originating from within a secure network are authenticated by traditional mechanisms involving Active Directory or LDAP.

SecurID functionality is managed through RSA Authentication Manager, which also controls administrative functions such as token assignment to users, user management, and high availability. The Citrix NetScaler network appliance and Citrix Storefront enable streamlined integration of SecurID into the XenDesktop environment (as well as XenApp and other Citrix virtualization product environments).

For external access requests into the VSPEX End-User Computing with Citrix XenDesktop environment, the user is challenged for a user ID, SecurID passphrase, and Active Directory password in a single dialog. Upon successful authentication, the user is logged in directly to his or her virtual desktop. Internal request authentication is carried out against Active Directory only.

Figure 4 describes authentication flow for an external access request to the XenDesktop environment.


Figure 4. Authentication control flow for XenDesktop access requests originating on an external network

Note Authentication policies set on NetScaler’s Access Gateway Enterprise Edition (AGEE) control authentication against SecurID and Active Directory.

Figure 5 depicts internal access authentication flow. Active Directory authentication is initiated from within Citrix Storefront.

Figure 5. Authentication control flow for XenDesktop requests originating on local network

Note Users are authenticated against Active Directory only.

Required components

Enablement of SecurID for VSPEX solutions is described in the Securing VSPEX Citrix XenDesktop 5.6 End-User Computing Solutions with RSA Design Guide. The following components are required:


RSA SecurID Authentication Manager (version 7.1 SP4) – Used to configure and manage the SecurID environment and assign tokens to users. Authentication Manager 7.1 SP4 is available as an appliance or as software installable on a Windows Server 2008 R2 instance. Future versions of Authentication Manager will be available as a physical or virtual appliance only.

SecurID tokens for all users – SecurID requires something the user knows (a PIN) combined with a constantly changing code from a “token” the user possesses. SecurID tokens may be physical, displaying a new code at 60-second intervals that the user must enter along with a PIN, or software-based, wherein the user supplies a PIN and the token code is supplied programmatically. Hardware and software tokens are registered with Authentication Manager through “token records” supplied on a CD or other media.

Citrix NetScaler network appliance (version 10 or higher) – NetScaler’s Access Gateway functionality manages RSA SecurID (primary) and Active Directory (secondary) authentication of access requests originating on public or external networks. NetScaler also provides load-balancing capability supporting high availability of Authentication Manager and Citrix Storefront servers.

Citrix Storefront (version 1.2 or higher) – Storefront, also known as CloudGateway Express, provides authentication and other services and presents users’ desktops to browser-based or mobile Citrix clients.

Citrix Receiver – Receiver provides an interface through which the user interacts with the virtual desktop or other Citrix virtual environments such as XenApp or XenServer. In the context of this solution, the user client is considered a generic endpoint, so versions of the Receiver client, and options and optimizations for them, are not addressed.

Compute, memory and storage resources

Figure 6 depicts the VSPEX End-User Computing for Citrix XenDesktop environment with added infrastructure to support SecurID. All necessary components can run in a redundant, high-availability configuration on two or more VMware ESXi™ hosts with a minimum of 12 CPU cores (16 recommended) and 16 GB of RAM. Table 2 summarizes these requirements.


Figure 6. Logical architecture: VSPEX End-User Computing for Citrix XenDesktop with RSA


Table 2. Minimum hardware resources to support SecurID

Component | CPU (cores) | Memory (GB) | Storage (GB) | SQL database* | Reference
RSA Authentication Manager | 2 | 8** | 60 | n/a | RSA Authentication Manager 7.1 Performance and Scalability Guide
Citrix NetScaler VPX | 2 | 4 | 40 | n/a | Citrix NetScaler VPX Getting Started Guide
Citrix Storefront | 2 | 2 | 20 | 3.5 MB per 100 users |

* It is expected that this capacity can be drawn from pre-existing SQL Server infrastructure.

** RSA recommends an 8 GB minimum for VMware-based deployments. A 4 GB or even 2 GB configuration is acceptable on standalone servers.


Chapter 4 Solution Architectural Overview

This chapter presents the following topics:

Solution overview

Solution architecture

Server configuration guidelines

Network configuration guidelines

Storage configuration guidelines

High availability and failover

Validation test profile

Backup environment configuration guidelines

Sizing guidelines

Reference workload

Applying the reference workload

Implementing the reference architectures

Quick assessment


Solution overview

VSPEX Proven Infrastructure solutions are built with proven best-of-breed technologies to create a complete virtualization solution that enables you to make an informed decision when choosing and sizing the hypervisor, compute, and networking layers. VSPEX eliminates many server virtualization planning and configuration burdens by leveraging extensive interoperability, functional, and performance testing by EMC. VSPEX accelerates your IT transformation to cloud-based computing by enabling faster deployment, more choice, higher efficiency, and lower risk.

This section is intended to be a comprehensive guide to the major aspects of this solution. Server capacity is specified in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware that meets or exceeds the stated minimums. The specified storage architecture, along with a system meeting the server and network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your End-User Computing deployment.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual desktops, as validated by EMC. In practice, each virtual desktop type has its own set of requirements that rarely fit a predefined idea of what a virtual desktop should be. In any discussion about end-user computing, a reference workload should first be defined. Not all desktops perform the same tasks, and building a reference that takes into account every possible combination of workload characteristics is impractical.

Solution architecture

The VSPEX End-User Computing solution with EMC VNX is validated at three different points of scale. These defined configurations form the basis of creating a custom solution. These points of scale are defined in terms of the reference workload.

Note VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual desktop in an existing environment may not be equal to one virtual desktop in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. The section Applying the reference workload provides a detailed description.

Architecture for up to 500 virtual desktops

The architecture diagrams in this section show the layout of the major components comprising the solution. Two storage variants, NFS and FC, are shown in the following diagrams.


Figure 7 depicts the logical architecture of the NFS variant for 500 virtual desktops, wherein 10 GbE carries storage traffic for servers hosting virtual desktops and 1 GbE carries all other traffic.

Figure 7. Logical architecture for 500 virtual desktops – NFS variant

Note The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.


Figure 8 depicts the logical architecture of the FC variant for 500 virtual desktops, wherein an FC SAN carries storage traffic and 1 GbE carries management and application traffic.

Figure 8. Logical architecture for 500 virtual desktops – FC variant

Note The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.


Architecture for up to 1,000 virtual desktops

The architecture diagrams in this section show the layout of the major components comprising the solution. Two storage variants, NFS and FC, are shown in the following diagrams.

Figure 9 depicts the logical architecture of the NFS variant for 1,000 virtual desktops, wherein 10 GbE carries storage traffic for servers hosting virtual desktops and 1 GbE carries all other traffic.

Figure 9. Logical architecture for 1,000 virtual desktops – NFS variant

Note The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that sufficient bandwidth and redundancy are provided to meet the listed requirements.


Figure 10 depicts the logical architecture of the FC variant for 1,000 virtual desktops, wherein an FC SAN carries storage traffic and 1 GbE carries management and application traffic.

Figure 10. Logical architecture for 1,000 virtual desktops – FC variant

Note The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that sufficient bandwidth and redundancy are provided to meet the listed requirements.


Architecture for up to 2,000 virtual desktops

The architecture diagrams in this section show the layout of the major components comprising the solution. Two storage variants, NFS and FC, are shown in the following diagrams.

Figure 11 depicts the logical architecture of the NFS variant for 2,000 virtual desktops, wherein 10 GbE carries storage traffic for servers hosting virtual desktops and 1 GbE carries all other traffic.

Figure 11. Logical architecture for 2,000 virtual desktops – NFS variant

Note The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.


Figure 12 depicts the logical architecture of the FC variant for 2,000 virtual desktops, wherein an FC SAN carries storage traffic and 1 GbE carries management and application traffic.

Figure 12. Logical architecture for 2,000 virtual desktops – FC variant

Note The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Key components

Citrix XenDesktop 5.6 controller – Two Citrix XenDesktop controllers are used to provide redundant virtual desktop delivery, authenticate users, manage the assembly of users' virtual desktop environments, and broker connections between users and their virtual desktops. In this reference architecture, the controllers are installed on Windows Server 2008 R2 and hosted as virtual machines on VMware vSphere 5.1 servers.

Virtual desktops – Persistent virtual desktops running Windows 7 are provisioned using MCS, a provisioning mechanism introduced in XenDesktop 5.0.

VMware vSphere 5.1 — VMware vSphere provides a common virtualization layer to host a server environment. Table 10 lists the specifics of the validated environment. VMware vSphere 5.1 provides a highly available infrastructure through features such as the following:

vMotion — Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption

Storage vMotion — Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption


vSphere High Availability (HA) – Detects and provides rapid recovery for a failed virtual machine in a cluster

Distributed Resource Scheduler (DRS) – Provides load balancing of computing capacity in a cluster

Storage Distributed Resource Scheduler (SDRS) – Provides load balancing across multiple datastores, based on space use and I/O latency

VMware vCenter Server 5.1 – vCenter Server provides a scalable and extensible platform that forms the foundation for virtualization management for the VMware vSphere 5.1 cluster. All vSphere hosts and their virtual machines are managed through vCenter.

Active Directory server – Active Directory services are required for the various solution components to function properly. The Microsoft AD Directory Service running on a Windows Server 2012 server is used for this purpose.

DHCP server – The DHCP server centrally manages the IP address scheme for the virtual desktops. This service is hosted on the same virtual machine as the domain controller and DNS server. The Microsoft DHCP Service running on a Windows 2012 server is used for this purpose.

DNS server — DNS services are required for the various solution components to perform name resolution. The Microsoft DNS Service running on a Windows 2012 server is used for this purpose.

VSI for VMware vSphere — EMC VSI for VMware vSphere is a plug-in to the vSphere client that provides storage management for EMC arrays directly from the client. VSI is highly customizable and helps provide a unified management interface.

IP/Storage Networks — All network traffic is carried over standard Ethernet networks with redundant cabling and switching. User and management traffic is carried over a shared network, while NFS storage traffic is carried over a private, non-routable subnet.

Mixed 10 and 1 GbE IP network – The Ethernet network infrastructure provides 10 GbE connectivity between virtual desktops, vSphere clusters, and VNX storage. For the NFS variant, the 10 GbE infrastructure allows vSphere servers to access NFS datastores on the VNX with high bandwidth and low latency. It also allows desktop users to redirect their roaming profiles and home directories to the centrally maintained CIFS shares on the VNX. The desktop clients, XenDesktop management components, and Windows server infrastructure can reside on a 1 GbE network.

Fibre Channel network – For the FC variant, storage traffic between all vSphere hosts and the VNX storage system is carried over an FC network. All other traffic is carried over 1 GbE.

EMC VNX5300 array — A VNX5300 array provides storage by presenting NFS/FC datastores to vSphere hosts for up to 1,000 virtual desktops.

EMC VNX5500 array — A VNX5500 array provides storage by presenting NFS/FC datastores to vSphere hosts for up to 2,000 virtual desktops.

VNX family storage arrays include the following components:


Storage processors (SPs) support block data with UltraFlex I/O technology that supports Fibre Channel, iSCSI, and FCoE protocols. The SPs provide access for all external hosts and for the file side of the VNX array.

The Disk-Processor Enclosure (DPE) is 3U in size and houses each storage processor as well as the first tray of disks. This form factor is used in the VNX5300 and VNX5500.

X-Blades (or Data Movers) access data from the back end and provide host access using the same UltraFlex I/O technology that supports the NFS, CIFS, MPFS, and pNFS protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists.

The Data Mover Enclosure (DME) is 2U in size and houses the Data Movers (X-Blades). The DME is similar in form to the SPE and is used on all VNX models that support file.

Standby power supplies are 1U in size and provide enough power to each storage processor to ensure that any data in flight is de-staged to the vault area in the event of a power failure. This ensures that no writes are lost. Upon restart of the array, the pending writes are reconciled and persisted.

Control Stations are 1U in size and provide management functions to the file-side components referred to as X-Blades. The Control Station is responsible for X-Blade failover. The Control Station optionally may be configured with a matching secondary Control Station to ensure redundancy on the VNX array.

Disk-Array Enclosures (DAEs) house the drives used in the array.

EMC Avamar – Avamar software provides the platform for protection of virtual machines. The protection strategy for persistent virtual desktops leverages both image-level protection and end-user recoveries.


Hardware resources

Table 3 lists the hardware used in this solution.

Table 3. Solution hardware

Hardware Configuration Notes

Servers for virtual desktops

Memory: 2 GB RAM per desktop

1 TB RAM across all servers for 500 virtual desktops

2 TB RAM across all servers for 1,000 virtual desktops

4 TB RAM across all servers for 2,000 virtual desktops

CPU: 1 vCPU per desktop (8 desktops per core)

63 cores across all servers for 500 virtual desktops

125 cores across all servers for 1,000 virtual desktops

250 cores across all servers for 2,000 virtual desktops

Network:

Six 1 GbE NICs per standalone server for 500 virtual desktops

Three 10 GbE NICs per blade chassis or six 1 GbE NICs per standalone server for 1,000/2,000 desktops

Total server capacity required to host virtual desktops

Network infrastructure

Minimum switching capability for NFS variant:

Two physical switches

Six 1 GbE ports per vSphere server or three 10 GbE ports per blade chassis

One 1 GbE port per Control Station for management

Two 10 GbE ports per Data Mover for data

Redundant LAN configuration

Minimum switching capability for FC variant:

Two 1 GbE ports per vSphere server

Four 4/8 Gb FC ports for VNX back end

Two 4/8 Gb FC ports per vSphere server

Redundant LAN/SAN configuration

Storage Common

Two 10 GbE interfaces per Data Mover

Two 8 Gb FC ports per storage processor (FC variant only)


For 500 virtual desktops:

Two Data Movers (active/standby NFS variant only)

Fifteen 300 GB, 15 k rpm 3.5-inch SAS disks

Three 100 GB, 3.5-inch flash drives

For 1,000 virtual desktops:

Two Data Movers (active/standby NFS variant only)

Twenty-six 300 GB, 15 k rpm 3.5-inch SAS disks

Three 100 GB, 3.5-inch flash drives

For 2,000 virtual desktops:

Three Data Movers (2 active/1 standby NFS variant only)

Forty-six 300 GB, 15 k rpm 3.5-inch SAS disks

Five 100 GB, 3.5-inch flash drives

VNX shared storage for virtual desktops

For 500 virtual desktops:

Nine 2 TB, 7,200 rpm 3.5-inch NL-SAS disks

For 1,000 virtual desktops:

Seventeen 2 TB, 7,200 rpm 3.5-inch NL-SAS disks

For 2,000 virtual desktops:

Thirty-four 2 TB, 7,200 rpm 3.5-inch NL-SAS disks

Optional for user data

For 500 virtual desktops:

Five 300 GB, 15 k rpm 3.5-inch SAS disks

For 1,000 virtual desktops:

Five 300 GB, 15 k rpm 3.5-inch SAS disks

For 2,000 virtual desktops:

Five 300 GB, 15 k rpm 3.5-inch SAS disks

Optional for infrastructure storage


Shared infrastructure

In most cases, a customer environment will already have infrastructure services such as Active Directory and DNS configured. The setup of these services is beyond the scope of this document.

If this solution is being implemented with no existing infrastructure, a minimum number of additional servers is required:

Two physical servers

20 GB RAM per server

Four processor cores per server

Two 1 GbE ports per server

Services can be migrated into VSPEX post-deployment, but they must exist before VSPEX can be deployed.

EMC next-generation backup

Avamar

One Gen4 utility node

One Gen4 3.9 TB spare node

Three Gen4 3.9 TB storage nodes

Servers for customer infrastructure

Minimum number required:

Two physical servers

20 GB RAM per server

Four processor cores per server

Two 1 GbE ports per server

Servers and the roles they fulfill may already exist in the customer environment.

Software resources

Table 4 lists the software used in this solution.

Table 4. Solution software

VNX5300 (shared storage, file systems):
VNX OE for file – Release 7.1.47-5
VNX OE for block – Release 32 (05.32.000.5.006)
EMC VSI for VMware vSphere: Unified Storage Management – Version 5.3
EMC VSI for VMware vSphere: Storage Viewer – Version 5.3
EMC PowerPath Viewer (FC variant only) – Version 1.0.SP2.b019

XenDesktop desktop virtualization:
Citrix XenDesktop Controller – Version 5.6 Platinum Edition
Operating system for XenDesktop Controller – Windows Server 2008 R2 Standard Edition
Microsoft SQL Server – Version 2008 R2 Standard Edition

Next-generation backup:
Avamar – 6.1 SP1

VMware vSphere:
vSphere Server – 5.1
vCenter Server – 5.1
Operating system for vCenter Server – Windows Server 2008 R2 Standard Edition
vStorage API for Array Integration Plugin (VAAI) (NFS variant only) – 1.0-10
PowerPath Virtual Edition (FC variant only) – 5.7.0.2

Virtual desktops (note: beyond the base operating system, this software was used for solution validation and is not required):
Base operating system – Microsoft Windows 7 Enterprise (32-bit) SP1
Microsoft Office – Office Enterprise 2007 SP3
Internet Explorer – 8.0.7601.17514
Adobe Reader – 9.1
McAfee Virus Scan – 8.7.0i Enterprise
Adobe Flash Player – 11
Bullzip PDF Printer – 6.0.0.865
FreeMind – 0.8.1

Sizing for validated configuration

When selecting servers for this solution, ensure that the processor core meets or exceeds the performance of the Intel Nehalem family at 2.66 GHz. As servers with greater processor speeds, performance, and higher core density become available, servers may be consolidated, as long as the required total core and memory count is met and a sufficient number of servers are incorporated to support the necessary level of high availability.

As with servers, you may also consolidate network interface card (NIC) speed and quantity as long as you maintain the overall bandwidth requirements for this solution and sufficient redundancy to support high availability.


Table 5 shows configurations that support this solution, assuming each server has two four-core sockets and 128 GB of RAM, with one 10 GbE port per four blades plus one 10 GbE port per blade chassis:

Table 5. Configurations that support this solution

Number of servers | Number of virtual desktops | Total cores | Total RAM
8 | 500 | 64 | 1 TB
16 | 1,000 | 128 | 2 TB
32 | 2,000 | 256 | 4 TB

As shown in Table 10, a minimum of one core is required to support eight virtual desktops, and each desktop requires a minimum of 2 GB of RAM. The correct balance of memory and cores for the expected number of virtual desktops on each server must also be taken into account. For example, a server expected to support 24 virtual desktops requires a minimum of three cores, but also a minimum of 48 GB of RAM.
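A quick sketch of that balance check, using the validated ratios of eight desktops per core and 2 GB of RAM per desktop (the function name is illustrative):

```python
import math

DESKTOPS_PER_CORE = 8       # validated ratio from Table 10
RAM_PER_DESKTOP_GB = 2

def minimum_server_resources(desktops):
    """Minimum cores and RAM (GB) a single server needs for a desktop count."""
    cores = math.ceil(desktops / DESKTOPS_PER_CORE)
    ram_gb = desktops * RAM_PER_DESKTOP_GB
    return cores, ram_gb

print(minimum_server_resources(24))   # (3, 48), matching the example above
```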

IP network switches used to implement this reference architecture must have a minimum non-blocking backplane capacity of 96 Gb/s (for 500 virtual desktops), 192 Gb/s (for 1,000 virtual desktops), or 320 Gb/s (for 2,000 virtual desktops) and support the following features:

IEEE 802.3x Ethernet flow control

802.1q VLAN tagging

Ethernet link aggregation using IEEE 802.1ax (802.3ad) Link Aggregation Control Protocol

SNMP management capability

Jumbo frames

The quantity and type of switches chosen should support high availability; choosing a network vendor based on the availability of parts, service, and support contracts is also recommended. In addition to the above features, the network configuration should include the following:

A minimum of two switches to support redundancy

Redundant power supplies

A minimum of forty 1 GbE ports (for 500 virtual desktops), two 1 GbE and fourteen 10 GbE ports (for 1,000 virtual desktops), or two 1 GbE and twenty-two 10 GbE ports (for 2,000 virtual desktops), distributed for high availability

The appropriate uplink ports for customer connectivity

The use of 10 GbE ports should align with the ports available on the server and storage, while keeping in mind the overall network requirements for this solution and a level of redundancy that supports high availability. Additional server NICs and storage connections should also be considered based on customer or specific implementation requirements.


The management infrastructure (Active Directory, DNS, DHCP, and SQL Server) can be supported on two servers similar to those previously defined, but they require a minimum of only 20 GB of RAM instead of 128 GB.

The Storage configuration guidelines section describes the disk storage layout.

Server configuration guidelines

Overview

When you are designing and ordering the compute/server layer of the VSPEX solution, you should consider several factors that may alter the final purchase. From a virtualization perspective, if a system's workload is well understood, features such as memory ballooning and transparent page sharing can reduce the aggregate memory requirement.

If the virtual desktop pool does not have a high level of peak or concurrent usage, the number of vCPUs may be reduced. Conversely, if the applications being deployed are highly computational in nature, the number of CPUs and memory purchased may need to be increased. Table 6 provides configuration details for the virtual desktop servers and network hardware.

Table 6. Server hardware

Hardware Configuration Notes

Servers for virtual desktops

Memory: 2 GB RAM per desktop

1 TB RAM across all servers for 500 virtual desktops

2 TB RAM across all servers for 1,000 virtual desktops

4 TB RAM across all servers for 2,000 virtual desktops

CPU: 1 vCPU per desktop (8 desktops per core)

63 cores across all servers for 500 virtual desktops

125 cores across all servers for 1,000 virtual desktops

250 cores across all servers for 2,000 virtual desktops

Network:

Six 1 GbE NICs per standalone server for 500 virtual desktops

Three 10 GbE NICs per blade chassis or six 1 GbE NICs per standalone server for 1,000/2,000 desktops

Total server capacity required to host virtual desktops


VMware vSphere memory virtualization for VSPEX

VMware vSphere 5 has a number of advanced features that help to maximize performance and overall resource utilization. The most important of these are in the area of memory management. This section describes some of these features and the items you need to consider when using them in the environment.

In general, you can consider virtual machines on a single hypervisor consuming memory as a pool of resources. Figure 13 shows an example of memory consumption at the hypervisor level.

Figure 13. Hypervisor memory consumption

Memory over-commitment

Memory over-commitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vSphere host. Using sophisticated techniques such as ballooning and transparent page sharing, vSphere can handle memory over-commitment without any performance degradation. However, if the virtual machines are actively using more memory than is present on the server, vSphere might resort to swapping out portions of a virtual machine's memory.


Non-Uniform Memory Access

vSphere uses a Non-Uniform Memory Access (NUMA) load-balancer to assign a home node to a virtual machine. Because memory for the virtual machine is allocated from the home node, memory access is local and provides the best performance possible. Applications that do not directly support NUMA also benefit from this feature.

Transparent page sharing

Virtual machines running similar operating systems and applications typically have identical sets of memory content. Page sharing allows the hypervisor to reclaim the redundant copies and keep only one copy, which reduces total host memory consumption. If most of your application virtual machines run the same operating system and application binaries, total memory usage can be reduced to increase consolidation ratios.

Memory ballooning

By using a balloon driver loaded in the guest operating system, the hypervisor can reclaim host physical memory if memory resources are under contention. This is accomplished with little or no impact to the performance of the application.

Memory configuration guidelines

This section provides guidelines for allocating memory to virtual machines. The guidelines outlined here take into account vSphere memory overhead and the virtual machine memory settings.

vSphere memory overhead

The virtualization of memory resources has associated overhead. The memory space overhead has two components:

Fixed system overhead for the VMkernel

Additional overhead for each virtual machine

The system overhead for the VMkernel is fixed, while the overhead for each virtual machine depends on the number of virtual CPUs and the amount of memory configured for the guest operating system.

Allocating memory to virtual machines

The proper sizing of memory for a virtual machine in VSPEX architectures is based on many factors. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments, as discussed later in this paper. Table 10 outlines the resources used by a single virtual machine.
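As a rough planning aid, total host memory demand can be approximated as the sum of each virtual machine's configured memory plus its per-VM overhead, plus the fixed VMkernel footprint. The overhead formula and constants in this sketch are illustrative assumptions; consult VMware documentation for exact overhead values.

```python
VMKERNEL_FIXED_MB = 1024    # assumed fixed VMkernel footprint

def vm_overhead_mb(vcpus, mem_mb):
    # Assumed shape only: overhead grows with vCPU count and configured memory.
    return 25 + 5 * vcpus + 0.01 * mem_mb

def host_memory_demand_mb(vms):
    """vms: list of (vcpus, configured_memory_mb) tuples."""
    return VMKERNEL_FIXED_MB + sum(mem + vm_overhead_mb(cpu, mem)
                                   for cpu, mem in vms)

desktops = [(1, 2048)] * 64   # 64 single-vCPU, 2 GB desktops on one host
print(round(host_memory_demand_mb(desktops) / 1024), "GB")  # ~132 GB before sharing/ballooning
```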

Network configuration guidelines

Overview

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines outlined here take into account jumbo frames, VLANs, and Link Aggregation Control Protocol (LACP) on EMC unified storage. Table 3 provides detailed network resource requirements.


VLAN

It is a best practice to isolate network traffic so that host-to-storage traffic, host-to-client traffic, and management traffic all move over isolated networks. In some cases physical isolation may be required for regulatory or policy compliance reasons, but in many cases logical isolation using VLANs is sufficient. This solution calls for a minimum of three VLANs:

Client access

Storage

Management

These VLANs are illustrated in Figure 14.

Figure 14. Required networks

Note The diagram demonstrates the network connectivity requirements for a VNX array using 10 GbE network connections. A similar topology should be created for an array using 1 GbE network connections.

The client access network is for users of the system, or clients, to communicate with the infrastructure. The storage network is used for communication between the compute layer and the storage layer. The management network is used for administrators to have a dedicated way to access the management connections on the storage array, network switches, and hosts.


Note Some best practices call for additional network isolation for cluster traffic, virtualization layer communication, and other features. These additional networks may be implemented if desired, but they are not required.

Note If the Fibre Channel storage network option is chosen for the deployment, similar best practices and design principles apply.

Enable jumbo frames

This EMC VSPEX End-User Computing solution recommends that the MTU be set at 9,000 (jumbo frames) for efficient storage and migration traffic.

Link aggregation

A link aggregation resembles an Ethernet channel but uses the Link Aggregation Control Protocol (LACP) IEEE 802.3ad standard, which supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost, traffic fails over to the remaining ports, and all network traffic is distributed across the active links.

Storage configuration guidelines

Overview

vSphere allows more than one method of using storage when hosting virtual machines. The solutions described in this section and in Table 7 were tested using NFS, and the storage layout described adheres to all current best practices. Educated customers and architects can make modifications based on their understanding of the system's usage and load, if required.

Table 7. Storage hardware

Hardware Configuration Notes

Storage Common

Two 10 GbE interfaces per Data Mover

Two 8 Gb FC ports per storage processor (FC variant only)


For 500 virtual desktops

Two Data Movers (active/standby NFS variant only)

Fifteen 300 GB, 15 k rpm 3.5-inch SAS disks

Three 100 GB, 3.5-inch flash drives

For 1,000 virtual desktops

Two Data Movers (active/standby NFS variant only)

Twenty-six 300 GB, 15 k rpm 3.5-inch SAS disks

Three 100 GB, 3.5-inch flash drives

For 2,000 virtual desktops

Three Data Movers (2 active/1 standby NFS variant only)

Forty-six 300 GB, 15 k rpm 3.5-inch SAS disks

Five 100 GB, 3.5-inch flash drives

VNX shared storage for virtual desktops

For 500 virtual desktops

Nine 2 TB, 7,200 rpm 3.5-inch NL-SAS disks

For 1,000 virtual desktops

Seventeen 2 TB, 7,200 rpm 3.5-inch NL-SAS disks

For 2,000 virtual desktops

Thirty-four 2 TB, 7,200 rpm 3.5-inch NL-SAS disks

Optional for user data

For 500 virtual desktops

Five 300 GB, 15 k rpm 3.5-inch SAS disks

For 1,000 virtual desktops

Five 300 GB, 15 k rpm 3.5-inch SAS disks

For 2,000 virtual desktops

Five 300 GB, 15 k rpm 3.5-inch SAS disks

Optional for infrastructure storage

VMware vSphere storage virtualization for VSPEX

VMware ESXi provides host-level storage virtualization. It virtualizes the physical storage and presents the virtualized storage to the virtual machine.

A virtual machine stores its operating system, and all other files that are related to the virtual machine activities, in a virtual disk. The virtual disk itself is one file or multiple files. VMware uses a virtual SCSI controller to present the virtual disk to the guest operating system running inside the virtual machine.


The virtual disk resides in a datastore. Depending on the type used, the virtual disk can reside in either a VMware Virtual Machine File System (VMFS) datastore or an NFS datastore. Figure 15 shows the details.

Figure 15. VMware virtual disk types

VMFS

VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local or network storage.

Raw Device Mapping

In addition, VMware provides a mechanism named Raw Device Mapping (RDM). RDM allows a virtual machine to directly access a volume on the physical storage and can be used only with Fibre Channel or iSCSI.

NFS

VMware also supports use of NFS file systems from external NAS storage systems or devices as virtual machine datastores.

In this VSPEX solution, VMFS is used for the FC variant and NFS for the NFS variant.

Storage layout for 500 virtual desktops

Core storage layout

Figure 16 illustrates the layout of the disks that are required to store 500 virtual desktops. This layout does not include space for user profile data.



Figure 16. Core storage layout for 500 virtual desktops

Core storage layout overview

The following core configuration is used in the reference architecture for 500 desktop virtual machines:

Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

The disks shown here as 0_0_4 and 1_0_0 are hot spares. These disks are marked as hot spares in the storage layout diagram.

Ten SAS disks (shown here as 0_0_5 to 0_0_14) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

For NAS, ten LUNs of 200 GB each are carved out of the pool to provide the storage required to create four NFS file systems. The file systems are presented to the vSphere servers as four NFS datastores.

For FC, four LUNs of 500 GB each are carved out of the pool to present to the vSphere servers as four VMFS datastores (a capacity sanity check appears after the note below).

Two Flash drives (shown here as 1_0_1 and 1_0_2) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

Disks shown here as 1_0_3 to 1_0_14 are unbound. They were not used for testing this solution.

Note Larger drives may be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15 k rpm and the same size. If drives of different sizes are used, storage layout algorithms may produce sub-optimal results.
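A quick sanity check of the carving above shows that the NFS and FC variants draw the same total capacity from the ten-disk pool. The usable-capacity estimate uses the generic RAID 5 (n-1)/n rule and ignores vendor formatting overhead, so treat it as an approximation.

```python
DISKS, DISK_GB = 10, 300
usable_gb = DISK_GB * DISKS * (DISKS - 1) / DISKS   # RAID 5 parity costs one disk's worth

nfs_luns_gb = 10 * 200   # ten 200 GB LUNs backing four NFS file systems
fc_luns_gb = 4 * 500     # four 500 GB LUNs presented as VMFS datastores

assert nfs_luns_gb == fc_luns_gb == 2000             # both variants carve 2 TB
print(f"carved {nfs_luns_gb} GB of ~{usable_gb:.0f} GB usable")
```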

Optional user data storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 17. This storage is in addition to the core storage shown above. If storage for user data exists elsewhere in the production environment, this storage is not required.


Figure 17. Optional storage layout for 500 virtual desktops

Optional storage layout overview

The virtual desktops use two shared file systems – one for user profiles and the other to redirect user storage that resides in home directories.

In general, redirecting users’ data out of the base image and onto VNX for File enables centralized administration, backup, and recovery, and makes the desktops more stateless.

Each file system is exported to the environment through a CIFS share.

The following optional configuration is used in the reference architecture for 500 virtual desktops:

The disk shown here as 0_1_13 is a hot spare. This disk is marked as hot spare in the storage layout diagram.

Five SAS disks (shown here as 0_1_0 to 0_1_4) on the RAID 5 storage pool 2 are used to store the infrastructure virtual machines. A 1 TB LUN or NFS file system is carved out of the pool to present to the vSphere servers as a VMFS or an NFS datastore.

Eight NL-SAS disks (shown here as 0_1_5 to 0_1_12) on the RAID 6 storage pool 3 are used to store user data and roaming profiles. Ten LUNs of 1 TB each are carved out of the pool to provide the storage required to create two CIFS file systems.

The disk shown here as 0_1_14 is unbound. It was not used for testing this solution.

If multiple drive types have been implemented, FAST VP may be enabled to automatically tier data to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop datastores, but it can provide performance improvements when implemented for user data and roaming profiles.

Storage layout for 1,000 virtual desktops

Core storage layout

Figure 18 illustrates the layout of the disks that are required to store 1,000 virtual desktops. This layout does not include space for user profile data.


Figure 18. Core storage layout for 1,000 virtual desktops

Core storage layout overview

The following core configuration is used in the reference architecture for 1,000 virtual desktops:

Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

The disks shown here as 0_0_4 and 1_0_10 are hot spares. These disks are marked as hot spare in the storage layout diagram.

Twenty SAS disks (shown here as 0_0_5 to 0_0_14 and 1_0_0 to 1_0_9) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

For NAS, ten LUNs of 400 GB each are carved out of the pool to provide the storage required to create eight NFS file systems. The file systems are presented to the vSphere servers as eight NFS datastores.

For FC, eight LUNs of 500 GB each are carved out of the pool to present to the vSphere servers as eight VMFS datastores.

Two Flash drives (shown here as 1_0_11 and 1_0_12) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

The disks shown here as 1_0_13 and 1_0_14 are unbound. They were not used for testing this solution.

Note Larger drives may be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms may give sub-optimal results.

Optional user data storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 19. This storage is in addition to the core storage shown above. If storage for user data exists elsewhere in the production environment, this storage is not required.


Figure 19. Optional storage layout for 1,000 virtual desktops

Optional storage layout overview

The virtual desktops use two shared file systems—one for user profiles and the other to redirect user storage that resides in home directories.

In general, redirecting users' data out of the base image to VNX for File enables centralized administration, backup, and recovery, and makes the desktops more stateless.

Each file system is exported to the environment through a CIFS share.

The following optional configuration is used in the solution stack architecture:

The disk shown here as 1_1_6 is a hot spare. This disk is marked as hot spare in the storage layout diagram.

Five SAS disks (shown here as 0_1_0 to 0_1_4) in the RAID 5 storage pool 2 are used to store the infrastructure virtual machines. A 1 TB LUN or NFS file system is carved out of the pool to present to the vSphere servers as a VMFS or NFS datastore.

Sixteen NL-SAS disks (shown here as 0_1_5 to 0_1_14, and 1_1_0 to 1_1_5) in the RAID 6 storage pool 3 are used to store user data and roaming profiles. Ten LUNs of 1.5 TB each are carved out of the pool to provide the storage required to create two CIFS file systems.

The disks shown here as 1_1_7 to 1_1_14 are unbound. They were not used for testing this solution.

If multiple drive types have been implemented, FAST VP may be enabled to automatically tier data to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop datastores, but it can provide performance improvements when implemented for user data and roaming profiles.


Core storage layout

Figure 20 illustrates the layout of the disks that are required to store 2,000 desktop virtual machines. This layout does not include space for user profile data.

Figure 20. Core storage layout for 2,000 virtual desktops

Core storage layout overview

The following core configuration is used in the reference architecture for 2,000 virtual desktops:

Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

The disks shown here as 0_0_4, 1_0_12, and 1_1_5 are hot spares. These disks are marked as hot spare in the storage layout diagram.

Forty SAS disks (shown here as 0_0_5 to 0_0_14, 1_0_0 to 1_0_11, 0_1_0 to 0_1_12, and 1_1_0 to 1_1_4) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

For NAS, 10 LUNs of 800 GB each are carved out of the pool to provide the storage required to create 16 NFS file systems. The file systems are presented to the vSphere servers as 16 NFS datastores.

For FC, 16 LUNs of 500 GB each are carved out of the pool to present to the vSphere servers as 16 VMFS datastores.

Four Flash drives (shown here as 1_0_13 to 1_0_14 and 0_1_13 to 0_1_14) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

The disks shown here as 1_1_6 to 1_1_14 are unbound. They were not used for testing this solution.


Note Larger drives may be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms may give sub-optimal results.

Optional user data storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 21. This storage is in addition to the core storage shown above. If storage for user data exists elsewhere in the production environment, this storage is not required.

Figure 21. Optional storage layout for 2,000 virtual desktops

Optional storage layout overview

The virtual desktops use two shared file systems – one for user profiles, and the other to redirect user storage that resides in home directories.

In general, redirecting users' data out of the base image to VNX for File enables centralized administration, backup, and recovery, and makes the desktops more stateless.

Each file system is exported to the environment through a CIFS share.

The following optional configuration is used in the solution stack architecture:

The disks shown here as 1_2_14 and 0_3_8 are hot spares. These disks are marked as hot spare in the storage layout diagram.

Five SAS disks (shown here as 0_2_0 to 0_2_4) in the RAID 5 storage pool 2 are used to store the infrastructure virtual machines. A 1 TB LUN or NFS file system is carved out of the pool to present to the vSphere servers as a VMFS or NFS datastore.

Thirty-two NL-SAS disks (shown here as 0_2_5 to 0_2_14, 1_2_0 to 1_2_13, and 0_3_0 to 0_3_7) in the RAID 6 storage pool 3 are used to store user data and roaming profiles. Ten LUNs of 3 TB each are carved out of the pool to provide the storage required to create two CIFS file systems.

The disks shown here as 0_3_9 to 0_3_14 are unbound. They were not used for testing this solution.

If multiple drive types have been implemented, FAST VP may be enabled to automatically tier data to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop datastores, but it can provide performance improvements when implemented for user data and roaming profiles.

High availability and failover

Introduction

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it can survive most single-unit failures with minimal or no impact on business operations.

Virtualization layer

As indicated earlier, configuring high availability in the virtualization layer and allowing the hypervisor to automatically restart virtual machines that fail is recommended. Figure 22 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 22. High availability at the virtualization layer

Implementing high availability at the virtualization layer ensures that, even in the event of a hardware failure, the infrastructure will attempt to keep as many services running as possible.

Compute layer

While this solution offers flexibility in the type of servers to be used in the compute layer, using enterprise-class servers designed for the datacenter is recommended. These servers, with redundant power supplies, should be connected to separate Power Distribution Units (PDUs) in accordance with your server vendor's best practices.


Figure 23. Redundant power supplies

Configuring high availability in the virtualization layer is also recommended. This means that the compute layer must be configured with enough resources so that the total number of available resources meets the needs of the environment, even with a server failure, as demonstrated in Figure 22.

Network layer

The advanced networking features of the VNX family provide protection against network connection failures at the array. Each vSphere host has multiple connections to user and storage Ethernet networks to guard against link failures. These connections should be spread across multiple Ethernet switches to guard against component failure in the network.

Figure 24. Network layer high availability


By designing the network with no single points of failure, you can ensure that the compute layer will be able to access storage and communicate with users even if a component fails.

Storage layer

The VNX family is designed for five 9s availability by using redundant components throughout the array. All of the array components are capable of continued operation in the event of hardware failure. The RAID disk configuration on the array provides protection against data loss caused by individual disk failures, and the available hot spare drives can be allocated dynamically to replace a failing disk. This is shown in Figure 25.

Figure 25. VNX series high availability

EMC storage arrays are designed to be highly available by default. When they are configured according to the directions in their installation guides, no single unit failures result in data loss or unavailability.

Validation test profile

Profile characteristics

The VSPEX solution was validated with the environment profile shown in Table 8.

Table 8. Validated environment profile

Profile characteristic | Value
Number of virtual desktops | 500 for 500 virtual desktops; 1,000 for 1,000 virtual desktops; 2,000 for 2,000 virtual desktops
Virtual desktop OS | Windows 7 Enterprise (32-bit) SP1
CPU per virtual desktop | 1 vCPU
Number of virtual desktops per CPU core | 8
RAM per virtual desktop | 2 GB
Desktop provisioning method | Machine Creation Services (MCS)
Average storage available for each virtual desktop | 4.8 GB (VMDK and VSwap)
Average IOPS per virtual desktop at steady state | 8 IOPS
Average peak IOPS per virtual desktop during boot storm | 65 IOPS (NFS variant); 84 IOPS (FC variant)
Number of datastores to store virtual desktops | 4 for 500 virtual desktops; 8 for 1,000 virtual desktops; 16 for 2,000 virtual desktops
Number of virtual desktops per datastore | 125
Disk and RAID type for datastores | RAID 5, 300 GB, 15k rpm, 3.5-inch SAS disks
Disk and RAID type for CIFS shares to host roaming user profiles and home directories (optional for user data) | RAID 6, 2 TB, 7,200 rpm, 3.5-inch NL-SAS disks

Backup environment configuration guidelines

Overview

This section provides guidelines to set up the backup and recovery environment for this VSPEX solution.

Backup characteristics

Table 9 shows how the backup environment profiles of the three stacks in this VSPEX solution were sized.

Table 9. Backup profile characteristics

Profile characteristic | Value
Number of virtual machines | 500 for 500 virtual desktops; 1,000 for 1,000 virtual desktops; 2,000 for 2,000 virtual desktops
User data | 5 TB for 500 virtual desktops; 10 TB for 1,000 virtual desktops; 20 TB for 2,000 virtual desktops (Note: 10.0 GB per desktop)
Daily change rate for the applications: user data | 2%
Retention per data type | 30 daily; 4 weekly; 1 monthly

Backup layout

Avamar provides various deployment options depending on the specific use case and the recovery requirements. In this case, the solution is deployed with an Avamar Data Store. This enables the unstructured user data to be backed up directly to the Avamar system for simple file-level recovery. This backup solution unifies the backup process with industry-leading deduplication backup software and systems, and achieves high levels of performance and efficiency.
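As a quick sanity check of these backup characteristics, the arithmetic can be sketched in a few lines of Python. This is an illustration only; the 10 GB per desktop and 2 percent daily change rate come directly from Table 9.

# a minimal sketch of the backup profile math in Table 9
GB_PER_DESKTOP = 10.0       # user data per desktop (Table 9 note)
DAILY_CHANGE_RATE = 0.02    # daily change rate for user data (Table 9)

for desktops in (500, 1000, 2000):
    user_data_gb = desktops * GB_PER_DESKTOP
    daily_change_gb = user_data_gb * DAILY_CHANGE_RATE
    print(f"{desktops} desktops: {user_data_gb / 1000:.0f} TB user data, "
          f"~{daily_change_gb:.0f} GB changed per day")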

Sizing guidelines

The following sections define the reference workload used to size and implement the VSPEX architectures discussed in this document. They provide guidance on correlating that reference workload with actual customer workloads, and on how the correlation may change the final server and network configuration.

You can modify the storage definition by adding drives for greater capacity and performance, and by adding features such as FAST Cache for desktops and FAST VP for improved user data performance. The disk layouts were created to support the appropriate number of virtual desktops at the defined performance level. Decreasing the number of recommended drives or stepping down to a less capable array type can result in lower IOPS per desktop and a reduced user experience caused by higher response times.

Reference workload

Defining the reference workload

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines that have been validated by EMC. In practice, each virtual machine has its own set of requirements, which rarely fit a pre-defined idea of what a virtual machine should be. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

To simplify the discussion, we have defined a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can extrapolate which reference architecture to choose.



For the VSPEX End-User Computing solution, the reference workload is defined as a single virtual desktop. Table 10 shows the characteristics of the reference virtual machine.

Table 10. Virtual desktop characteristics

Characteristic | Value
Virtual desktop operating system | Microsoft Windows 7 Enterprise Edition (32-bit) SP1
Virtual processors per virtual desktop | 1
RAM per virtual desktop | 2 GB
Available storage capacity per virtual desktop | 4 GB (VMDK and VSwap)
Average IOPS per virtual desktop at steady state | 8
Average peak IOPS per virtual desktop during boot storm | 65 IOPS (NFS variant); 84 IOPS (FC variant)

This desktop definition is based on user data that resides on shared storage. The I/O profile is defined by using a test framework that runs all desktops concurrently, with a steady load generated by the constant use of office-based applications like browsers, office productivity software, and other standard task worker utilities.

Applying the reference workload

You may need to consider other factors, in addition to the supported desktop numbers (500, 1,000, and 2,000), when deciding which end-user computing solution to deploy.

Concurrency

The workloads used to validate VSPEX solutions assume that all desktop users will be active at all times. In other words, the 1,000-desktop architecture was tested with 1,000 desktops, all generating workload in parallel, all booted at the same time, and so on. If your customer expects to have 1,200 users, but only 50 percent of them will be logged on at any given time because of time zone differences or alternate shifts, the 600 active users out of the total 1,200 users can be supported by the 1,000-desktop architecture.

Heavier desktop workloads

The workload defined in Table 10 and used to test these VSPEX end-user computing configurations is considered a typical office worker load. However, some customers may think that their users have a more active profile.

If a company has 800 users, and because of custom corporate applications each user generates 12 IOPS (compared with the 8 IOPS used in the VSPEX workload), it will need 9,600 IOPS (800 users * 12 IOPS per desktop). The 1,000-desktop configuration would be underpowered in this case, because it is rated for 8,000 IOPS (1,000 desktops * 8 IOPS per desktop). This customer should move up to the 2,000-desktop solution.
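This capacity check is easy to script. The following is a minimal sketch, not part of the validated solution; the 8 IOPS per desktop rating and the pool sizes come from this document, and the example input is the 800-user case above.

# a minimal sketch of the IOPS capacity check described above
POOLS = (500, 1000, 2000)       # validated pool sizes
RATED_IOPS_PER_DESKTOP = 8      # each pool is rated at 8 IOPS per desktop

def smallest_sufficient_pool(users, iops_per_desktop):
    """Return the smallest validated pool whose IOPS rating covers the workload."""
    required = users * iops_per_desktop
    for size in POOLS:
        if size * RATED_IOPS_PER_DESKTOP >= required:
            return size, required
    raise ValueError(f"{required} IOPS exceeds the largest validated pool")

print(smallest_sufficient_pool(800, 12))  # -> (2000, 9600); the 1,000 pool is rated for only 8,000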

Implementing the reference architectures

The reference architectures require a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are presented as general requirements that are independent of any particular implementation. This section describes some considerations for implementing the requirements.

Resource types

The reference architectures define the hardware requirements for the solution in terms of four basic types of resources:

CPU resources

Memory resources

Network resources

Storage resources

This section describes the resource types, how they are used in the reference architectures, and key considerations for implementing them in a customer environment.

CPU resources

The architectures define the number of CPU cores that are required, but not a specific type or configuration. It is assumed that new deployments use recent revisions of common processor technologies, and that these will perform as well as, or better than, the systems used to validate the solution.

In any running system, it is important to monitor the utilization of resources and adapt as needed. The reference virtual desktop and required hardware resources in the reference architectures assume that there will be no more than eight virtual CPUs for each physical processor core (8:1 ratio). In most cases, this provides an appropriate level of resources for the hosted virtual desktops. However, this ratio may not be appropriate in all use cases. Monitor the CPU utilization at the hypervisor layer to determine if more resources are required.

Memory resources

Each virtual desktop in the reference architecture is defined as having 2 GB of memory. In a virtual environment, because of budget constraints, it is not uncommon to provision virtual desktops with more memory than the hypervisor physically has. The memory over-commitment technique takes advantage of the fact that each virtual desktop does not fully use the amount of memory allocated to it. Oversubscribing memory to some degree can make business sense. The administrator has the responsibility to proactively monitor the oversubscription rate so that it does not shift the bottleneck away from the server and become a burden to the storage subsystem.

If VMware vSphere runs out of memory for the guest operating systems, paging begins to take place, resulting in extra I/O activity going to the VSwap files. If the storage subsystem is sized correctly, occasional spikes caused by VSwap activity may not cause performance issues, because transient bursts of load can be absorbed. However, if the memory oversubscription rate is so high that the storage subsystem is severely impacted by a continuing overload of VSwap activity, more disks must be added, not because of a capacity requirement but to meet the demand for increased performance. It is then up to the administrator to decide whether it is more cost effective to add more physical memory to the server or to increase the amount of storage. With memory modules being a commodity, it is likely less expensive to choose the former option.

This solution was validated with statically assigned memory and no over-commitment of memory resources. If memory over-commitment is used in a real-world environment, regularly monitor the system memory utilization and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.

Network resources

The reference architectures outline the minimum needs of the system. If additional bandwidth is needed, it is important to add capability at both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports and provide the option of adding ports using EMC FLEX I/O modules.

For reference purposes in the validated environment, EMC assumes that each virtual desktop generates 8 I/Os per second with an average size of 4 KB, so each virtual desktop generates at least 32 KB/s of traffic on the storage network. For an environment rated for 500 virtual desktops, this equates to a minimum of approximately 16 MB/s, which is well within the bounds of modern networks. However, this does not take into account other operations. For example, additional bandwidth is needed for the following (the baseline arithmetic is sketched in the code after this list):

User network traffic

Virtual desktop migration

Administrative and management operations

The requirements for each of these vary depending on how the environment is being used, so it is not practical to provide concrete numbers in this context. However, the network described in the reference architecture for each solution should be sufficient to handle average workloads for the described use cases.
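The baseline bandwidth arithmetic referenced above can be sketched as follows. This is illustrative only; the 8 IOPS and 4 KB figures are the reference workload assumptions, and the result deliberately ignores the additional traffic types just listed.

# a minimal sketch of the minimum storage-network bandwidth estimate
IOPS_PER_DESKTOP = 8
IO_SIZE_KB = 4  # average I/O size from the reference workload

def min_storage_bandwidth_mb(desktops):
    """Baseline storage traffic only; excludes user, migration, and management traffic."""
    return desktops * IOPS_PER_DESKTOP * IO_SIZE_KB / 1000  # KB/s -> MB/s

for desktops in (500, 1000, 2000):
    print(f"{desktops} desktops: ~{min_storage_bandwidth_mb(desktops):.0f} MB/s minimum")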

Regardless of the network traffic requirements, always have at least two physical network connections that are shared for a logical network so that a single link failure does not affect the availability of the system. The network should be designed so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload.

Storage resources

The reference architectures contain layouts for the disks used in the validation of the system. Each layout balances the available storage capacity with the performance capability of the drives. There are a few layers to consider when examining storage sizing. Specifically, the array has a collection of disks that are assigned to a storage pool. From that storage pool, you can provision datastores to the VMware vSphere cluster. Each layer has a specific configuration that is defined for the solution and documented in Chapter 5.

It is generally acceptable to replace drive types with a type that has more capacity and the same performance characteristics or with ones that have higher performance characteristics and the same capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves in order to comply with updated or new drive shelf arrangements.

In other cases where there is a need to deviate from the proposed number and type of drives specified, or the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system.

Implementation summary

The requirements stated in the reference architectures are what EMC considers the minimum set of resources needed to handle the workloads, based on the stated definition of a reference virtual desktop. In any customer implementation, the load of a system varies over time as users interact with it. However, if the customer's virtual desktops differ significantly from the reference definition in a given resource, you may need to add more of that resource to the system.

Quick assessment

An assessment of the customer environment will help ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment.

First, summarize the user types that you plan to migrate into the VSPEX End-User Computing environment. For each group, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual desktops required from the resource pool.

Applying the reference workload provides examples of this process.

Fill out a row in the worksheet for each application, as shown in Table 11.

Table 11. Blank worksheet row

Application | CPU (Virtual CPUs) | Memory (GB) | IOPS | Equivalent Reference Virtual Desktops | Number of Users | Total Reference Desktops
Example User Type – Resource Requirements | | | | | |
Example User Type – Equivalent Reference Desktops | | | | | |

Fill out the resource requirements for the User Type. The row requires inputs on three different resources: CPU, Memory, and IOPS.


CPU requirements

The reference virtual desktop assumes most desktop applications are optimized for a single CPU. If one type of user requires a desktop with multiple virtual CPUs, modify the proposed virtual desktop count to account for the additional resources. For example, if you virtualize 100 desktops, but 20 users require two CPUs instead of one, then consider that your pool needs to provide 120 virtual desktops of capability.

Memory requirements

Memory plays a key role in ensuring application functionality and performance. Therefore, each group of desktops will have different targets for the acceptable amount of available memory. Like the CPU calculation, if a group of users requires additional memory resources, simply adjust the number of desktops you are planning for to accommodate the additional resource requirements.

For example, if you have 200 desktops that will be virtualized but each one needs 4 GB of memory, instead of the 2 GB that is provided in the reference virtual desktop, plan for 400 reference virtual desktops.

Storage performance requirements

The storage performance requirements for desktops are usually the least understood aspect of performance. The reference virtual desktop uses a workload generated by an industry-recognized tool to execute a wide variety of office productivity applications that should be representative of the majority of virtual desktop implementations.

Storage capacity requirements

The storage capacity requirements for a desktop can vary widely depending on the types of applications in use and specific customer policies. The virtual desktops presented in this solution rely on additional shared storage for user profile data and user documents. This requirement is covered as an optional component that can be met with the addition of specific storage hardware from the reference architecture, or with existing file shares in the environment.

Determining equivalent reference virtual desktops

With all of the resources defined, determine an appropriate value for the Equivalent Reference Virtual Desktops row in the worksheet by using the relationships in Table 12. Round all values up to the nearest whole number.

Table 12. Reference virtual desktop resources

Resource | Value for reference virtual desktop | Relationship between requirements and equivalent reference virtual desktops
CPU | 1 | Equivalent reference virtual desktops = resource requirements
Memory | 2 | Equivalent reference virtual desktops = (resource requirements)/2
IOPS | 8 | Equivalent reference virtual desktops = (resource requirements)/8

For example, if a group of 100 users needs the two virtual CPUs and 12 IOPS per desktop described earlier, along with 8 GB of memory, describe them as needing two reference desktops of CPU, four reference desktops of memory, and two reference desktops of IOPS, based on the virtual desktop characteristics in Table 10. These figures go in the Equivalent Reference Virtual Desktops row, as shown in Table 13. Use the maximum value in the row to complete the Equivalent Reference Virtual Desktops column.

Multiply the number of equivalent reference virtual desktops by the number of users to arrive at the total resource needs for that type of user.
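The Table 12 relationships reduce to a few lines of arithmetic. The following minimal sketch, illustrative only, reproduces the heavy-user example that appears in Table 13.

# a minimal sketch of the equivalent-reference-desktop calculation
import math

def equivalent_reference_desktops(vcpus, memory_gb, iops):
    """Apply the Table 12 relationships and keep the maximum, rounded up."""
    return math.ceil(max(vcpus / 1, memory_gb / 2, iops / 8))

per_user = equivalent_reference_desktops(2, 8, 12)  # heavy-user example
print(per_user, per_user * 100)                     # -> 4 400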

Table 13. Example worksheet row

User Type | CPU (Virtual CPUs) | Memory (GB) | IOPS | Equivalent Reference Virtual Desktops | Number of Users | Total Reference Desktops
Heavy Users – Resource Requirements | 2 | 8 | 12 | | |
Heavy Users – Equivalent Reference Desktops | 2 | 4 | 2 | 4 | 100 | 400

After completing the worksheet for each user type to be migrated into the virtual infrastructure, compute the total number of reference virtual desktops that are required in the pool by computing the sum of the “Total” column on the right side of the worksheet, as shown in Table 14.

Table 14. Example applications

User Type | CPU (Virtual CPUs) | Memory (GB) | IOPS | Equivalent Reference Virtual Desktops | Number of Users | Total Reference Desktops
Heavy Users – Resource Requirements | 2 | 8 | 12 | | |
Heavy Users – Equivalent Reference Desktops | 2 | 4 | 2 | 4 | 100 | 400
Moderate Users – Resource Requirements | 2 | 4 | 8 | | |
Moderate Users – Equivalent Reference Desktops | 2 | 2 | 1 | 2 | 100 | 200
Typical Users – Resource Requirements | 1 | 2 | 8 | | |
Typical Users – Equivalent Reference Desktops | 1 | 1 | 1 | 1 | 300 | 300
Total | | | | | | 900

The VSPEX End-User Computing solutions define discrete resource pool sizes. For this solution set, the pool sizes are 500, 1,000, and 2,000. In the case of Table 14, the customer requires 900 virtual desktops of capability from the pool. Therefore, the resource pool of 1,000 virtual desktops provides sufficient resources for the current needs as well as room for growth.
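The final pool selection follows the same pattern; a minimal sketch using the Table 14 total:

# a minimal sketch: choose the smallest validated pool that covers the total
POOL_SIZES = (500, 1000, 2000)
total_reference_desktops = 400 + 200 + 300  # Total column of Table 14

pool = next(size for size in POOL_SIZES if size >= total_reference_desktops)
print(pool)  # -> 1000, leaving 100 reference desktops of headroom for growth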

Fine-tuning hardware resources

In most cases, the recommended hardware for servers and storage can be sized appropriately based on the process described. However, in some cases further customization of available hardware resources may be desired. A complete description of system architecture is beyond the scope of this document; however, additional customization can be done at this point.

Storage resources

In some applications, separating some storage workloads from other workloads may be necessary. The storage layouts in the VSPEX architectures put all of the virtual desktops in a single resource pool. To achieve workload separation, purchase additional disk drives for each group that needs workload isolation, and add them to a dedicated pool.

It is not appropriate to reduce the size of the main storage resource pool in order to support isolation, or to reduce the capability of the pool, without additional guidance beyond this document. The storage layouts presented in this paper are designed to balance many different factors in terms of high availability, performance, and data protection. Changing the components of the pool can have significant and difficult-to-predict impacts on other areas of the system.


Server resources

In the VSPEX End-User Computing solution, it is possible to customize the server hardware resources more effectively. To do this, first total the resource requirements for the server components as shown in Table 15. Note the addition of the “Total CPU Resources” and “Total Memory Resources” columns on the right side of the table.

Table 15. Server resource component totals

User Type | CPU (Virtual CPUs) | Memory (GB) | Number of Users | Total CPU Resources | Total Memory Resources
Heavy Users – Resource Requirements | 2 | 8 | 100 | 200 | 800
Moderate Users – Resource Requirements | 2 | 4 | 100 | 200 | 400
Typical Users – Resource Requirements | 1 | 2 | 300 | 300 | 600
Total | | | | 700 | 1,800

In this example, the target architecture requires 700 virtual CPUs and 1,800 GB of memory. With the stated assumptions of eight desktops per physical processor core, and no memory over-provisioning, this translates to 88 physical processor cores and 1,800 GB of memory. In contrast, the 1,000-virtual-desktop resource pool documented in the reference architecture calls for 2,000 GB of memory and at least 125 physical processor cores. In this environment, the solution can be implemented effectively with fewer server resources.

Note Keep high availability requirements in mind when customizing the resource pool hardware.
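The worksheet totals and the core count above can be reproduced with a short sketch. This is illustrative only; the 8:1 vCPU-to-core ratio is the stated assumption, and rounding up covers the fractional core.

# a minimal sketch of the server resource totals from Table 15
import math

VCPUS_PER_CORE = 8  # stated assumption: eight virtual CPUs per physical core

user_types = (
    # (vCPUs per desktop, memory GB per desktop, number of users)
    (2, 8, 100),  # heavy users
    (2, 4, 100),  # moderate users
    (1, 2, 300),  # typical users
)

total_vcpus = sum(vcpus * users for vcpus, _, users in user_types)
total_memory_gb = sum(mem * users for _, mem, users in user_types)
cores_needed = math.ceil(total_vcpus / VCPUS_PER_CORE)

print(total_vcpus, total_memory_gb, cores_needed)  # -> 700 1800 88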

Table 16 is a blank worksheet.


Table 16. Blank customer worksheet

User Type | CPU (Virtual CPUs) | Memory (GB) | IOPS | Equivalent Reference Virtual Desktops | Number of Users | Total Reference Desktops
____ – Resource Requirements | | | | | |
____ – Equivalent Reference Desktops | | | | | |
____ – Resource Requirements | | | | | |
____ – Equivalent Reference Desktops | | | | | |
____ – Resource Requirements | | | | | |
____ – Equivalent Reference Desktops | | | | | |
____ – Resource Requirements | | | | | |
____ – Equivalent Reference Desktops | | | | | |
____ – Resource Requirements | | | | | |
____ – Equivalent Reference Desktops | | | | | |
Total | | | | | |


Chapter 5 VSPEX Configuration Guidelines

This chapter presents the following topics:

Overview

Pre-deployment tasks

Customer configuration data

Prepare switches, connect network, and configure switches

Prepare and configure storage array

Install and configure VMware vSphere hosts

Install and configure SQL Server database

Install and configure VMware vCenter Server

Install and configure XenDesktop controller

Summary


Overview

Table 17 describes the stages of the solution deployment process. When the deployment is completed, the VSPEX infrastructure will be ready for integration with the existing customer network and server infrastructure.

Table 17. Deployment process overview

Stage | Description | Reference
1 | Verify prerequisites. | Pre-deployment tasks
2 | Obtain the deployment tools. | Pre-deployment tasks
3 | Gather customer configuration data. | Pre-deployment tasks
4 | Rack and cable the components. | Vendor's documentation
5 | Configure the switches and networks; connect to the customer network. | Prepare switches, connect network, and configure switches
6 | Install and configure the VNX. | Prepare and configure storage array
7 | Configure virtual machine datastores. | Prepare and configure storage array
8 | Install and configure the servers. | Install and configure VMware vSphere hosts
9 | Set up SQL Server (used by VMware vCenter and XenDesktop). | Install and configure SQL Server database
10 | Install and configure vCenter and virtual machine networking. | Install and configure VMware vCenter Server
11 | Set up the XenDesktop controller. | Install and configure XenDesktop controller
12 | Test the installation. | Validating the Solution

Pre-deployment tasks

Overview

Pre-deployment tasks include procedures that do not directly relate to environment installation and configuration, but whose results are needed at the time of installation. Examples of pre-deployment tasks include collecting hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks, shown in Table 18, before the customer visit to decrease the time required onsite.


Table 18. Tasks for pre-deployment

Task | Description | Reference
Gather documents | Gather the related documents listed in the references. These are used throughout this document to provide detail on setup procedures and deployment best practices for the various components of the solution. | EMC documentation; other documentation
Gather tools | Gather the required and optional tools for the deployment. Use Table 19 to confirm that all equipment, software, and appropriate licenses are available before the deployment process. | Table 19
Gather data | Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information on the customer configuration data worksheet for reference during the deployment process. Complete the VNX Block Configuration Worksheet (FC variant) or the VNX File and Unified Worksheet (NFS variant), available on the EMC Online Support website, to provide the most comprehensive array-specific information. | Appendix B

Deployment prerequisites

Table 19 itemizes the hardware, software, and license requirements for the solution. For additional information, refer to the hardware and software tables in this guide.

Table 19. Deployment prerequisites checklist

Requirement | Description | Reference
Hardware | Physical servers to host virtual desktops: sufficient physical server capacity to host desktops |
Hardware | VMware vSphere 5.1 servers to host virtual infrastructure servers (Note: this requirement may be covered by existing infrastructure) |
Hardware | Networking: switch port capacity and capabilities as required by the end-user computing environment |
Hardware | EMC VNX: multiprotocol storage array with the required disk layout |
Software | VMware ESXi 5.1 installation media |
Software | VMware vCenter Server 5.1 installation media |
Software | Citrix XenDesktop 5.6 installation media |
Software | EMC VSI for VMware vSphere: Unified Storage Management | EMC Online Support
Software | EMC VSI for VMware vSphere: Storage Viewer | EMC Online Support
Software | Microsoft Windows Server 2008 R2 installation media (suggested OS for VMware vCenter and Citrix Desktop Controller) |
Software | Microsoft Windows Server 2012 installation media (AD/DHCP/DNS) |
Software | Microsoft Windows 7 SP1 installation media |
Software | Microsoft SQL Server 2008 or newer installation media (Note: this requirement may be covered by existing infrastructure) |
Software, FC variant only | EMC PowerPath Viewer | EMC Online Support
Software, FC variant only | EMC PowerPath Virtual Edition | EMC Online Support
Software, NFS variant only | EMC vStorage API for Array Integration plug-in | EMC Online Support
Licenses | VMware vCenter 5.1 license key |
Licenses | VMware vSphere 5.1 Desktop license keys |
Licenses | Citrix XenDesktop 5.6 license files |
Licenses | Microsoft Windows Server 2008 R2 Standard (or higher) license keys (Note: this requirement may be covered by an existing Microsoft Key Management Server (KMS)) |
Licenses | Microsoft Windows Server 2012 Standard (or higher) license keys (Note: this requirement may be covered by an existing Microsoft Key Management Server (KMS)) |
Licenses | Microsoft Windows 7 license keys (Note: this requirement may be covered by an existing Microsoft Key Management Server (KMS)) |
Licenses | Microsoft SQL Server license key (Note: this requirement may be covered by existing infrastructure) |
Licenses, FC variant only | EMC PowerPath Virtual Edition license files |

Customer configuration data

To reduce the onsite time, information such as IP addresses and hostnames should be assembled as part of the planning process.

Appendix B provides a table to maintain a record of relevant information. This form can be expanded or contracted as required, and information may be added, modified, and recorded as deployment progresses.

Additionally, complete the VNX Block Configuration Worksheet for Fibre Channel variant or VNX File and Unified Worksheets for NFS variant, available on the EMC Online Support website, to provide the most comprehensive array-specific information.

Prepare switches, connect network, and configure switches

Overview

This section provides the requirements for the network infrastructure to support this architecture. Table 20 offers a summary of the tasks to complete, along with references for further information.


Table 20. Tasks for switch and network configuration

Task | Description | Reference
Configure infrastructure network | Configure storage array and ESXi host infrastructure networking as specified in Solution architecture. |
Configure storage network (FC variant) | Configure Fibre Channel switch ports, zoning for ESXi hosts, and the storage array. | Your vendor's switch configuration guide
Configure VLANs | Configure private and public VLANs as required. | Your vendor's switch configuration guide
Complete network cabling | Connect the switch interconnect ports, the VNX ports, and the ESXi server ports. |

Prepare network switches

For validated levels of performance and high availability, this solution requires the switching capacity provided in this document's Solution hardware table. If existing infrastructure meets the requirements, new hardware installation is not necessary.

Configure infrastructure network

The infrastructure network requires redundant network links for each ESXi host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This configuration is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Figure 26 and Figure 27 show a sample redundant Ethernet infrastructure for this solution. The diagrams illustrate the use of redundant switches and links to ensure that no single points of failure exist in network connectivity.

Prepare network switches

Configure infrastructure network


Figure 26. Sample Ethernet network architecture for 500 and 1,000 virtual desktops


Figure 27. Sample Ethernet network architecture for 2,000 virtual desktops


Configure VLANs

Ensure that you have an adequate number of switch ports for the storage array and ESXi hosts, configured with a minimum of three VLANs for:

Virtual machine networking, ESXi management, and CIFS traffic (customer-facing networks, which may be separated if desired)

NFS networking (private network)

vMotion (private network)

Complete network cabling

Ensure that all solution servers, storage arrays, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure that there is complete connection to the existing customer network.

Note At this point, the new equipment is being connected to the existing customer network. Take care to ensure that unforeseen interactions do not cause service issues on the customer network.

Prepare and configure storage array

Overview

This section describes how to configure the VNX storage array. In this solution, the VNX series provides Network File System (NFS) or Fibre Channel SAN-connected block storage for VMware hosts.

VNX configuration

Table 21 shows the tasks for the storage configuration.


Table 21. Tasks for storage configuration

Task | Description | Reference
Set up initial VNX configuration | Configure the IP address information and other key parameters on the VNX. | VNX5300 Unified Installation Guide; VNX5500 Unified Installation Guide; VNX File and Unified Worksheet; Unisphere System Getting Started Guide; your vendor's switch configuration guide
Provision storage for VMFS datastores (FC only) | Create FC LUNs that will be presented to the ESXi servers as VMFS datastores hosting the virtual desktops. |
Provision storage for NFS datastores (NFS only) | Create NFS file systems that will be presented to the ESXi servers as NFS datastores hosting the virtual desktops. |
Provision optional storage for user data | Create CIFS file systems that will be used to store roaming user profiles and home directories. |
Provision optional storage for infrastructure virtual machines | Create optional VMFS/NFS datastores to host SQL Server, domain controller, vCenter Server, and/or XenDesktop controller virtual machines. |

Prepare VNX

The VNX5300 Unified Installation Guide provides instructions on assembly, racking, cabling, and powering the VNX. For 2,000 virtual desktops, refer to the VNX5500 Unified Installation Guide instead. There are no specific setup steps for this solution.

Set up the initial VNX configuration

After completing the initial VNX setup, configure the key information that the storage array needs to communicate with the existing environment. Configure the following items in accordance with your IT datacenter policies and existing infrastructure information:

DNS

NTP

Storage network interfaces

Storage network IP address

CIFS services and Active Directory Domain membership

The reference documents listed in Table 21 provide more information on how to configure the VNX platform. Storage configuration guidelines on page 60 provides more information on the disk layout.


Provision core data storage

Overview

Core data storage is a repository for the virtual desktops' operating system data. It consists of VMFS datastores for the FC variant or NFS datastores for the NFS variant.

Figure 16, Figure 18, and Figure 20 depict the target storage layouts for both the Fibre Channel (FC) and NFS variants of the three solution stacks in this VSPEX solution. The following sections describe the provisioning steps for both variants.

Provision storage for VMFS datastores (FC variant only)

Complete the following steps in the EMC Unisphere interface to configure FC LUNs on VNX that will be used to store virtual desktops:

1. Create a block-based RAID 5 storage pool that consists of ten, twenty, or forty 300 GB SAS drives (ten drives for 500 virtual desktops, twenty for 1,000 virtual desktops, or forty for 2,000 virtual desktops). Enable FAST Cache for the storage pool.

a. Log in to EMC Unisphere.

b. Choose the array that will be used in this solution.

c. Go to Storage -> Storage Configuration -> Storage Pools.

d. Go to the Pools tab.

e. Click Create.

Note Create your hot spare disks at this time. Consult the EMC VNX Unified Installation Guide for additional information.

2. In the block storage pool, create four, eight, or sixteen LUNs of 500 GB each (four LUNs for 500 virtual desktops, eight LUNs for 1,000 virtual desktops, or sixteen LUNs for 2,000 virtual desktops), and present them to the ESXi servers as VMFS datastores. (The scale-point parameters are summarized in the sketch after this procedure.)

a. Go to Storage -> LUNs.

b. Click Create.

c. In the dialog box, choose the pool created in step 1, MAX for User Capacity, and 4, 8, or 16 for Number of LUNs to create. The LUNs are provisioned after this operation.

3. Configure a storage group to allow ESXi servers access to the newly created LUNs.

a. Go to Hosts -> Storage Groups.

b. Create a new storage group.

c. Select LUNs and ESXi hosts to be added in this storage group.
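For quick reference, the scale-point parameters used in steps 1 and 2 can be captured in a small lookup. This sketch is illustrative only; the drive and LUN values are taken from the procedure above.

# drive and LUN parameters for the FC (VMFS) core storage layout, per scale point
FC_CORE_LAYOUT = {
    # desktops: (300 GB SAS drives in pool 1, VMFS LUN count, LUN size in GB)
    500:  (10, 4, 500),
    1000: (20, 8, 500),
    2000: (40, 16, 500),
}

for desktops, (drives, luns, lun_gb) in FC_CORE_LAYOUT.items():
    print(f"{desktops} desktops: {drives} drives, {luns} x {lun_gb} GB VMFS LUNs")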

Provision storage for NFS datastores (NFS variant only)

Complete the following steps in EMC Unisphere to configure NFS file systems on VNX that will be used to store virtual desktops:

1. Create a block-based RAID 5 storage pool that consists of ten, twenty, or forty 300 GB SAS drives (ten drives for 500 virtual desktops, twenty drives for 1,000 virtual desktops, or forty drives for 2,000 virtual desktops). Enable FAST Cache for the storage pool.

a. Log in to EMC Unisphere.

b. Choose the array that will be used in this solution.

c. Go to Storage -> Storage Configuration -> Storage Pools.

d. Go to the Pools tab.

e. Click Create.

Note Create your hot spare disks at this time. Consult the EMC VNX Unified Installation Guide for additional information.

2. Create ten LUNs in the block storage pool, and present them to the Data Mover as dvols in the system-defined NAS pool. Each LUN should be 200 GB for 500 virtual desktops, 400 GB for 1,000 virtual desktops, or 800 GB for 2,000 virtual desktops.

a. Go to Storage -> LUNs.

b. Click Create.

c. In the dialog box, choose the pool created in step 1, MAX for User Capacity, and 10 for Number of LUNs to create.

Note Ten LUNs are created because EMC Performance Engineering recommends creating approximately one LUN for every four drives in the storage pool and creating LUNs in even multiples of ten. Refer to EMC VNX Unified Best Practices for Performance — Applied Best Practices Guide.

d. Go to Hosts -> Storage Groups.

e. Choose filestorage.

f. Click Connect LUNs.

g. In the Available LUNs panel, choose the 10 LUNs you just created.

h. The LUNs immediately appear in the Selected LUNs panel.

i. The Volume Manager automatically detects a new storage pool for file, or you can click Rescan Storage System under Storage Pool for File to scan for it immediately.

j. Do not proceed until the new storage pool for file is present in the GUI.

3. Create four, eight, or sixteen file systems of 500 GB each (four file systems for 500 virtual desktops, eight for 1,000, or sixteen for 2,000), and present them to the ESXi servers as NFS datastores.

a. Go to Storage -> Storage Configuration -> File Systems.

b. Click Create.

c. In the dialog box, choose Create from Storage Pool.

d. Enter the Storage Capacity, for example, 500 GB.

e. Keep all other settings at their default values.


Note To enable an NFS performance fix for VNX file that significantly reduces NFS write latency, the file systems must be mounted on the Data Mover using the Direct Writes mode, as shown in Figure 28. The Set Advanced Options checkbox must be selected to enable the Direct Writes Enabled checkbox.

Figure 28. Set Direct Writes Enabled checkbox

4. Export the file systems using NFS, and give root access to the ESXi servers.

a. Go to Storage -> Shared Folders -> NFS.

b. Click Create.

c. In the dialog window, add the IP addresses, separated by colons, of all ESXi servers in Root Hosts.

5. In Unisphere:

a. Go to Settings -> Data Mover Parameters to make changes to the Data Mover configuration.

b. Click the list menu to the right of Set Parameters and choose All Parameters, as shown in Figure 29.

c. Scroll down to the nthreads parameter as shown in Figure 30.

d. Click Properties to update the setting.


The default number of threads dedicated to serving NFS requests is 384 per Data Mover on the VNX. Because this solution requires up to 2,000 desktop connections, increase the number of active NFS threads to a maximum of 1,024 (for 500 virtual desktops) or 2,048 (for 1,000 and 2,000 virtual desktops) on each Data Mover. A Control Station CLI sketch of these file-side steps follows Figure 30.

Figure 29. View all Data Mover parameters

Figure 30. Set nthreads parameter
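For administrators who script these file-side steps, the following Control Station CLI sketch performs the equivalent operations. It assumes a Data Mover named server_2, a file system named vdesktop_fs1, and example ESXi host addresses; verify the options against the VNX Command Line Interface Reference for File.

    # Mount the file system on the Data Mover with Direct Writes (uncached) enabled
    server_mount server_2 -option uncached vdesktop_fs1 /vdesktop_fs1

    # Export the file system over NFS, granting root access to the ESXi hosts
    server_export server_2 -Protocol nfs -option root=192.168.10.11:192.168.10.12 /vdesktop_fs1

    # Check the current NFS thread count, then raise it for this solution;
    # the new nthreads value takes effect after the Data Mover is rebooted
    server_param server_2 -facility nfs -info nthreads
    server_param server_2 -facility nfs -modify nthreads -value 2048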

FAST Cache configuration

To configure FAST Cache on the storage pool(s) for this solution, complete the following steps:

1. Configure flash drives as FAST Cache.

a. Click Properties (in the dashboard of the Unisphere window) or Manage Cache (in the left-hand pane of the Unisphere window) to open the Storage System Properties dialog box, which is shown in Figure 31.

b. Click the FAST Cache tab to view FAST Cache information.


Figure 31. Storage System Properties dialog box

c. Click Create to open the Create FAST Cache dialog box, which is shown in Figure 32.

d. The RAID Type field is displayed as RAID 1 when the FAST Cache has been created.

e. You can also choose the number of flash drives. The bottom portion of the window shows the flash drives that will be used for creating FAST Cache. You can choose the drives manually by selecting the Manual option. Refer to Storage configuration guidelines to determine the number of flash drives that are used in this solution.

Note If a sufficient number of flash drives are not available, an error message is displayed and FAST Cache cannot be created.

Figure 32. Create FAST Cache dialog box

2. Enable FAST Cache on the storage pool.


If a LUN is created in a storage pool, FAST Cache for that LUN can be configured only at the storage pool level; all the LUNs created in the storage pool have FAST Cache either enabled or disabled. You can configure this setting under the Advanced tab in the Create Storage Pool dialog box shown in Figure 33.

After FAST Cache is installed in the VNX series, it is enabled by default when a storage pool is created.

Figure 33. Advanced tab in the Create Storage Pool dialog box

If the storage pool has already been created, you can use the Advanced tab in the Storage Pool Properties dialog box, as shown in Figure 34, to configure FAST Cache.

Figure 34. Advanced tab in the Storage Pool Properties dialog box

Note The FAST Cache feature on the VNX series array does not cause an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours, during which the performance of the array steadily improves.
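FAST Cache can also be created and enabled from the block CLI. The following is a minimal sketch only; the SP address, disk IDs, and pool name are illustrative assumptions, and the exact switches should be verified against the naviseccli reference for your VNX OE release.

    # Create FAST Cache as a RAID 1 read/write cache from the listed flash drives
    naviseccli -h <SP-A-IP> cache -fast -create -disks 0_0_4 0_0_5 -mode rw -rtype r_1

    # Verify the FAST Cache state
    naviseccli -h <SP-A-IP> cache -fast -info

    # Enable FAST Cache on an existing storage pool
    naviseccli -h <SP-A-IP> storagepool -modify -name "VDI_Pool0" -fastcache on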


Provision optional storage for user data

If storage required for user data (that is, roaming user profiles and home directories) does not already exist in the production environment and the optional user data disk pack has been purchased, complete the following steps in Unisphere to configure two CIFS file systems on VNX:

1. Create a block-based RAID 6 storage pool that consists of eight, sixteen, or twenty-two 2 TB NL-SAS drives (eight drives for 500 virtual desktops, sixteen drives for 1,000 virtual desktops, or twenty-two drives for 2,000 virtual desktops).

Figure 17, Figure 19, and Figure 21 depict the target user data storage layout for the solution.

2. Create ten LUNs in the block storage pool, and present them to the Data Mover as dvols in the system-defined NAS pool. Each LUN should be 1 TB for 500 virtual desktops, 1.5 TB for 1,000 virtual desktops, or 3 TB for 2,000 virtual desktops.

3. Create two file systems from the system-defined NAS pool that contains the new LUNs. Export the file systems as CIFS shares.
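The CIFS shares can likewise be created from the Control Station once a CIFS server exists on the Data Mover. The file system, share, and CIFS server names below are examples only; verify the options against the VNX Command Line Interface Reference for File.

    # Mount the user data file system on the Data Mover
    server_mount server_2 userdata_fs1 /userdata_fs1

    # Export it as a CIFS share from the existing CIFS server (netbios name is an example)
    server_export server_2 -Protocol cifs -name profiles1 -option netbios=CIFS01 /userdata_fs1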

FAST VP configuration (optional)

Optionally, you can configure FAST VP to automate data movement between storage tiers. There are two ways to configure FAST VP.

Configure FAST VP at the pool level.

Click Properties for a specific storage pool to open the Storage Pool Properties dialog box. Figure 35 shows the tiering information for a specific FAST VP enabled pool.

Figure 35. Storage Pool Properties window

The Tier Status section of the window shows FAST VP relocation information specific to the selected pool. Scheduled relocation can be selected at the pool level from the list menu labelled Auto-Tiering, which can be set to either Automatic or Manual.

In the Tier Details section, users can see the exact distribution of their data. Users can also connect to the array-wide Relocation Schedule using the button located in the top-right corner, which presents the Manage Auto-Tiering window as shown in Figure 36.

Figure 36. Manage Auto-Tiering window

From this status window, users can control the Data Relocation Rate. The default rate is set to Medium so as not to significantly affect host I/O.

Note As its name implies, FAST VP is a completely automated tool, and relocations can be scheduled to occur automatically. Scheduling relocations during off-hours is recommended to minimize any potential performance impact.

Configure FAST VP at the LUN level

Some FAST VP properties are managed at the LUN level. Click Properties for a specific LUN. In this dialog box, click the Tiering tab to view tiering information for this single LUN, as shown in Figure 37.


Figure 37. LUN Properties window

The Tier Details section displays the current distribution of slices within the LUN. Tiering policy can be selected at the LUN level from the list menu labelled Tiering Policy.
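Tiering policy can also be inspected and changed per LUN from the block CLI. Treat the switch and policy value names below as assumptions to verify against the naviseccli reference for your release; the LUN number is an example.

    # Report tiering details for pool LUN 6
    naviseccli -h <SP-A-IP> lun -list -l 6 -tiering

    # Change the LUN's tiering policy; other documented values include
    # highestAvailable, lowestAvailable, and noMovement
    naviseccli -h <SP-A-IP> lun -modify -l 6 -tieringPolicy autoTier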

Provision optional storage for infrastructure virtual machines

If storage required for infrastructure virtual machines (that is, SQL Server, domain controller, vCenter Server, and/or XenDesktop controllers) does not already exist in the production environment and the optional disk pack for infrastructure virtual machines has been purchased, configure an NFS file system on VNX to be used as an NFS datastore in which the infrastructure virtual machines reside. Repeat the configuration steps shown in Provision storage for NFS datastores (NFS variant only) to provision the optional storage, while taking into account the smaller number of drives.

Install and configure VMware vSphere hosts

Overview

This section provides information about installation and configuration of ESXi hosts and infrastructure servers required to support the architecture. Table 22 lists the tasks that must be completed.

Table 22. Tasks for server installation

Task Description Reference

Install ESXi Install the ESXi 5.1 hypervisor on the physical servers deployed for the solution.

vSphere Installation and Setup Guide


Task Description Reference

Configure ESXi networking

Configure ESXi networking, including NIC trunking, VMkernel ports, virtual machine port groups, and jumbo frames.

vSphere Networking

Add ESXi hosts to VNX storage groups (FC variant)

Use the Unisphere console to add the ESXi hosts to the storage groups.

Connect VMware datastores

Connect the VMware datastores to the ESXi hosts deployed for the solution.

vSphere Storage Guide

Install ESXi

Upon initial power-up of the servers being used for ESXi, confirm or enable the hardware-assisted CPU virtualization and hardware-assisted MMU virtualization settings in the BIOS of each server. If the servers are equipped with a RAID controller, configuring mirroring on the local disks is recommended.

Boot the ESXi 5.1 installation media and install the hypervisor on each of the servers. ESXi hostnames, IP addresses, and a root password are required for installation. Appendix B provides appropriate values.

Configure ESXi networking

During the installation of VMware ESXi, a standard virtual switch (vSwitch) is created. By default, ESXi chooses only one physical NIC as a virtual switch uplink. To maintain redundancy and meet bandwidth requirements, an additional NIC must be added, either by using the ESXi console or by connecting to the ESXi host from the vSphere Client.

Each VMware ESXi server should have multiple interface cards for each virtual network to ensure redundancy and provide for the use of network load balancing, link aggregation, and network adapter failover.

The VMware ESXi networking configuration, including load balancing, link aggregation, and failover options, is described in vSphere Networking. Refer to the list of documents in Appendix C of this guide for more information.

Choose the appropriate load-balancing option based on what is supported by the network infrastructure.

Create VMkernel ports as required, based on the infrastructure configuration:

VMkernel port for NFS traffic (NFS variant only)

VMkernel port for VMware vMotion

Virtual desktop port groups (used by the virtual desktops to communicate on the network)

vSphere Networking describes the procedure for configuring these settings. Refer to the list of documents in Appendix C of this guide for more information. A command-line sketch of these networking steps follows.
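For reference, the same networking steps can be performed from the ESXi 5.x command line. This is a minimal sketch under assumed names (vSwitch0, vmnic1, vmk1, and example port group names and addresses); adapt it to your design.

    # Add a second uplink to the default vSwitch for redundancy
    esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

    # Create port groups for desktop traffic and for NFS storage (NFS variant)
    esxcli network vswitch standard portgroup add --portgroup-name=VDI-Network --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup add --portgroup-name=NFS --vswitch-name=vSwitch0

    # Create a VMkernel port on the NFS port group and assign a static address
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS
    esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=192.168.20.11 --netmask=255.255.255.0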


Jumbo frames

A jumbo frame is an Ethernet frame with a payload greater than 1,500 bytes and up to approximately 9,000 bytes; the payload size is known as the Maximum Transmission Unit (MTU). The generally accepted maximum size for a jumbo frame is 9,000 bytes. Because processing overhead is proportional to the number of frames, enabling jumbo frames reduces the number of frames to be sent and therefore the processing overhead, which increases network throughput. Jumbo frames must be enabled end-to-end, including on the network switches, ESXi servers, and VNX SPs.

Jumbo frames can be enabled on the ESXi server at two different levels. To enable jumbo frames for all ports on a virtual switch, edit the MTU setting in the properties of the virtual switch from vCenter. To enable jumbo frames on specific VMkernel ports only, edit each VMkernel port under the network properties in vCenter.

To enable jumbo frames on the VNX:

1. Go to Unisphere -> Settings -> Network -> Settings for File.

2. Select the appropriate network interface under the Interfaces tab.

3. Select Properties.

4. Set the MTU size to 9000.

5. Select OK to apply the changes.

Jumbo frames may also need to be enabled on each network switch. Consult your switch configuration guide for instructions.
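The ESXi-side MTU settings can also be applied from the command line. The sketch below assumes vSwitch0 carries the storage traffic and vmk1 is the NFS VMkernel port; the vmkping test confirms that a full 9,000-byte frame passes end-to-end without fragmentation. The target address is an example VNX interface.

    # Set a 9000-byte MTU on the virtual switch and on the NFS VMkernel port
    esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000

    # Validate end-to-end: 8972 bytes of payload plus 28 bytes of ICMP/IP headers equals 9000
    vmkping -d -s 8972 192.168.20.20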

Connect VMware datastores

Connect the datastores configured in Prepare and configure storage array to the appropriate ESXi servers. These include the datastores configured for:

Virtual desktop storage

Infrastructure virtual machine storage (if required)

SQL Server storage (if required)

vSphere Storage Guide provides instructions on how to connect the VMware datastores to the ESXi host. Refer to the list of documents in Appendix C of this guide for more information.
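As one option, the NFS datastores can be mounted from the ESXi command line instead of the vSphere Client. The Data Mover interface address, export path, and datastore name below are example values.

    # Mount an NFS export from the VNX Data Mover as a datastore
    esxcli storage nfs add --host=192.168.20.20 --share=/vdesktop_fs1 --volume-name=vdesktop_ds1

    # Confirm that the datastore is mounted
    esxcli storage nfs list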

The EMC PowerPath/VE (FC variant) and VNX VAAI for NFS (NFS variant) plug-ins must be installed after VMware vCenter has been deployed, as described in Install and configure VMware vCenter Server.

Plan virtual machine memory allocations

Server capacity is required for two purposes in the solution:

To support the new virtualized server infrastructure

To support the required infrastructure services such as authentication/authorization, DNS, and database


For information on minimum requirements for infrastructure services hosting, refer to Table 3 on page 51. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services will not be required.

Memory configuration

Proper sizing and configuration of the solution requires care when configuring server memory. The following section provides general guidance on memory allocation for the virtual machines, factoring in vSphere overhead and the virtual machine configuration. We begin with an overview of how memory is managed in a VMware environment.

ESX/ESXi memory management

Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources such as memory in order to provide resource isolation across multiple virtual machines while avoiding resource exhaustion. In cases where advanced processors (such as Intel processors with EPT support) are deployed, this abstraction takes place within the CPU. Otherwise, this process occurs within the hypervisor itself through a feature known as shadow page tables.

vSphere employs the following memory management techniques:

Allocation of memory resources greater than those physically available to the virtual machine is known as memory overcommitment.

Identical memory pages that are shared across virtual machines are merged through a feature known as transparent page sharing. Duplicate pages are returned to the host free memory pool for reuse.

ESXi stores pages, which otherwise would be swapped out to disk through host swapping, in a compression cache located in the main memory.

Host resource exhaustion can be relieved through a process known as memory ballooning. This process requests free pages be allocated from the virtual machine to the host for reuse.

Hypervisor swapping causes the host to force arbitrary virtual machine pages out to disk.

You can obtain additional information at: http://www.vmware.com/files/pdf/mem_mgmt_perf_vsphere5.pdf


Virtual machine memory concepts

Figure 38 shows parameters for memory settings in the virtual machine.

Figure 38. Virtual machine memory settings

Configured memory — Physical memory allocated to the virtual machine at the time of creation.

Reserved memory — Memory that is guaranteed to the virtual machine.

Touched memory — Memory that is active or in use by the virtual machine.

Swappable — Memory that can be de-allocated from the virtual machine if the host is under memory pressure from other virtual machines through ballooning, compression, or swapping.

Following are the recommended best practices:

Do not disable the default memory reclamation techniques. These are lightweight processes that enable flexibility with minimal impact to workloads.

Intelligently size memory allocation for virtual machines. Over-allocation wastes resources, while under-allocation causes performance impacts that can affect other virtual machines sharing resources. Over-committing can lead to resource exhaustion if the hypervisor cannot procure memory resources. In severe cases when hypervisor swapping is encountered, virtual machine performance likely will be adversely affected. Having performance baselines of your virtual machine workloads assists in this process.

Install and configure SQL Server database

Overview

This section describes how to set up and configure a SQL Server database for the solution. At the end of this section, you will have Microsoft SQL Server installed on a virtual machine, with the databases required by VMware vCenter and XenDesktop configured for use. Table 23 identifies the tasks for the SQL Server database setup.


Table 23. Tasks for SQL Server database setup

Task Description Reference

Create a virtual machine for Microsoft SQL Server

Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements.

http://msdn.microsoft.com

Install Microsoft Windows on the virtual machine

Install Microsoft Windows Server 2008 R2 Standard Edition on the virtual machine created to host SQL Server.

http://technet.microsoft.com

Install Microsoft SQL Server

Install Microsoft SQL Server on the virtual machine designated for that purpose.

http://technet.microsoft.com

Configure database for VMware vCenter

Create the database required for the vCenter Server on the appropriate datastore.

Preparing vCenter Server Databases

Configure database for VMware Update Manager

Create the database required for Update Manager on the appropriate datastore.

Preparing the Update Manager Database

Configure XenDesktop database permissions

Configure the database server with appropriate permissions for the XenDesktop installer.

Database Access and Permissions for XenDesktop 5

Note The customer environment may already contain a SQL Server designated for this role. In that case, refer to Configure database for VMware vCenter.

Create a virtual machine for Microsoft SQL Server

The requirements for processor, memory, and operating system vary for different versions of SQL Server. The virtual machine should be created on one of the ESXi servers designated for infrastructure virtual machines, and it should use the datastore designated for the shared infrastructure.

Install Microsoft Windows on the virtual machine

The SQL Server service must run on Microsoft Windows. Install Windows on the virtual machine and select the appropriate network, time, and authentication settings.

Install SQL Server

Install SQL Server on the virtual machine from the SQL Server installation media. The Microsoft TechNet website provides information on how to install SQL Server.

One of the components in the SQL Server installer is the SQL Server Management Studio (SSMS). You can install this component on the SQL server directly as well as on an administrator’s console. Be sure to install SSMS on at least one system.


In many implementations, you may want to store data files in locations other than the default path. To change the default path, right-click the server object in SSMS and select Database Properties. This action opens a properties interface from which you can change the default data and log directories for new databases created on the server.

Note For high availability, SQL Server can be installed in a Microsoft Failover Cluster or on a virtual machine protected by VMware vSphere High Availability clustering. We recommend that you do not combine these technologies.

Configure database for VMware vCenter

To use VMware vCenter in this solution, you must create a database for the service to use. Preparing vCenter Server Databases provides the requirements and steps for configuring the vCenter Server database correctly. Refer to the list of documents in Appendix C of this guide for more information.

Note Do not use the Microsoft SQL Server Express-based database option for this solution.

It is a best practice to create individual login accounts for each service accessing a database on SQL Server.

Configure database for VMware Update Manager

To use VMware Update Manager in this solution, you must create a database for the service to use. Preparing the Update Manager Database provides the requirements and steps for configuring the Update Manager database correctly. Refer to the list of documents in Appendix C of this guide for more information. It is a best practice to create individual login accounts for each service accessing a database on SQL Server. Consult your database administrator for your organization's policy.
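To illustrate the per-service login practice, the following sqlcmd sketch creates a vCenter database and a dedicated SQL login. The server, database, login, and password are assumptions for illustration only; follow Preparing vCenter Server Databases for the authoritative schema, roles, and permissions.

    # Create the vCenter database and a dedicated SQL login (example values only)
    sqlcmd -S sql01 -E -Q "CREATE DATABASE vcdb"
    sqlcmd -S sql01 -E -Q "CREATE LOGIN vpxuser WITH PASSWORD = 'ChangeMe1!', DEFAULT_DATABASE = vcdb"
    sqlcmd -S sql01 -E -Q "USE vcdb; CREATE USER vpxuser FOR LOGIN vpxuser; EXEC sp_addrolemember 'db_owner', 'vpxuser'"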

Install and configure VMware vCenter Server

Overview

This section provides information on how to configure VMware vCenter. Table 24 describes the tasks that must be completed.

Table 24. Tasks for vCenter configuration

Task Description Reference

Create the vCenter host virtual machine

Create a virtual machine to be used for the VMware vCenter Server.

vSphere Virtual Machine Administration

Install vCenter guest operating system

Install Windows Server 2008 R2 Standard Edition on the vCenter host virtual machine.



Task Description Reference

Update the virtual machine

Install VMware Tools, enable hardware acceleration, and allow remote console access.

vSphere Virtual Machine Administration

Create vCenter ODBC connections

Create the 64-bit vCenter and 32-bit vCenter Update Manager ODBC connections.

vSphere Installation and Setup

Installing and Administering VMware vSphere Update Manager

Install vCenter Server

Install vCenter Server software. vSphere Installation and Setup

Install vCenter Update Manager

Install vCenter Update Manager software.

Installing and Administering VMware vSphere Update Manager

Create a virtual datacenter

Create a virtual datacenter. vCenter Server and Host Management

Apply vSphere license keys

Type the vSphere license keys in the vCenter licensing menu.

vSphere Installation and Setup

Add ESXi Hosts Connect vCenter to ESXi hosts. vCenter Server and Host Management

Configure vSphere clustering

Create a vSphere cluster and move the ESXi hosts into it.

vSphere Resource Management

Perform array ESXi host discovery

Perform ESXi host discovery within the Unisphere console.

Using EMC VNX Storage with VMware vSphere–TechBook

Install the vCenter Update Manager plug-in

Install the vCenter Update Manager plug-in on the administration console.

Installing and Administering VMware vSphere Update Manager

Deploy the VNX VAAI for NFS plug-in (NFS Variant)

Using VMware Update Manager, deploy the VNX VAAI for NFS plug-in to all ESXi hosts.

EMC VNX VAAI NFS Plug-in–Installation HOWTO video available on www.youtube.com

vSphere Storage APIs for Array Integration (VAAI) Plug-in

Installing and Administering VMware vSphere Update Manager

Deploy PowerPath/VE (FC Variant)

Using VMware Update Manager, deploy the PowerPath/VE plug-in to all ESXi hosts.

PowerPath/VE for VMware vSphere Installation and Administration Guide

Install the EMC VNX UEM CLI

Install the EMC VNX UEM CLI on the administration console.

EMC VSI for VMware vSphere: Unified Storage Management— Product Guide


Install the EMC VSI plug-in

Install the EMC Virtual Storage Integration plug-in on the administration console.

EMC VSI for VMware vSphere: Unified Storage Management— Product Guide

Install the EMC PowerPath Viewer (FC Variant)

Install the EMC PowerPath Viewer on the administration console.

PowerPath Viewer Installation and Administration Guide

Create the vCenter host virtual machine

If deploying VMware vCenter Server as a virtual machine on an ESXi server installed as part of this solution, connect directly to an infrastructure ESXi server using the vSphere Client. Create a virtual machine on the ESXi server with the customer's guest operating system configuration, using the infrastructure server datastore presented from the storage array. The memory and processor requirements for the vCenter Server depend on the number of ESXi hosts and virtual machines being managed; the requirements are outlined in the vSphere Installation and Setup Guide. Refer to the list of documents in Appendix C of this guide for more information.

Install vCenter guest operating system

Install the guest operating system on the vCenter host virtual machine. VMware recommends using Windows Server 2008 R2 Standard Edition. Refer to the list of documents in Appendix C of this guide for more information.

Create vCenter ODBC connections

Before installing vCenter Server and vCenter Update Manager, you must create the ODBC connections required for database communication. These ODBC connections use SQL Server authentication. Configure database for VMware vCenter provides SQL login information.

Refer to the list of documents in Appendix C of this guide for more information.

Install vCenter Server

Install vCenter Server using the VMware VIMSetup installation media. Use the customer-provided username, organization, and vCenter license key when installing vCenter.

Apply vSphere license keys

To perform license maintenance, log in to the vCenter Server and select the Administration - Licensing menu from the vSphere Client. Use the vCenter License console to enter the license keys for the ESXi hosts. Subsequently, apply these settings to the ESXi hosts as they are imported into vCenter.

Deploy the VNX VAAI for NFS plug-in (NFS variant)

The VAAI for NFS plug-in enables support for the vSphere 5.1 NFS primitives. These primitives offload specific storage-related tasks from the hypervisor to free resources for other operations. Additional information about the VAAI for NFS plug-in is available in the downloadable vSphere Storage APIs for Array Integration (VAAI) Plug-in. Refer to the list of documents in Appendix C of this guide for more information.

The VAAI for NFS plug-in is installed using vSphere Update Manager. The process for installing the plug-in is demonstrated in the EMC VNX VAAI NFS Plug-in – Installation HOWTO video available on the www.youtube.com website. To enable the plug-in after installation, you must reboot the ESXi server.

Install the EMC VSI Unified Storage Management feature

The VNX storage system can be integrated with VMware vCenter using the Unified Storage Management feature of EMC Virtual Storage Integrator (VSI) for VMware vSphere. Unified Storage Management provides administrators the ability to manage VNX storage tasks from vCenter. After installing the feature on the vSphere console, administrators can use vCenter to:

Create datastores on VNX and mount them on ESXi servers

Extend datastores

Fast or full clone virtual machines

Install and configure XenDesktop controller

Overview

This section provides information on how to set up and configure Citrix XenDesktop controllers for the solution. For a new installation of XenDesktop, Citrix recommends that you complete the tasks in Table 25 in the order shown.

Table 25. Tasks for XenDesktop controller setup

Task Description Reference

Create virtual machines for XenDesktop controllers

Create two virtual machines in vSphere Client. These virtual machines are used as XenDesktop controllers.

Install guest operating system for XenDesktop controllers

Install Windows Server 2008 R2 guest operating system.

Install server-side components of XenDesktop

Install XenDesktop server components on the first controller.

www.citrix.com

Install Desktop Studio Install Desktop Studio to manage XenDesktop deployment remotely.

Configure a site Configure a site in Desktop Studio.

Add a second controller Install additional controller for high availability.

Prepare a master virtual machine

Create a master virtual machine as the base image for the virtual desktops.

Provision virtual desktops Provision desktops using Machine Creation Services (MCS).


Install server-side components of XenDesktop

Install the following server-side components of XenDesktop on the first controller:

Controller — Creates and manages virtual desktops for users

Web interface — Provides users with web access to their virtual desktops

License server — Manages XenDesktop licenses

Desktop Studio — XenDesktop configuration and management console

Desktop Director — XenDesktop daily operations and helpdesk website

Note Citrix supports installation of XenDesktop components only through the procedures described in Citrix documentation.

Configure a site

Start Desktop Studio and configure a site. For site configuration, do the following:

1. License the site and specify which edition of XenDesktop to use.

2. Set up the site database using a designated login credential for SQL Server.

3. Provide information about your virtual infrastructure, including the vCenter SDK path that the controller will use to establish a connection to the VMware infrastructure.

Add a second controller

After you have configured a site, you can add a second controller to provide high availability. The server-side components of XenDesktop required for the second controller are:

Controller

Web Interface

Desktop Studio

Desktop Director

Do not install the license server component on the second controller because it is centrally managed on the first controller.

Install Desktop Studio

Install Desktop Studio on appropriate administrator consoles to manage your XenDesktop deployment remotely.

Prepare master virtual machine

Optimize the master virtual machine to avoid unnecessary background services that generate extraneous I/O operations and adversely affect the overall performance of the storage array.

Complete the following steps to prepare the master virtual machine:

1. Install the Windows 7 guest operating system.

2. Install appropriate integration tools such as VMware Tools.

3. Optimize the operating system settings by referring to the following document: Citrix Windows 7 Optimization Guide for Desktop Virtualization (http://support.citrix.com/servlet/KbServlet/download/25161-102-665153/XD%20-%20Windows%207%20Optimization%20Guide.pdf)


4. Install the Virtual Desktop Agent.

5. Install third-party tools or applications, such as Microsoft Office, relevant to your environment.

Provision virtual desktops

Complete the following steps to deploy virtual desktops using Machine Creation Services (MCS) in Desktop Studio:

1. Create a machine catalog using the master virtual machine as the base image.

MCS allows two types of machine catalogs—pooled and dedicated.

Pooled machines: User customizations for pooled machines are reset when the user logs out.

Dedicated machines: User customizations for dedicated machines are not reset when the user logs out.

2. Add the machines created in the catalog to a desktop group so that the virtual desktops are available to the end users.

Summary

In this chapter, we presented the steps required to deploy and configure the physical and logical components of the VSPEX solution. At this point, you should have a fully functional VSPEX solution. The following chapter covers post-installation and validation activities.


Chapter 6 Validating the Solution

This chapter presents the following topics:

Overview ................................................................................................. 114

Post-install checklist ................................................................................ 114

Deploy and test a single virtual desktop ................................................... 115

Verify the redundancy of the solution components ................................... 115


Overview

This section provides a list of items that should be reviewed after the solution has been configured. The goal of this section is to verify the configuration and functionality of specific aspects of the solution, and to ensure that the configuration supports core availability requirements.

Table 26 describes the tasks that must be completed.

Table 26. Tasks for testing the installation

Task Description Reference

Post-install checklist

Verify that adequate virtual ports exist on each vSphere host virtual switch.

vSphere Networking

Verify that each vSphere host has access to the required datastores and VLANs.

vSphere Storage Guide

vSphere Networking

Verify that the vMotion interfaces are configured correctly on all vSphere hosts.

vSphere Networking

Deploy and test a single virtual desktop

Deploy a single virtual machine using the vSphere interface by utilizing the customization specification.

vCenter Server and Host Management

vSphere Virtual Machine Management

Verify redundancy of the solution components

Perform a reboot of each storage processor in turn, and ensure that LUN connectivity is maintained.

Steps shown below

Disable each of the redundant switches in turn and verify that the vSphere host, virtual machine, and storage array connectivity remains intact.

Reference vendor’s documentation

On a vSphere host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.

vCenter Server and Host Management

Post-install checklist

Prior to deployment into production, verify the following configuration items; they are critical to the solution functionality.


On each vSphere server, verify that the vSwitch that hosts the client VLANs is configured with sufficient ports to accommodate the maximum number of virtual machines it may host.

On each vSphere server used as part of this solution, verify that all required virtual machine port groups are configured and that each server has access to the required VMware datastores.

On each vSphere server used in the solution, verify that an interface is configured correctly for vMotion using the material in the vSphere Networking guide. Refer to the list of documents in Appendix C of this document for more information.

Deploy and test a single virtual desktop

To verify the operation of the solution, deploy a single virtual machine and confirm that the procedure completes as expected. Verify that the virtual machine joins the applicable domain, has access to the expected networks, and is able to log in.

Verify the redundancy of the solution components

To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failure.

Perform a reboot of each VNX Storage Processor in turn and verify that connectivity to VMware datastores is maintained throughout each reboot. Use these steps:

1. Log in to the Control Station with administrator rights.

2. Navigate to /nas/sbin.

3. Reboot SP A with the command ./navicli -h spa rebootsp.

4. During the reboot cycle, check for the presence of datastores on the ESXi hosts.

5. When the cycle completes, reboot SP B: ./navicli -h spb rebootsp.
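One way to watch datastore connectivity from an ESXi host while a storage processor reboots is a simple polling loop in the ESXi shell. This is a sketch only, and the ten-second interval is an arbitrary choice.

    # Poll datastore visibility while an SP reboots
    while true; do
        date
        esxcli storage nfs list          # NFS variant: mounted NFS datastores
        esxcli storage core device list  # FC variant: block devices
        sleep 10
    done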

Perform a failover of each VNX Data Mover in turn and verify that connectivity to VMware datastores is maintained and that connections to CIFS file systems are reestablished. For simplicity, use the following approach for each Data Mover; the reboot can also be performed through the Unisphere interface.

From the Control Station $ prompt, run server_cpu <movername> -reboot, where <movername> is the name of the Data Mover.

To verify that network redundancy features function as expected, disable each of the redundant switching infrastructures in turn. While each of the switching infrastructures is disabled, verify that all the components of the solution maintain connectivity to each other and to any existing client infrastructure as well.

On a vSphere host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.


Appendix A Bills of Materials

This appendix presents the following topics:

Bill of materials for 500 virtual desktops .................................................. 118

Bill of materials for 1,000 virtual desktops ............................................... 119

Bill of materials for 2,000 virtual desktops ............................................... 120


Bill of materials for 500 virtual desktops

Table 27. List of components used in the VSPEX solution for 500 virtual desktops

Component Solution for 500 Virtual Desktops

VMware vSphere servers

CPU 1 x vCPU per virtual machine

8 x vCPUs per physical core

500 x vCPUs

Minimum of 63 Physical Cores

Memory 2 GB RAM per desktop

Minimum of 1 TB RAM

Network – FC option

2 x 4/8 Gb FC HBAs per server

Network – 1Gb option

6 x 1 GbE NICs per server

Note To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure

Fibre Channel 2 x physical switches

2 x 1 GbE ports per vSphere server

4 x 4/8 Gb FC ports for VNX back end (two per SP)

2 x 4/8 Gb FC ports per vSphere server

1Gb network 2 x physical switches

1 x 1 GbE port per Control Station for management

6 x 1 GbE ports per vSphere server

10Gb network 2 x physical switches

1 x 1 GbE port per Control Station for management

2 x 10 GbE ports per data mover for data

Note When choosing the Fibre Channel option for storage, you will still need to choose one of the IP network options to have full connectivity.

EMC next-generation backup

Avamar 1 x Gen4 utility node

1 x Gen4 3.9 TB spare node

3 x Gen4 3.9 TB storage nodes


EMC VNX series storage array

Common EMC VNX5300

2 x Data Movers (active / standby)

15 x 300 GB, 15k rpm 3.5-inch SAS drives – Core Desktops

3 x 100 GB, 3.5-inch flash drives – FAST Cache

9 x 2 TB, 3.5-inch NL-SAS drives (optional) – User Data

FC option 2 x 8 Gb FC ports per Storage Processor

1 Gb network option 4 x 1 Gb IO module for each Data Mover

(each module includes four ports)

10 Gb network option 2 x 10 Gb IO module for each Data Mover

(each module includes two ports)

Bill of materials for 1,000 virtual desktops

Table 28. List of components used in the VSPEX solution for 1,000 virtual desktops

Component Solution for 1,000 Virtual Desktops

VMware vSphere servers

CPU 1 x vCPU per virtual machine

8 x vCPUs per physical core

1,000 x vCPUs

Minimum of 125 Physical Cores

Memory 2 GB RAM per desktop

Minimum of 2 TB RAM

Network – FC option 2 x 4/8 Gb FC HBAs per server

Network – 1Gb option 6 x 1 GbE NICs per server

Network – 10 Gb option 3 x 10 GbE NICs per blade chassis

Note To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure

Fibre Channel 2 x physical switches

2 x 1 GbE ports per vSphere server

4 x 4/8 Gb FC ports for VNX back end (two per SP)

2 x 4/8 Gb FC ports per vSphere server


1 Gb network option 2 x physical switches

1 x 1 GbE port per Control Station for management

6 x 1 GbE ports per vSphere server

2 x 10 GbE ports per Data Mover for data

10 Gb network option 2 x physical switches

1 x 1 GbE port per Control Station for management

3 x 10 GbE ports per blade chassis

2 x 10 GbE ports per Data Mover for data

Note When choosing the Fibre Channel option for storage, you still need to choose one of the IP network options to have full connectivity.

EMC next-generation backup

Avamar 1 x Gen4 utility node

1 x Gen4 3.9 TB spare node

3 x Gen4 3.9 TB storage nodes

EMC VNX series storage array

Common EMC VNX5300

2 x Data Movers (active / standby)

26 x 300 GB, 15k rpm 3.5-inch SAS drives – Core Desktops

3 x 100 GB, 3.5-inch flash drives – FAST Cache

17 x 2 TB, 3.5-inch NL-SAS drives (optional) – User Data

FC option 2 x 8 Gb FC ports per Storage Processor

1 Gb network option 4 x 1 Gb IO module for each Data Mover

(each module includes four ports)

10 Gb network option 2 x 10 Gb IO module for each Data Mover

(each module includes two ports)

Bill of materials for 2,000 virtual desktops

Table 29. List of components used in the VSPEX solution for 2,000 virtual desktops

Component Solution for 2,000 Virtual Desktops

VMware vSphere servers

CPU 1 x vCPU per virtual machine

8 x vCPUs per physical core

2,000 x vCPUs

Minimum of 250 Physical Cores

Memory 2 GB RAM per desktop

Minimum of 4 TB RAM

Network – FC option 2 x 4/8 Gb FC HBAs per server

Network – 1Gb option 6 x 1 GbE NICs per server

Network – 10 Gb option 3 x 10 GbE NICs per blade chassis

Note To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure

Fibre Channel 2 x physical switches

2 x 1 GbE ports per vSphere server

4 x 4/8 Gb FC ports for VNX back end (two per SP)

2 x 4/8 Gb FC ports per vSphere server

1 Gb network option 2 x physical switches

1 x 1 GbE port per Control Station for management

6 x 1 GbE ports per vSphere server

2 x 10 GbE ports per Data Mover for data

10 Gb network option 2 x physical switches

1 x 1 GbE port per Control Station for management

3 x 10 GbE ports per blade chassis

2 x 10 GbE ports per Data Mover for data

Note When choosing the Fibre Channel option for storage, you still need to choose one of the IP network options to have full connectivity.

EMC next-generation backup

Avamar 1 x Gen4 utility node

1 x Gen4 3.9 TB spare node

3 x Gen4 3.9 TB storage nodes

EMC VNX series storage array

Common EMC VNX5500

2 x Data Movers (active / standby)

46 x 300 GB, 15k rpm 3.5-inch SAS drives – Core Desktops

5 x 100 GB, 3.5-inch flash drives – FAST Cache

34 x 2 TB, 3.5-inch NL-SAS drives (optional) – User Data

FC option 2 x 8 Gb FC ports per Storage Processor


1 Gb network option 4 x 1 Gb IO module for each Data Mover

(each module includes four ports)

10 Gb network option 2 x 10 Gb IO module for each Data Mover

(each module includes two ports)


Appendix B Customer Configuration Data Sheet

This appendix presents the following topic:

Customer configuration data sheets ......................................................... 124


Customer configuration data sheets

Before you start the configuration, gather customer-specific network and host configuration information. The following tables provide information on assembling the required network and host address, numbering, and naming information. This worksheet can also be used as a "leave behind" document for future reference.

The VNX File and Unified Worksheet should be cross-referenced to confirm customer information.

Table 30. Common server information

Server Name Purpose Primary IP

Domain Controller

DNS Primary

DNS Secondary

DHCP

NTP

SMTP

SNMP

vCenter Console

XenDesktop Console

SQL Server

Table 31. ESXi server information

Server Name Purpose Primary IP Private Net (storage) Addresses VMkernel IP vMotion IP

ESXi

Host 1

ESXi

Host 2


Table 32. Array information

Array name

Admin account

Management IP

Storage pool name

Datastore name

NFS Server IP

Table 33. Network infrastructure information

Name Purpose IP Subnet Mask Default Gateway

Ethernet Switch 1

Ethernet Switch 2

Table 34. VLAN information

Name Network Purpose VLAN ID Allowed Subnets

Virtual Machine Networking

ESXi Management

NFS Storage Network

vMotion

Table 35. Service accounts

Account Purpose Password (optional, secure appropriately)

Windows Server administrator

root ESXi root

root Array root

Array administrator

vCenter administrator

XenDesktop administrator

SQL Server administrator


Appendix C References

This appendix presents the following topic:

References .............................................................................................. 128


References

The following documents, located on the EMC Online Support website, provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative:

EMC documentation

EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (FC), VMware vSphere 4.1, and Citrix XenDesktop 5 — Proven Solution Guide

EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (NFS), VMware vSphere 4.1, and Citrix XenDesktop 5 — Proven Solution Guide

EMC Performance Optimization for Microsoft Windows XP for End-User Computing — Applied Best Practices

Deploying Microsoft Windows 7 Virtual Desktops with VMware View — Applied Best Practices Guide

EMC VSI for VMware vSphere: Storage Viewer — Product Guide

EMC VSI for VMware vSphere: Unified Storage Management — Product Guide

EMC VNX Unified Best Practices for Performance — Applied Best Practices Guide

VNX FAST Cache: A Detailed Review

Sizing EMC VNX Series for VDI Workload

Reference Architecture: EMC Infrastructure for Citrix XenDesktop 5.6, EMC VNX Series (NFS), VMware vSphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1

Proven Solutions Guide: EMC Infrastructure for Citrix XenDesktop 5.6 — EMC VNX Series (NFS), VMware vSphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1

Reference Architecture: EMC Infrastructure for Citrix XenDesktop 5.5 (PVS) — EMC VNX Series (NFS), Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6

Proven Solution Guide: EMC Infrastructure for Citrix XenDesktop 5.5 (PVS) EMC VNX Series (NFS), Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6

EMC Infrastructure for Citrix XenDesktop 5.5 — EMC VNX Series (NFS), Citrix XenDesktop 5.5, XenApp 6.5, and XenServer 6

Proven Solution Guide: EMC Infrastructure for Citrix XenDesktop 5.5 — EMC VNX Series (NFS), Citrix XenDesktop 5.5, XenApp 6.5, and XenServer 6

Reference Architecture: EMC Infrastructure for VMware View 5.1 — EMC VNX Series (FC), VMware vSphere 5.0, VMware View 5.1, VMware View Storage Accelerator, VMware View Persona Management, and VMware View Composer 3.0


Proven Solutions Guide: EMC Infrastructure for VMware View 5.1 — EMC VNX Series (FC), VMware vSphere 5.0, VMware View 5.1, VMware View Storage Accelerator, VMware View Persona Management, and VMware View Composer 3.0

Reference Architecture: EMC Infrastructure for VMware View 5.1 — EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.1, VMware View Storage Accelerator, VMware View Persona Management, and VMware View Composer 3.0

Proven Solutions Guide: EMC Infrastructure for VMware View 5.1 — EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.1, VMware View Storage Accelerator, VMware View Persona Management, and VMware View Composer 3.0

Other documentation

For Citrix or VMware documentation, refer to the Citrix and VMware websites at www.citrix.com and www.vmware.com.


Appendix D About VSPEX

This appendix presents the following topic:

About VSPEX ........................................................................................... 132


About VSPEX

EMC has joined forces with the industry's leading providers of IT infrastructure to create a complete virtualization solution that accelerates deployment of cloud infrastructure. Built with best-of-breed technologies, VSPEX enables faster deployment, more simplicity, greater choice, higher efficiency, and lower risk. Validation by EMC ensures predictable performance and enables customers to select technology that leverages their existing IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a proven infrastructure for customers looking to gain the simplicity characteristic of truly converged infrastructures, while also gaining more choice in individual stack components.

VSPEX solutions are proven by EMC and packaged and sold exclusively by EMC channel partners. VSPEX provides channel partners more opportunity, a faster sales cycle, and end-to-end enablement. By working even more closely together, EMC and its channel partners can now deliver infrastructure that accelerates the journey to the cloud for even more customers.