Proven Solution Guide
EMC GLOBAL SOLUTIONS
Abstract
This Proven Solution Guide provides a detailed summary of the tests performed to validate an EMC infrastructure for virtual desktops enabled by VMware View™ 4.5, with an EMC® VNX5700™ unified storage platform. This paper focuses on sizing and scalability, and highlights new features introduced in EMC VNX™, VMware vSphere™, and VMware® View. EMC unified storage uses advanced technologies like EMC FAST VP and EMC FAST Cache to optimize performance of the virtual desktop environment, helping to support service-level agreements.
May 2011
EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX, VMWARE vSPHERE 4.1, VMWARE VIEW 4.5, VMWARE VIEW COMPOSER 2.5, AND CISCO UNIFIED COMPUTING SYSTEMS Proven Solution Guide
EMC Infrastructure for Virtual Desktops enabled by EMC VNX, VMware vSphere 4.1, VMware View 4.5, VMware View Composer 2.5, and Cisco Unified Computing Systems
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
VMware, VMware vCenter, VMware View, and VMware vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. Iomega and IomegaWare are registered trademarks or trademarks of Iomega Corporation. All other trademarks used herein are the property of their respective owners.
Part Number H8197
Table of contents
EMC FAST VP ........ 20
EMC FAST Cache ........ 21
Block data compression ........ 21
Cisco Unified Computing System (UCS) ........ 21
Use Case 1: With FAST Cache and no dedicated replica LUN ........ 149
  With the Auto Tiering option enabled ........ 149
  With the Performance Tiering option enabled ........ 150
Use Case 2: With FAST Cache and a dedicated replica LUN ........ 156
Use Case 3: A dedicated replica LUN with no FAST Cache ........ 162
Figure 195. Auto Tiering Login VSI results ........ 149
Figure 196. Performance Tiering Login VSI results ........ 150
Figure 197. LUN IOPS and response times ........ 150
Figure 198. Physical disk IOPS and response time ........ 151
Figure 199. FAST Cache read hit ratio ........ 151
Figure 200. FAST Cache write hit ratio ........ 152
Figure 201. FAST Cache hit ratio ........ 152
Figure 202. Service processor utilization ........ 153
Figure 203. ESX CPU utilization ........ 153
Figure 204. ESX memory utilization ........ 154
Figure 205. ESX disk IOPS and average guest latency ........ 154
Figure 206. ESX disk VAAI statistics ........ 155
Figure 207. Virtual machine disk IOPS and latency ........ 155
Figure 208. Login VSI test results ........ 156
Figure 209. LUN IOPS and response times ........ 156
Figure 210. Replica LUN IOPS and response times ........ 157
Figure 211. FAST Cache read hit ratio ........ 157
Figure 212. FAST Cache write hit ratio ........ 158
Figure 213. FAST Cache hit ratio ........ 158
Figure 214. Service processor utilization ........ 159
Figure 215. ESX server CPU utilization ........ 159
Figure 216. ESX server memory utilization ........ 160
Figure 217. ESX linked clone LUN IOPS and average guest latency ........ 160
Figure 218. ESX linked clone LUN VAAI statistics ........ 161
Figure 219. ESX replica LUN IOPS and average guest latency ........ 161
Figure 220. ESX replica LUN VAAI statistics ........ 162
Figure 221. Login VSI test results ........ 162
Figure 222. Linked clone LUN IOPS and response times ........ 163
Figure 223. Replica LUN IOPS and response times ........ 163
Figure 224. Physical disk IOPS and response times ........ 164
Figure 225. Service processor utilization ........ 164
Chapter 1: Executive Summary
1 Executive Summary

This chapter summarizes the proven solution described in this document and includes the following sections:
• Introduction to the VNX family of unified storage platforms
• Business case
• Solution overview
• Key results and recommendations
Introduction to the VNX family of unified storage platforms

The EMC® VNX™ family delivers industry-leading innovation and enterprise capabilities for file, block, and object storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today’s enterprises.
All of this is available in a choice of systems ranging from affordable entry-level solutions to high-performance, petabyte-capacity configurations servicing the most demanding application requirements. The VNXe™ series is purpose-built for the IT manager in entry-level environments, and the VNX series is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises.
The VNX family includes two platform series:
• The VNX series, delivering leadership performance, efficiency, and simplicity for demanding virtual application environments that includes VNX7500™, VNX5700™, VNX5500™, VNX5300™, and VNX5100™
• The VNXe (entry) series with breakthrough simplicity for small and medium businesses that includes VNXe3300™ and VNXe3100™
Customers can benefit from the new VNX features summarized in Table 1.

Table 1. VNX features

• Next-generation unified storage, optimized for virtualized applications
• Capacity optimization features including compression, deduplication, thin provisioning, and application-centric copies
• High availability, designed to deliver five 9s availability
• Automated tiering with Fully Automated Storage Tiering for Virtual Pools (FAST VP) and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously
• Multiprotocol support for file and block protocols
• Object access through Atmos™ Virtual Edition (Atmos VE)
• Simplified management with EMC Unisphere™ for a single management framework for all NAS, SAN, and replication needs
• Up to three times improvement in performance with the latest Intel multicore CPUs, optimized for Flash

Note: VNXe does not support block compression.
EMC provides a single, unified storage plug-in to view, provision, and manage storage resources from VMware vSphere™ across EMC Symmetrix®, VNX family, CLARiiON®, and Celerra® storage systems, helping users to simplify and speed up VMware storage management tasks.
The VNX family includes five new software suites and three new software packs, making it easier and simpler to attain the maximum overall benefits.
• FAST Suite—Automatically optimizes for the highest system performance and the lowest storage cost simultaneously (not available for the VNXe series or the VNX5100).
• Local Protection Suite—Practices safe data protection and repurposing (not applicable to the VNXe3100 as this functionality is provided at no additional cost as part of the base software).
• Remote Protection Suite—Protects data against localized failures, outages, and disasters.
• Application Protection Suite—Automates application copies and proves compliance.
• Security and Compliance Suite—Keeps data safe from changes, deletions, and malicious activity.
• Total Efficiency Pack—Includes all five software suites (not available for the VNX5100 and VNXe series).
• Total Protection Pack—Includes local, remote, and application protection suites (not applicable to the VNXe3100).
• Total Value Pack—Includes all three protection software suites and the Security and Compliance Suite (available exclusively for the VNX5100 and VNXe3100).
Business case

Customers require a scalable, tiered, and highly available infrastructure on which to deploy their virtual desktop environment. There are several new technologies available to assist them in architecting a virtual desktop solution, but they need to know how to best use these technologies to maximize their investment, support service-level agreements, and reduce their desktop total cost of ownership (TCO).
The purpose of this solution is to build a replica of a common customer virtual desktop infrastructure (VDI) environment and to validate the environment for performance, scalability, and functionality. Customers will realize:
• Increased control and security of their global, mobile desktop environment, typically their most at-risk environment
• Better end-user productivity with a more consistent environment
• Simplified management with the environment contained in the data center
• Better support of service-level agreements and compliance initiatives
• Lower operational and maintenance costs
Solution overview

This solution provides a detailed summary and characterization of the tests performed to validate an EMC infrastructure for virtual desktops enabled by VMware View 4.5 on an EMC VNX series platform. It involves building a 2,000-seat VMware View 4.5 environment on the EMC unified storage platform and integrates the new features of each of these systems to provide a compelling, cost-effective VDI platform.
This solution incorporates the following components and the EMC VNX5700 platform:
• 2,000 Microsoft Windows 7 virtual desktops
• VMware View Composer 2.5-based linked clones
• Storage tiering (SAS and NL-SAS)
• EMC FAST Cache
• EMC FAST VP
• Sizing and layout of the 2,000-seat VMware View 4.5 environment
• Multipathing and load balancing by EMC PowerPath®/VE
• User data on the CIFS share
• Redundant View Connection Manager
Key results and recommendations

VMware View 4.5 virtualization technology meets user and IT needs, providing compelling advantages compared to traditional physical desktops and terminal services.
EMC VNX5700 brings flexibility to multiprotocol environments. With EMC unified storage, you can connect to multiple storage networks using NAS, iSCSI, and Fibre Channel SAN. EMC unified storage uses advanced technologies like EMC FAST VP and EMC FAST Cache to optimize performance for the virtual desktop environment. EMC unified storage supports vStorage APIs for Array Integration (VAAI), which were introduced in VMware vSphere 4.1. VAAI enables hosts to support more virtual machines per LUN and allows quicker virtual desktop provisioning. The zero-page recognition and transparent page sharing features of vSphere 4.1 save memory and therefore allow you to host more virtual desktops per host.
Our team found the following key results during the testing of this solution:
• By using FAST Cache and VAAI, the time to concurrently boot all 2,000 desktops to a usable state was reduced by 25 percent.
• By using a VAAI-enabled storage platform, we were able to store up to 512 virtual machines per LUN, compared to 64 virtual machines per LUN without VAAI.
• With VMware transparent page sharing, we observed memory savings of up to 92 GB on a host with 96 GB of RAM, with less than 2 percent of memory swapping to a FAST Cache-enabled LUN.
• Using Flash as FAST Cache for the read and write I/O operations reduced the number of spindles needed to support the required IOPS.
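The last point can be illustrated with a hedged, back-of-the-envelope sizing sketch. All of the figures below (per-desktop IOPS, per-spindle IOPS, and the cache hit ratio) are illustrative assumptions, not measured results from this solution:

```python
# Hedged sizing sketch: how a Flash cache layer reduces spindle count.
# Every constant here is an illustrative assumption, not a measured value.
import math

DESKTOPS = 2000
IOPS_PER_DESKTOP = 10      # assumed steady-state IOPS per Windows 7 desktop
SAS_15K_IOPS = 180         # assumed per-spindle IOPS for a 15k rpm SAS drive
CACHE_HIT_RATIO = 0.70     # assumed fraction of I/O absorbed by FAST Cache

total_iops = DESKTOPS * IOPS_PER_DESKTOP

# Spindles needed if every I/O hits rotating disk:
spindles_no_cache = math.ceil(total_iops / SAS_15K_IOPS)

# Spindles needed when the cache absorbs most of the I/O:
spindles_with_cache = math.ceil(total_iops * (1 - CACHE_HIT_RATIO) / SAS_15K_IOPS)

print(spindles_no_cache)    # 112
print(spindles_with_cache)  # 34
```

Under these assumed numbers the back end shrinks by roughly two thirds; the real reduction depends entirely on the workload's cache hit ratio.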
Chapter 2: Introduction
2 Introduction

This chapter introduces the solution and its components, and includes the following sections:
• Introduction to the EMC VNX series
• Document overview
• Technology overview
• Solution diagram
• Configuration
Introduction to the EMC VNX series

The EMC VNX series delivers uncompromising scalability and flexibility for midtier storage users while providing market-leading simplicity and efficiency to minimize total cost of ownership. Customers can benefit from new VNX features such as:
• Next-generation unified storage, optimized for virtualized applications
• Extended cache using Flash drives with FAST Cache, and automated sub-LUN tiering with FAST VP, which can be optimized for the highest system performance and lowest storage cost simultaneously on both block and file.
• Multiprotocol support for file, block, and object with object access through Atmos Virtual Edition (Atmos VE).
• Simplified management with EMC Unisphere for a single management framework for all NAS, SAN, and replication needs.
• Up to three times improvement in performance with the latest Intel multicore CPUs, optimized for Flash.
• 6 Gb/s SAS back end with the latest drive technologies supported:
  - 3.5-inch 100 GB and 200 GB Flash drives
  - 3.5-inch 300 GB and 600 GB 15k or 10k rpm SAS drives
  - 3.5-inch 2 TB 7.2k rpm NL-SAS drives
  - 2.5-inch 300 GB and 600 GB 10k rpm SAS drives
• Expanded EMC UltraFlex™ I/O connectivity—Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), Common Internet File System (CIFS), Network File System (NFS) including parallel NFS (pNFS), Multi-Path File System (MPFS), and Fibre Channel over Ethernet (FCoE) connectivity for converged networking over Ethernet.
Document overview

This document provides a detailed summary of the tests performed to validate an EMC infrastructure for virtual desktops enabled by VMware View 4.5, with an EMC VNX5700 unified storage platform. It focuses on sizing and scalability, using features introduced in EMC’s VNX series, VMware vSphere 4.1, and VMware View 4.5. EMC unified storage uses advanced technologies like EMC FAST VP and EMC FAST Cache to optimize the performance of a virtual desktop environment, helping to support service-level agreements.
By integrating EMC VNX unified storage and the new features available in EMC’s VNX series and VMware View 4.5, desktop administrators are able to reduce costs by simplifying storage management and increase capacity utilization.
The purpose of this use case is to provide a virtualized solution for virtual desktops that is powered by VMware View 4.5, View Composer 2.5, VMware vSphere 4.1, EMC VNX series, EMC VNX FAST VP, VNX FAST Cache, and storage pools.
This solution includes all the attributes required to run this environment, such as hardware and software and the required VMware View configuration.
Information in this document can be used as the basis for a solution build, white paper, best practices document, or training. It can also be used by other EMC organizations (for example, the technical services or sales organizations) as the basis for producing documentation for a technical services or sales kit.
The paper contains the results of testing the EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series, VMware vSphere 4.1, VMware View 4.5, and VMware View Composer 2.5 solution.
Throughout this paper, we assume that you have some familiarity with the concepts and operations related to virtualization technologies and their use in information infrastructure.
This paper discusses multiple EMC products as well as those from other vendors. Some general configuration and operational procedures are outlined. However, for detailed product installation information, refer to the user documentation for those products.
The intended audience of this paper includes:
• Customers
• EMC partners
• Internal EMC personnel
Table 2 provides terms frequently used in this paper.
Table 2. Terminology
Block data compression: EMC unified storage introduces block data compression, which allows customers to save and reclaim space anywhere in their production environment with no restrictions. This capability makes storage even more efficient by compressing data and reclaiming valuable storage capacity. Data compression works as a background task to minimize performance overhead. Block data compression also supports thin LUNs, and automatically migrates thick LUNs to thin during compression, freeing valuable storage capacity.

EMC FAST Cache: Introduced with FLARE® release 30, this feature allows customers to use Flash drives as an expanded cache layer for the array. FAST Cache is an array-wide feature that you can enable for any LUN or storage pool, and it provides both read and write caching.

EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP): EMC has enhanced its FAST technology to work at the sub-LUN level on both file and block data. This feature works at the storage pool level, below the LUN abstraction. It supports scheduled migration of data to different storage tiers based on the performance requirements of individual 1 GB slices in a storage pool.

VMware Transparent Page Sharing: A method by which redundant copies of memory pages are eliminated. Refer to http://kb.vmware.com/kb/1021095 for more information.

Linked clone: A virtual desktop created by VMware View Composer from a writeable snapshot paired with a read-only replica of a master image.

Login VSI: A third-party benchmarking tool, developed by Login Consultants, that simulates a real-world VDI workload by using an AutoIT script and determines the maximum system capacity based on user response time.

Replica: A read-only copy of a master image used to deploy linked clones.

Unisphere: The centralized management interface of the unified storage platforms. Unisphere includes integration with data protection services, provides built-in online access to key support tools, and is fully integrated with VMware.

VDI platform: Virtual desktop infrastructure. The server computing model that enables desktop virtualization, encompassing the hardware and software required to support the virtualized environment.

Virtual desktop: Desktop virtualization (sometimes called client virtualization) separates a personal computer desktop environment from the physical machine using a client-server model of computing. The resulting "virtualized" desktop is stored on a remote central server instead of on the local storage of the client; when users work from their desktop client, all programs, applications, processes, and data are kept and run centrally. This allows users to access their desktops from any capable device, such as a traditional personal computer, notebook computer, smartphone, or thin client.
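The transparent page sharing concept defined above can be sketched by collapsing identical memory pages found by content hash. The function name and page contents below are hypothetical; this models the idea only, not VMware's actual implementation:

```python
# Sketch of transparent page sharing: identical memory pages are found by
# content hash and collapsed to a single shared copy-on-write page.
import hashlib

def shared_page_count(pages: list[bytes]) -> tuple[int, int]:
    """Return (unique_pages, pages_saved) for a list of page contents."""
    unique = {hashlib.sha256(p).hexdigest() for p in pages}
    return len(unique), len(pages) - len(unique)

# Three desktops booted from the same image share their zero and OS pages,
# while each keeps a unique user-data page:
pages = [b"\x00" * 4096] * 3 + [b"os-code"] * 3 + [b"user-a", b"user-b"]
print(shared_page_count(pages))  # (4, 4): 8 pages collapse to 4, saving 4
```

With thousands of near-identical Windows 7 desktops per host, the saved-page fraction can be very large, which is consistent with the memory savings reported in the key results.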
Technology overview
This section identifies and briefly describes the major components of the validated solution environment. The components are:
• EMC VNX platform
• EMC Unisphere
• EMC FAST Cache
• EMC FAST VP
• Block data compression
EMC VNX platform

The EMC VNX platform brings flexibility to multiprotocol environments. With EMC unified storage, you can connect to multiple storage networks using NAS, iSCSI, and Fibre Channel SAN. EMC unified storage leverages advanced technologies like EMC FAST VP and EMC FAST Cache on VNX OE for block to optimize performance for the virtual desktop environment, helping support service-level agreements. EMC unified storage supports vStorage APIs for Array Integration (VAAI), which were introduced in VMware vSphere 4.1. VAAI enables quicker virtual desktop provisioning and start-up.
EMC Unisphere

EMC Unisphere provides a flexible, integrated experience for managing CLARiiON, Celerra, and VNX platforms in a single pane of glass. This new approach to midtier storage management fosters simplicity, flexibility, and automation. Unisphere's unprecedented ease of use is reflected in intuitive task-based controls, customizable dashboards, and single-click access to real-time support tools and online customer communities.
Unisphere features include:
• Task-based navigation and controls that offer an intuitive, context-based approach to configuring storage, creating replicas, monitoring the environment, managing host connections, and accessing the Unisphere support ecosystem.
• A self-service Unisphere support ecosystem, accessible with one click from Unisphere, that provides users with quick access to real-time support tools, including live chat support, software downloads, product documentation, best practices, FAQs, online communities, ordering spares, and submitting service requests.
• Customizable dashboard views and reporting capabilities that enable at-a-glance management by automatically presenting users with valuable information in terms of how they manage their storage. For example, customers can develop custom reports up to 18 times faster with EMC Unisphere.
• Common management provides a single sign-on and integrated experience for managing both block and file features.
Figure 1 provides an example of the Unisphere Summary page that gives administrators a wealth of detailed information on connected storage systems, from LUN pool and tiering summaries to physical capacity and RAID group information.
Figure 1. Unisphere Summary page
EMC FAST VP

With EMC FAST VP, EMC has enhanced its FAST technology to be more automated with sub-LUN tiering and to support file as well as block. This feature works at the storage pool level, below the LUN abstraction. Where earlier versions of FAST operated at the LUN level, FAST VP analyzes data patterns at a far more granular level. As an example, rather than move an 800 GB LUN to enterprise Flash drives, FAST VP identifies and monitors the entire storage pool in 1 GB chunks. If data becomes active, FAST VP automatically moves only these “hot” chunks to a higher tier like Flash. As data cools, FAST VP also correctly identifies which chunks to migrate to lower tiers and proactively moves them. With such granular tiering, it is now possible to reduce storage acquisition costs while at the same time improving performance and response time. In addition, because FAST VP is fully automated and policy-driven, no manual intervention is required, so you save on operating costs as well.
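As a hedged illustration of this sub-LUN tiering idea, the sketch below ranks 1 GB slices of a storage pool by an activity score and keeps only the hottest slices on the Flash tier. The scores, slice counts, and function name are illustrative assumptions, not EMC's actual relocation algorithm:

```python
# Minimal sketch of sub-LUN tiering in the spirit of FAST VP: a pool is
# tracked in 1 GB slices, and only the hottest slices are promoted to Flash.

def plan_tiering(slice_temps: dict[int, float], flash_slices: int) -> set[int]:
    """Return the slice IDs (1 GB each) that should live on the Flash tier."""
    ranked = sorted(slice_temps, key=slice_temps.get, reverse=True)
    return set(ranked[:flash_slices])

# An 8 GB pool region with per-slice activity scores; Flash holds 2 slices.
temps = {0: 5.0, 1: 0.1, 2: 92.0, 3: 0.3, 4: 88.5, 5: 1.2, 6: 0.0, 7: 2.4}
hot = plan_tiering(temps, flash_slices=2)
print(sorted(hot))  # [2, 4] — only the two hottest slices move, not whole LUNs
```

The point of the sketch is the granularity: promoting two 1 GB slices is far cheaper than migrating an entire multi-hundred-gigabyte LUN.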
EMC FAST Cache

VNX FAST Cache, a part of the VNX FAST Suite, enables Flash drives to be used as an expanded cache layer for the array. FAST Cache is an array-wide feature available for both file and block storage. FAST Cache works by examining 64 KB chunks of data in FAST Cache-enabled objects on the array. Frequently accessed data is copied to FAST Cache, and subsequent accesses to that data chunk are serviced by FAST Cache. This allows immediate promotion of very active data to the Flash drives, dramatically improving response times for very active data and reducing the data hot spots that can occur within the LUN.
FAST Cache is an extended read/write cache that can absorb read-heavy activities such as boot storms and antivirus scans, and write-heavy workloads such as operating system patches and application updates.
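The promote-on-access behavior described above can be sketched as follows. The 64 KB tracking granularity comes from the text; the promotion threshold and class name are illustrative assumptions, not the actual FAST Cache policy:

```python
# Sketch of promotion-on-access caching in the spirit of FAST Cache: the array
# watches 64 KB chunks, and chunks touched often enough are copied to Flash so
# that later accesses are serviced from cache instead of rotating disk.
from collections import Counter

CHUNK = 64 * 1024          # tracking granularity in bytes (from the text)
PROMOTE_AFTER = 3          # assumed access count that triggers promotion

class FlashCacheSketch:
    def __init__(self):
        self.accesses = Counter()
        self.cached = set()

    def access(self, offset: int) -> str:
        chunk = offset // CHUNK
        if chunk in self.cached:
            return "cache"            # hot chunk: serviced from Flash
        self.accesses[chunk] += 1
        if self.accesses[chunk] >= PROMOTE_AFTER:
            self.cached.add(chunk)    # copy the now-hot chunk to Flash
        return "disk"

cache = FlashCacheSketch()
for _ in range(4):
    print(cache.access(0))  # disk, disk, disk, cache
```

This is why FAST Cache helps both boot storms (repeated reads of the same OS chunks) and patch storms (repeated writes to the same regions): the hot working set quickly migrates to Flash.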
Block data compression

EMC unified storage introduces block data compression, which allows customers to save and reclaim space anywhere in their production environment with no restrictions. This capability makes storage even more efficient by compressing data and reclaiming valuable storage capacity. Data compression works as a background task to minimize performance overhead. Block data compression also supports thin LUNs, and automatically migrates thick LUNs to thin during compression, freeing valuable storage capacity.
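The space-saving effect of compression can be demonstrated with a general-purpose compressor. The sample data and compressor below are illustrative assumptions; real ratios are workload-dependent and this is not the VNX compression algorithm:

```python
# Illustrative sketch of block compression savings on redundant data.
import zlib

# Highly redundant sample block, standing in for desktop image data:
block = b"virtual desktop image data " * 300
compressed = zlib.compress(block, 6)   # background compression pass

print(len(block))                      # 8100 bytes before compression
print(len(compressed) < len(block))    # True: redundant data shrinks a lot
assert zlib.decompress(compressed) == block   # lossless round trip
```

Linked-clone desktop environments are full of such redundancy, which is why reclaiming capacity through compression pairs naturally with the thin LUNs mentioned above.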
Cisco Unified Computing System (UCS)

Cisco UCS provides a computing platform purpose-built for virtualization, delivering a cohesive system that unites computing, networking, and storage access. Cisco UCS integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers that scale to the demands of virtualized desktop workloads without sacrificing performance or application responsiveness. Cisco UCS Manager enables a stateless computing model employing service profile templates that can provision large pools of computing resources from bare metal in a fraction of the time required by traditional server solutions.
3 Solution Infrastructure

This chapter details the infrastructure of each component and includes the following sections:
• VMware View infrastructure
• VMware View virtual desktop infrastructure
• vSphere 4.1 infrastructure
• Windows infrastructure
VMware View infrastructure
VMware View delivers rich and personalized virtual desktops as a managed service from a virtualization platform built to deliver the entire desktop, including the operating system, applications, and user data. VMware View 4.5 provides centralized automated management of these components with increased control and cost savings. VMware View 4.5 improves business agility while providing a flexible high-performance desktop experience for end users across a variety of network conditions.
VMware View components

To provide a virtual desktop experience, VMware View uses various components, each with its own purpose. The components that make up the View environment are:
• Hypervisor
• VMware View Connection server
• VMware vSphere vCenter Server/View Composer
• VMware View Security server
• VMware View Transfer server
• Supported database server like Microsoft SQL Server
• VMware View Agent
• VMware View client
• VMware View Admin Console
• View PowerCLI
• ThinApp
Figure 3 shows the VMware components described in the following sections.
Figure 3. VMware components
Hypervisor

The hypervisor hosts the virtual desktops. To take advantage of the most features, we recommend VMware vSphere 4.1. vSphere features such as the vStorage APIs for Array Integration (VAAI), memory compression, and ballooning help host more virtual desktops per server.
VMware View Connection server

The VMware View Connection server hosts the LDAP directory that stores the configuration information for VMware View, its desktop pools, and their associated virtual desktops. This data can be replicated to other View Connection Replica servers. The Connection server also acts as a connection broker that maintains desktop assignments. It supports Secure Sockets Layer (SSL) connections to the desktop using Remote Desktop Protocol (RDP) or PC-over-IP (PCoIP), and supports RSA® SecurID® two-factor authentication and smart card authentication.
The VMware vSphere vCenter Server manages virtual machines and vSphere ESX hosts, and provides high availability (HA) and Distributed Resource Scheduler (DRS) clusters. vCenter Server also hosts the customization specification that permits cloned virtual machines to join the Active Directory (AD) domain. The View Composer service, installed on the vCenter Server, provides storage savings by using linked clone technology to share the hard disk of a parent virtual machine, as shown in Figure 4.
The operating system reads from the common read-only replica image and writes to the linked clone. Any unique data created by the virtual desktop is also stored in the linked clone. A logical representation of this relationship is shown in Figure 5.
Figure 5. Linked clone virtual machine
The View Security server is a different type of View Connection server. It supports two network interfaces—one to a private enterprise network and the other to the public network. It is typically used in a DMZ and enables users outside the organization to securely connect to their virtual desktops.
The VMware View Transfer Server is another type of View Connection Server that is required when you use the local mode feature. The Transfer Server can use a CIFS share on VNX file to store the published image. Local mode allows users to work on a virtual desktop while disconnected from the network and synchronize the changes with the View environment later.
A supported database server, such as Microsoft SQL Server, hosts the tables used by View Composer and can optionally store the VMware View events.
VMware View Agent is installed on the virtual desktop template and is deployed to all virtual desktops. It communicates with the View Connection Server and enables options such as USB redirection, virtual printing, the PCoIP server, and smart card over PCoIP.
VMware View Client software is used to connect to the virtual desktops using the connection broker. View Client allows users to print locally from their virtual desktop, and with the proper configuration, users can access USB devices locally.
VMware View Admin Console is a browser-based administration tool for VMware View and is hosted on the View Connection server.
VMware View PowerCLI provides basic management of VMware View using Windows PowerShell. It allows administrators to script basic VMware View operations and can be used along with other PowerShell scripts.
VMware ThinApp is an application virtualization product for enterprise desktop administrators and application owners. It enables rapid deployment of applications to physical and virtual desktops. ThinApp links the application, the ThinApp runtime, the virtual file system, and the virtual registry into a single package. A CIFS share on EMC VNX file can be used as a repository from which to deploy ThinApp packages to the virtual desktops.
This section describes how we designed the solution to host 2,000 users in a VMware View environment on the EMC VNX series.
A Windows 7 desktop is loaded with the required applications and fine-tuned for the virtual machine workload, which includes removing unnecessary scheduled tasks, configuration items, and services. For further details, refer to http://www.vmware.com/files/pdf/VMware-View-OptimizationGuideWindows7-EN.pdf
The configuration of the Windows 7 virtual machine is defined in Table 5.
Table 5. Windows 7 configuration
Device Configuration Notes
Processor 1 vCPU
Memory 1.5 GB
Hard disk 20 GB Replica on Flash, delta on SAS. No FAST Cache. No disposable disk. 64 K allocation unit.
Network interface card 1 vNIC
We ran a medium workload on a single virtual machine using Login VSI and sampled the workload at a two-second interval during the test, as described in Table 6.
The servers used in this solution have two quad-core Intel Xeon 5500 series processors. The average CPU load during the test was 9 percent, so we can run approximately 10 virtual machines per core, or 2 × 4 × 10 = 80 virtual machines per host. The Intel Nehalem architecture is very efficient with hyper-threading and allows 50 to 80 percent more clients, which means a host can run 1.5 × 80 = 120 to 1.8 × 80 = 144 virtual machines.
While using linked clones, up to eight hosts are allowed in a cluster. Leaving one node as failover capacity, the remaining seven hosts can run 144 × 7 = 1,008 virtual machines, so one cluster can host about 1,000 virtual desktops. Without the hyper-threading gain, the cluster can host 80 × 7 = 560 virtual desktops. To host 2,000 virtual desktops, we therefore need two to four clusters, or about 128 to 256 processor cores in total. A physical deployment of 2,000 desktops would require 2,000 processors.
With hyper-threading, we can host 1,000 VMs per cluster; without it, only about 500. Hyper-threading therefore doubles the number of virtual machines per cluster. In this solution, we use hyper-threading with three clusters: one with 1,000 users and the other two with 500 users each. The 500-user clusters have extra headroom for processor-intensive workloads.
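The processor arithmetic above can be condensed into a short Python sketch. This is illustrative only; the constants are the measured values quoted in the text, and the variable names are ours:

```python
# Sketch of the processor sizing described above; constants are the
# measured values from the text, variable names are illustrative.

CORES_PER_HOST = 2 * 4        # two quad-core Intel Xeon 5500 processors
VMS_PER_CORE = 10             # ~9% average CPU per desktop -> ~10 VMs/core
HT_GAIN = (1.5, 1.8)          # hyper-threading yields 50-80% more clients

base_vms_per_host = CORES_PER_HOST * VMS_PER_CORE                  # 80
ht_vms_per_host = [round(g * base_vms_per_host) for g in HT_GAIN]  # [120, 144]

# Linked clones allow up to 8 hosts per cluster; one host is kept as failover.
usable_hosts = 8 - 1
print(ht_vms_per_host[1] * usable_hosts)  # 1008 -> ~1,000 desktops per cluster
print(base_vms_per_host * usable_hosts)   # 560 without hyper-threading
```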
Table 7 provides a summary of virtual machines per core.
Table 7. Virtual machine per core
Case | Complete cluster | Cluster with one node down
1,000-user cluster | 16 virtual machines per core | 18 virtual machines per core
500-user cluster | 8 virtual machines per core | 9 virtual machines per core
One Windows 7 virtual machine is assigned 1.5 GB of memory. Without the VMware vSphere 4.1 memory optimization features, hosting the desktops would require at least 9 × 8 × 1.5 = 108 GB to 18 × 8 × 1.5 = 216 GB per host. VMware vSphere 4.1 provides features such as Transparent Page Sharing, ballooning, recognition of zeroed pages, and memory compression that allow us to overcommit memory and obtain a better consolidation ratio.
During the baseline workload, we observed about 540 MB used in active memory. The memory overhead was 179 MB; the hypervisor used 578 MB on the 48 GB host and 990 MB on the 96 GB host, and the service console memory was 561 MB. Based on this workload, we require approximately 103 GB per host in the worst case (the 1,000-user cluster with one node down), as summarized in Table 8.
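A hedged sketch of this memory arithmetic follows. The exact accounting is not shown in the text, so the formula below (per-VM active memory plus overhead, plus hypervisor and service console memory, converted at 1,000 MB per GB) is an assumption that reproduces Table 8 to within about 1 GB:

```python
# Hedged reconstruction of the per-host memory estimate. The exact
# accounting is not shown in the text, so treat this as an approximation
# that reproduces Table 8 to within about 1 GB.

ACTIVE_MB = 540        # observed active memory per desktop
OVERHEAD_MB = 179      # per-VM memory overhead
HYPERVISOR_MB = 990    # hypervisor footprint on a 96 GB host
CONSOLE_MB = 561       # service console memory

def host_ram_gb(vms_per_host):
    """Approximate RAM in GB needed to run the given number of desktops."""
    total_mb = vms_per_host * (ACTIVE_MB + OVERHEAD_MB) + HYPERVISOR_MB + CONSOLE_MB
    return total_mb / 1000.0

# 1,000-user cluster: 125 VMs/host (complete), 143 VMs/host (one node down)
print(round(host_ram_gb(1000 // 8)))      # ~91 GB, matching Table 8
print(round(host_ram_gb(-(-1000 // 7))))  # ~104 GB; Table 8 lists 103 GB
```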
VMware vSphere uses the above features before it resorts to swapping. FAST Cache on the EMC VNX series provides better response time than swapping memory to SAS disks. Another option is a solid-state drive (SSD) on each host to hold the vswap files, but this may impact vMotion and adds complexity to the environment. It is therefore advantageous to have swap served by FAST Cache on the EMC array.
Table 8 provides a summary of the memory required per host.
Table 8. Required memory per host
Case | RAM/host minimum required (complete cluster) | RAM/host minimum required (one node down) | RAM/host used in this solution
1,000-user cluster | 91 GB | 103 GB | 96 GB
500-user cluster | 46 GB | 52 GB | 48 GB
We used 1,536 GB of RAM in total to host 2,000 virtual desktops. In a typical deployment, desktops are provisioned with 2 GB rather than 1.5 GB each, which would require 4,000 GB in total. Even so, virtual desktops can provide better boot-up times than traditional personal computers.
Based on the workload, we found that one virtual machine requires approximately 18 Mb/s. A 100 Mb/s NIC can therefore support five to six virtual machines, a 1 Gb/s NIC can support 50 to 60 virtual machines, and a 10 Gb/s NIC can support 500 to 600 virtual machines. A Converged Network Adapter (CNA) running at 50 percent bandwidth can support 250 to 300 virtual machines.
Note: This is a rough estimate; always monitor the network load and the percentage of packet drops. If drops are high, check the network configuration and consider adding another NIC. In this solution, we used two CNAs per host to provide fault tolerance. For 2,000 virtual desktops across 24 hosts, we used 2 × 8 × 3 = 48 NICs. A traditional deployment of 2,000 desktops would require 2,000 NICs.
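This NIC arithmetic can be sketched as follows (illustrative only; vms_per_nic is our helper, not an EMC or VMware tool):

```python
# Rough NIC sizing based on the observed ~18 Mb/s per virtual desktop.
# vms_per_nic is an illustrative helper, not an EMC or VMware tool.

MBPS_PER_VM = 18

def vms_per_nic(nic_mbps, usable_fraction=1.0):
    """Number of desktops a NIC can carry at the given usable bandwidth."""
    return int(nic_mbps * usable_fraction / MBPS_PER_VM)

print(vms_per_nic(100))         # 100 Mb/s NIC -> 5
print(vms_per_nic(1000))        # 1 Gb/s NIC   -> 55
print(vms_per_nic(10000))       # 10 Gb/s NIC  -> 555
print(vms_per_nic(10000, 0.5))  # CNA at 50%   -> 277
```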
The number of spindles required to host 2,000 user desktops is calculated from both the IOPS requirement and the capacity needed. Based on the workload, we observed 8.3 IOPS per virtual desktop on average; the maximum and 95th percentile depend on the sampling interval of the data. Sizing on the average IOPS can yield good performance for virtual desktops operating in a steady state, but it leaves insufficient headroom in the array to absorb high I/O peaks. To absorb I/O storms, the design should provision for two to three times the average load. Table 9 details the IOPS requirement and Table 10 describes the disks needed at various RAID levels to meet that IOPS.
Table 9. IOPS requirement and disks needed (multiple RAID scenarios)
Item Value
Number of Windows 7 desktops 2,000
IOPS per Windows 7 virtual machines 9
Total host IOPS (HI) 18,000
% Read 65
% Write 35
Total disk IOPS for RAID 5 (R5IO = HI × %R + HI×4×%W) 36,900
Number of SAS drives alone (R5IO/180) 205
Number of NL-SAS drives alone (R5IO/80) 462
Total disk IOPS for RAID 10 (R10IO = HI×%R + HI×2×%W) 24,300
Number of SAS drives alone (R10IO/180) 135
Number of Flash drives alone (R10IO/2500) 10
Number of NL-SAS drives (R10IO/80) 304
Total Disk IOPS for RAID 6 (R6IO = HI×%R + HI×6×%W) 49,500
Number of SAS drives alone (R6IO/180) 275
Number of NL-SAS drives alone (R6IO/80) 619
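The arithmetic in the table above can be reproduced with a short Python sketch; the write penalties (RAID 5: 4, RAID 10: 2, RAID 6: 6) and per-drive IOPS ratings (SAS 180, NL-SAS 80, Flash 2,500) are the values used in the table:

```python
# Reconstruction of the Table 9 arithmetic. RAID write penalties
# (RAID 5: 4, RAID 10: 2, RAID 6: 6) and per-drive IOPS ratings
# (SAS 180, NL-SAS 80, Flash 2500) are the values used in the text.
import math

HOST_IOPS = 2000 * 9          # 2,000 desktops x 9 IOPS = 18,000 host IOPS
READ_PCT, WRITE_PCT = 65, 35  # observed read/write mix

def disk_iops(write_penalty):
    """Back-end disk IOPS after applying the RAID write penalty."""
    return (HOST_IOPS * READ_PCT + HOST_IOPS * write_penalty * WRITE_PCT) // 100

def drives_needed(write_penalty, drive_iops):
    return math.ceil(disk_iops(write_penalty) / drive_iops)

print(disk_iops(4))            # RAID 5 back-end IOPS -> 36900
print(drives_needed(4, 180))   # SAS drives for RAID 5 -> 205
print(drives_needed(2, 2500))  # Flash drives for RAID 10 -> 10
print(drives_needed(6, 80))    # NL-SAS drives for RAID 6 -> 619
```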
To keep the same IOPS while increasing performance or capacity, four SAS drives can be replaced with nine NL-SAS drives, 125 SAS drives with nine Flash drives, or 125 NL-SAS drives with four Flash drives. For a mix of 68 percent SAS, 1 percent NL-SAS, and 31 percent Flash, we need the disks shown in Table 11 for the various RAID options.
When considering the storage size of virtual desktops, VMware View Composer reduces the capacity required by using linked clone technology. Linked clones are dependent virtual machines linked to a replica virtual machine, which is a thin-provisioned copy of the master virtual machine. We deployed a 20 GB hard disk for the operating system on the master virtual machine. The files occupy 13 GB; therefore, the replica virtual machine disk size is 13 GB.
In the desktop pool, we use a file share on the VNX array to host the user profiles and data. A disposable disk that contains the temporary files and the Windows paging file is used to minimize the expansion of the delta disks, which reduces how often the virtual machines must be refreshed.
The size of a virtual desktop is the size of the delta disk, plus two times the memory size of the virtual machine, plus 2 MB for the internal disk, plus the disposable disk and log size. Assuming 1 GB for the delta disk, one linked clone requires approximately 6 GB.
VMware View 4.5 supports up to 512 linked clones from a single replica. To host 500 virtual desktops, we need 3 TB. The maximum size supported by the current VMFS version is 2 TB minus 512 bytes, so the capacity must be split into two datastores. To host 2,000 virtual desktops, we used eight 2 TB datastores to allow additional space for growth, or 16 TB in total for the linked clones.
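The capacity math above can be sketched as follows. The 2 GB allowance for the internal disk, disposable disk, and logs is our assumption, chosen so the per-clone size matches the roughly 6 GB quoted in the text:

```python
# Sketch of the linked clone capacity math. The 2 GB allowance for the
# internal disk, disposable disk, and logs is an assumption, chosen so
# the per-clone size matches the ~6 GB quoted in the text.
import math

DELTA_GB = 1.0
VM_MEMORY_GB = 1.5
PER_CLONE_GB = DELTA_GB + 2 * VM_MEMORY_GB + 2.0  # ~6 GB per linked clone

VMFS_MAX_TB = 2.0    # VMFS volume limit is 2 TB minus 512 bytes

clones = 2000
total_tb = clones * PER_CLONE_GB / 1000           # 12 TB of clone data
datastores = math.ceil(total_tb / VMFS_MAX_TB)    # 6 minimum; 8 used for growth
print(total_tb, datastores)
```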
If linked clones are not used, each virtual machine requires 25 GB in thick format or 18 GB using thin disks.
This solution uses 200 GB Flash, 300 GB SAS, and 2 TB NL-SAS disks, with usable capacities of approximately 180 GB, 268 GB, and 1.8 TB respectively.
With four Flash drives in RAID 10, 360 GB is dedicated to the replica images. A RAID 5 mix of SAS and NL-SAS yields 37 TB, while a RAID 10 mix of SAS and NL-SAS yields 16 TB. RAID 10 uses fewer spindles for the required IOPS, and linked clone data does not grow much compared to user data.
With a dedicated datastore for the replica, the space that is required on the replica LUN is approximately 39 GB for three virtual desktop pools. Any data accessed three times in a given period normally resides in FAST Cache. To maximize the use of Flash, we elected to use it as FAST Cache. Table 12 describes the drives used in this solution.
Table 12. Spindles used in this solution
Drive | Linked clone (RAID 10) | User data (RAID 6) | Hot spare | FAST Cache
Flash | 0 | 0 | 1 | 14
NL-SAS | 4 | 32 | 2 | NA
SAS | 92 | 48 | 5 | NA
VMware vSphere 4.1 is the market-leading virtualization hypervisor used across thousands of IT environments around the world. VMware vSphere 4.1 can transform or virtualize computer hardware resources, including CPU, RAM, hard disk, and network controller, to create a fully functional virtual machine that runs its own operating system and applications just like a physical computer.
The high-availability features in VMware vSphere 4.1 along with VMware Distributed Resource Scheduler (DRS) and Storage vMotion® enable seamless migration of virtual desktops from one ESX server to another with minimal or no impact to customer usage.
Figure 6 shows the cluster configuration from vCenter Server. The clusters View-Cluster-1 and View-Cluster-2 each host 500 virtual desktops, while View-Cluster-5 hosts 1,000 virtual desktops.
Figure 6. Cluster configuration from vCenter Server
Figure 9 displays the Cisco UCS components described in this section.
Figure 9. Cisco Unified Computing System
Cisco UCS B-Series Blade Servers are designed for compatibility, performance, energy efficiency, large memory footprints, manageability, and unified I/O connectivity:
• Compatibility: Cisco UCS B-Series Blade Servers are designed around multicore Intel Xeon 5500, 5600, 6500, and 7500 Series processors, DDR3 memory, and an I/O bridge. Each blade server's front panel provides direct access for video, two USB ports, and a console connection.
• Performance: Cisco's blade servers use the Intel Xeon next-generation server processors, which deliver intelligent performance, automated energy efficiency, and flexible virtualization. Intel Turbo Boost Technology automatically boosts processing power through increased frequency and use of hyper-threading to deliver high performance when workloads demand and thermal conditions permit. Intel Virtualization Technology provides best-in-class support for virtualized environments, including hardware support for direct connections between virtual machines and physical I/O devices.
• Energy efficiency: Most workloads vary over time. Some workloads are bursty on a moment-by-moment basis, while others have predictable daily, weekly, or monthly cycles. Intel Intelligent Power Technology monitors the CPU utilization and automatically reduces energy consumption by putting processor cores into a low-power state based on real-time workload characteristics.
• Large-memory-footprint support: As each processor generation delivers even more power to applications, the demand for memory capacity to balance CPU performance increases as well. The widespread use of virtualization increases memory demands even further due to the need to run multiple OS instances on the same server. Cisco blade servers with Cisco Extended Memory Technology can support up to 384 GB per blade.
• Manageability: The Cisco Unified Computing System is managed as a cohesive system. Blade servers are designed to be configured and managed by Cisco UCS Manager, which can access and update blade firmware, BIOS settings, and RAID controller settings from the parent Cisco UCS 6100 Series Fabric Interconnect. Environmental parameters are also monitored by Cisco UCS Manager, reducing the number of points of management.
• Unified I/O: Cisco UCS B-Series Blade Servers are designed to support up to two network adapters. This design can reduce the number of adapters, cables, and access-layer switches by as much as half because it eliminates the need for multiple parallel infrastructures for LAN and SAN at the server, chassis, and rack levels. The result is reduced capital and operating expenses through lower administrative overhead and reduced power and cooling requirements.
A core part of the Cisco Unified Computing System, the Cisco UCS 6100 Series Fabric Interconnects provide both network connectivity and management capabilities to all attached blades and chassis. The Cisco UCS 6100 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet and FCoE functions. The interconnects provide the management and communication backbone for the Cisco UCS B-Series Blades and UCS 5100 Series Blade Server Chassis.
The Cisco Nexus 7000 Series offers an end-to-end solution for data center core, aggregation, and high-density end-of-row and top-of-rack server connectivity in a single platform. The Cisco Nexus 7000 Series platform is run by Cisco NX-OS software. It is specifically designed for the most mission-critical place in the network, the data center.
Cisco Nexus 1000V Series Switches are virtual machine access switches that are an intelligent software switch implementation based on IEEE 802.1Q standard for VMware vSphere environments running the Cisco NX-OS Software operating system. Operating inside the VMware ESX hypervisor, the Cisco Nexus 1000V Series supports Cisco VN-Link server virtualization technology to provide:
• Policy-based virtual machine connectivity
• Mobile virtual machine security and network policy
• A nondisruptive operational model for server virtualization and networking teams
The Cisco MDS 9500 Series Multilayer Director layers a broad set of intelligent features onto a high-performance, open-protocol switch fabric. Addressing the stringent requirements of large data center storage environments, it provides high availability, security, scalability, ease of management, and transparent integration of new technologies.
Windows infrastructure

Microsoft Windows provides the infrastructure used to support the virtual desktops and includes the following components:
• Microsoft Active Directory
• Microsoft SQL Server
• DNS Server
• DHCP Server
The Windows domain controller runs the Active Directory service that provides the framework to manage and support the virtual desktop environment. Active Directory provides several functions to help you:
• Manage the identities of users and their information
• Apply group policy objects
• Deploy software and updates
Microsoft SQL Server is a relational database management system (RDBMS). SQL Server 2008 is used to provide the required databases to vCenter Server, View Composer, and View Events as shown in Figure 10.
Figure 10. SQL server databases
DNS is the backbone of Active Directory and provides the primary name resolution mechanism of Windows servers and clients. In this solution, the DNS role is enabled on the domain controller.
The DHCP Server provides the IP address, DNS Server name, gateway address, and other information to the virtual desktops. In this solution, we enabled the DHCP role on the domain controller.
4 Network Design

This chapter describes the network design used in this solution and contains the following sections:
• Considerations
• VNX for file network configuration
• Enterprise switch configuration
• Fibre Channel network configuration
Considerations
EMC recommends using switches that support Gigabit Ethernet (GbE) connections and Link Aggregation Control Protocol (LACP), with switch ports that support copper-based media.
This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.
The IP scheme for the virtual desktop network must be designed so that there are enough IP addresses in one or more subnets for the DHCP Server to assign them to each virtual desktop.
VNX platforms provide network high availability or redundancy by using link aggregation. This is one of the methods used to address the problem of link or switch failure.
Link aggregation is a high-availability feature that enables multiple active Ethernet connections to appear as a single link with a single MAC address and potentially multiple IP addresses.
In this solution, LACP is configured on the VNX to combine eight GbE ports into a single virtual device. If a link fails on one Ethernet port, traffic fails over to another port, and all network traffic is distributed across the active links. Figure 11 shows the LACP configuration of the Data Mover ports on the Ethernet switch.
The VNX5700 includes two Data Movers, which can operate in an active/active or active/passive configuration. In this solution, the Data Movers operate in active/passive mode, where the passive Data Mover serves as a failover device for the active one.
The VNX5700 Data Mover is configured with two UltraFlex™ I/O modules, each with four 1 GbE interfaces, and uses LACP across all Data Mover ports as shown in Figure 12.
Figure 12. VNX5700 Data Mover configuration
The lacp0 device supports virtual machine traffic, home folder access, and external access for roaming profiles. Virtual interface devices were created on the same LACP device for each VLAN that requires access to the Data Mover interfaces, as shown in Figure 13.
Figure 13. Virtual interface devices
All network interfaces in this solution use 10 GbE connections. The server Ethernet ports on the switch are configured as trunk ports and use VLAN tagging at the port group to separate the network traffic between various port groups. Figure 15 shows the vSwitch configuration in vCenter Server.
Figure 15. vSwitch configuration in vCenter Server
Table 13 lists the configured port groups.
Table 13. Port groups
Configured port groups Used to
Virtual machine network Provide external access for administrative virtual machines
Service Console Manage public network administration traffic
Desktop-Network Provide a network connection for virtual desktops and LAN traffic
Enterprise switch configuration

In this solution, we spread the ESX server and VNX Data Mover cabling evenly across two line cards to provide redundancy and load balancing of the network traffic.
The server uplinks to the switch are configured in a port channel group to increase the utilization of server network resources and to provide redundancy. The vSwitches are configured to load balance the network traffic on the originating port ID.
We used the following configuration for one of the server ports in this solution:
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
no ip address
spanning-tree portfast trunk
The network ports for each VNX5700 Data Mover are connected to the Ethernet switch. The ports are configured with LACP, which provides redundancy in case of a NIC or port failure.
Figure 16 shows an example of the switch configuration for one of the Data Mover ports.
Figure 16. Data Mover port switch configuration
Fibre Channel network configuration

Enterprise-class FC switches are used to provide the storage network for this solution. The switches are configured in a SAN A/SAN B configuration to provide fully redundant fabrics.
Each server has a single connection to each fabric to provide load-balancing and failover capabilities. Each storage processor has two links to the SAN fabrics, for a total of four available front-end ports. The zoning is configured so that each server has four available paths to the storage array, as Figure 17 shows from the vCenter interface.
Figure 17. Zoning configuration
This solution uses single initiator and multiple target zoning. Each server initiator is zoned to two storage targets on the array. Figure 18 shows the zone configuration for the SAN B fabric.
Figure 18. Zone configuration for the SAN B fabric
Chapter 5: Installation and Configuration
The VMware View Installation Guide available on the VMware website has detailed procedures to install View Connection Server and View Composer 2.5. There are no special configuration instructions required for this solution.
The ESX Installable and vCenter Server Setup Guide available on the VMware website has detailed procedures to install vCenter Server and ESX and is not covered in further detail in this paper. There are no special configuration instructions required for this solution.
Before deploying the desktop pools, ensure that the following steps from the VMware View Installation Guide have been completed:
• Prepare Active Directory
• Install View Composer 2.5 on vCenter Server
• Install View Connection Server (standard and replica)
• Add a vCenter Server instance to View Manager
One desktop pool is created for each vSphere cluster: two pools host 500 desktops each and the third hosts 1,000 desktops. In this solution, persistent automated desktop pools are used as shown in Figure 19.
Figure 19. Persistent automated desktop pools
To create a persistent automated desktop pool as configured for this solution, complete the following steps:
1. Log in to the VMware View Administration page, which is located at https://server/admin, where “server” is the IP address or DNS name of the View Manager server.
2. Click the Pools link in the left pane.
3. Click Add under the Pools banner.
4. In the Type page, select Automated Pool as shown in Figure 20, and click Next.
6. In the vCenter Server page, select View Composer linked clones and select a vCenter Server that supports View Composer, as shown in Figure 22. Click Next.
Figure 22. Select View Composer linked clones
7. In the Pool Identification page, enter the required information as shown in Figure 23, and click Next. The pool ID is used by the View Administrators and the Display name is what the users will see in the View Client.
10. In the Provisioning Settings page, select a naming pattern for the desktop pool and enter the number of desktops to provision, as shown in Figure 26. Click Next. The {n:fixed=4} token pads the desktop number to four digits. We placed the pool ID at the end of the pattern to easily associate a desktop name with its pool.
Figure 26. Provisioning settings
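As an illustration of the naming token (this is not VMware code, and the pattern name is hypothetical), {n:fixed=4} expands like this:

```python
# Illustration only (not VMware code) of how a View naming pattern with
# the {n:fixed=4} token expands. The pattern "VDI-{n:fixed=4}-P1" is a
# hypothetical example.
import re

def expand(pattern, n):
    """Replace {n:fixed=W} with the desktop number zero-padded to W digits."""
    return re.sub(r"\{n:fixed=(\d+)\}",
                  lambda m: str(n).zfill(int(m.group(1))), pattern)

print(expand("VDI-{n:fixed=4}-P1", 7))    # VDI-0007-P1
print(expand("VDI-{n:fixed=4}-P1", 123))  # VDI-0123-P1
```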
11. In the vCenter Settings page, browse to select a default image, a folder for the virtual machines, the cluster hosting the virtual desktops, the resource pool to hold the desktops, and the data stores that will be used to deploy the desktops as shown in Figure 27, and then click Next.
Figure 27. vCenter settings
12. In the Select Datastores page, select the datastores for the linked clone images, and then click OK. We used Aggressive as the Storage Overcommit option to allow more desktops per thin-provisioned datastore, as shown in Figure 28.
Figure 28. Select the datastores for linked clone images
13. In the Guest Customization page, select the domain and AD container, and then select Use a customization specification (Sysprep). Click Next.
Figure 29. Guest customization
14. In the Ready to Complete page (shown in Figure 30), verify the settings for the pool, and then click Finish to start the deployment of the virtual desktops.
PowerPath/VE 5.4.1 supports ESX 4.1. The EMC PowerPath/VE for VMware vSphere Installation and Administration Guide available on Powerlink® provides the procedure to install and configure PowerPath/VE. There are no special configuration instructions required for this solution.
The PowerPath/VE binaries and support documentation are available on Powerlink. Figure 31 shows that PowerPath is managing the block devices on the ESX host.
Figure 31. PowerPath as the owner for managing the path of block devices
Storage pools in the EMC VNX OE support heterogeneous drive pools. In this solution, we configured a 96-disk RAID 10 pool from 92 SAS disks and four NL-SAS drives. From this storage pool, we created eight thin LUNs, each 2,047 GB in size, as shown in Figure 32.
Figure 32. Thin LUNs created
Create storage pools
For each LUN in the storage pool, the tiering policy is set to Auto Tiering as shown in Figure 33. As data ages and is used infrequently, it is moved to the near-line SAS drives in the pool.
Figure 33. Auto-Tiering
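The relocation behavior described above can be sketched as follows. The slice granularity, temperature values, and tier capacity here are illustrative assumptions, not VNX internals:

```python
# Minimal sketch of the Auto Tiering idea: FAST VP tracks activity per
# slice and relocates the coldest slices down to the near-line SAS tier.
def relocate(slices, nlsas_capacity):
    """Split slices into (hot, cold): the coldest slices, up to
    nlsas_capacity of them, are demoted to the NL-SAS tier."""
    by_temp = sorted(slices, key=lambda s: s["temp"])
    cold = by_temp[:nlsas_capacity]   # least-accessed slices move down
    hot = by_temp[nlsas_capacity:]    # everything else stays on SAS
    return hot, cold

# Hypothetical slices with access "temperatures":
slices = [{"id": i, "temp": t} for i, t in enumerate([90, 5, 60, 2, 40])]
hot, cold = relocate(slices, nlsas_capacity=2)
print([s["id"] for s in cold])   # the two coldest slices: [3, 1]
```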
From the Storage System Properties dialog box, click the FAST Cache tab, click Create, and then select the eligible Flash drives to create the FAST Cache as shown in Figure 35. There are no user-configurable parameters for the FAST Cache.
Figure 35. FAST Cache configuration
FAST Cache is enabled for all LUNs in this solution. The replica images are provisioned across all datastores allocated to the pool; because this data is accessed frequently, it is promoted into FAST Cache.
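The promotion behavior can be sketched as a simple access counter. The three-access threshold is an illustrative assumption about when a chunk is considered hot, not a documented parameter of this configuration:

```python
# Sketch of FAST Cache promotion: a chunk that is accessed repeatedly
# gets copied into the Flash cache. PROMOTE_AFTER is an assumption.
PROMOTE_AFTER = 3

def track(accesses):
    counts, cache = {}, set()
    for chunk in accesses:
        counts[chunk] = counts.get(chunk, 0) + 1
        if counts[chunk] >= PROMOTE_AFTER:
            cache.add(chunk)        # hot chunk now served from Flash
    return cache

# Replica chunks are read by every desktop, so they become hot quickly,
# while each clone's private chunks are touched far less often:
cache = track(["replica", "clone-a", "replica", "clone-b", "replica"])
print(cache)  # {'replica'}
```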
To configure the FAST VP feature for a pool LUN, go to the properties for a pool LUN in Unisphere, click the Tiering tab, and set the tiering policy for the LUN as shown in Figure 36.
Figure 36. Configuring FAST VP
Configure FAST VP
The VNX Home Directory installer is available on the NAS Tools website and the application CD for each VNX OE for file release. You can also download the software from Powerlink.
With this feature, you can create a unique share called “HOME,” redirect data to this path based on specific criteria, and provide the user with exclusive rights to the folder.
After installing the VNX Home Directory feature, use the Microsoft Management Console (MMC) snap-in to configure the feature. Figure 37 shows a sample configuration.
Figure 37. Configure the VNX Home Directory feature
Figure 38 shows how a user Home Directory is automatically created for any user in domain view45 under the Homedirs folder on the View45 file system:
\View45\Homedirs\<user>
For example, when user1 logs in, \\VNXFILE\HOME points to \View45\Homedirs\User1 on the Data Mover; for user2, \\VNXFILE\HOME points to \View45\Homedirs\User2.
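The mapping rule can be expressed as a small function. This is a sketch of the path logic only, using the domain and file-system names from the sample configuration; it is not the VNX implementation:

```python
# Sketch of the Home Directory rule: each user in domain "view45" maps
# to a per-user folder under \Homedirs on the View45 file system.
def home_path(user, domain="view45", filesystem="View45"):
    if domain != "view45":
        raise ValueError("no Home Directory rule for this domain")
    return rf"\{filesystem}\Homedirs\{user}"

print(home_path("User1"))  # \View45\Homedirs\User1
```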
Configure VNX Home Directory
6 Testing and Validation
This chapter compares how the following use cases performed in the boot storm, Login VSI, and antivirus scan test scenarios:
• FAST Cache with no dedicated replica LUN
• FAST Cache and dedicated replica LUN
• No FAST Cache with a dedicated replica LUN
Use case descriptions
In Use Case 1, we created the linked clone desktop pool without a dedicated replica LUN and used 14 Flash drives for the FAST Cache configuration. A replica virtual machine was created on every LUN that hosts linked clones, as shown in Figure 39.
Figure 39. FAST Cache with no dedicated replica LUN
Use Case 1: FAST Cache with no dedicated replica LUN
In Use Case 2, we created one replica virtual machine for every linked clone desktop pool and stored that replica on a different LUN than the linked clones. Figure 40 shows this configuration. We used four Flash drives to host the replica virtual machine and configured FAST Cache to use 10 Flash drives.
Figure 40. FAST Cache with dedicated replica LUNs
Use Case 2: FAST Cache with dedicated replica LUNs
In Use Case 3, we did not use FAST Cache but retained the dedicated replica LUN configuration, and reduced the environment to 1,000 users as shown in Figure 41.
Figure 41. Dedicated replica LUN with no FAST Cache
Use Case 3: A dedicated replica LUN with no FAST Cache
This section describes the boot storm results for each of the three use cases when powering up the desktop pools.
For Use Case 1, the virtual desktops took an average of 1.5 seconds to boot. Figure 42 shows the LUN IOPS and response times. The LUN response time stayed below 2 ms.
Figure 42. LUN IOPS and response times for FAST Cache with no dedicated replica LUN
Figure 47 shows the memory activity from one of the ESX servers. As the virtual machines boot, they consume the free available memory. The amount of swap used is very low compared to the memory reclaimed by Transparent Page Sharing.
Figure 47. ESX memory activity
Figure 48 shows the ESX disk IOPS and response time for one of the LUNs.
Figure 48. ESX physical disk IOPS and guest latency
[Chart data for Figure 47 (ESX memory): host c1b1 counters for Free MBytes, PShare Shared MBytes, Swap Used MBytes, and Total Compressed MBytes]
[Chart data for Figure 48 (ESX physical disk IOPS and guest latency): Reads/sec, Writes/sec, and average guest latency in milliseconds per command for SCSI device naa.6006016007b029003225bc67dc25e011]
Figure 49 shows the number of SCSI reservations avoided by the VAAI Atomic Test and Set (ATS) hardware-assisted locking primitive, along with the zeroing requests offloaded to the array.
Figure 49. ESX VAAI statistics
The graph shows approximately 1,670 ATS requests during the boot and about 125 zeroing requests on this LUN.
Figure 50 shows the LUN IOPS and response times during the virtual desktop boot process. The response times stayed below 4 ms most of the time.
Without FAST Cache, we need more spindles to host 2,000 users. The existing spindles can support up to 1,000 users, so we performed our testing with 1,000 users for this use case.
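The spindle math behind this statement can be sketched as follows. The per-desktop and per-disk IOPS figures are assumptions for illustration, not measured values from this solution:

```python
# Rough spindle sizing: without FAST Cache absorbing I/O, the SAS
# spindles must serve the full desktop workload directly.
import math

IOPS_PER_DESKTOP = 8       # assumed steady-state load per desktop
IOPS_PER_SAS_DISK = 180    # assumed 15k rpm SAS drive capability

def spindles_needed(desktops):
    return math.ceil(desktops * IOPS_PER_DESKTOP / IOPS_PER_SAS_DISK)

print(spindles_needed(1000), spindles_needed(2000))  # 45 89
```

Under these assumed figures, doubling the user count roughly doubles the spindle requirement, which is why the uncached configuration was tested at 1,000 users.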
Figure 61 shows the linked clone LUN IOPS and response times during the boot process.
Figure 61. Linked clone LUN IOPS and response times
We installed the McAfee Enterprise Virus Scan command line utility on all of the virtual desktops in our test environment, and executed the script remotely from a central machine.
Note: Although this is not the preferred way to implement an antivirus scanner in a VDI environment, the purpose of this test is to simulate a traditional customer implementation.
This section describes the antivirus scan results for each of the three use cases. It includes a results summary graph and graphs showing individual results from scanning 100, 200, 300, 500, and 1,000 desktops in each of the three scenarios.
Summary
Figure 71 shows the summary results from antivirus scans of 500, 300, 200, and 100 desktops for the scenario with FAST Cache and no dedicated replica LUN.
Figure 71. Antivirus scan summary with FAST Cache and no dedicated replica LUN
The graphs in this section show the antivirus scan response times for each of the above desktop configurations.
[Chart data for Figure 71: time taken to scan (h:mm:ss) for the 500, 300, 200, and 100 desktop antivirus scans with FAST Cache and no dedicated replica LUN]
Overview
Use Case 1: FAST Cache with no dedicated replica LUN
Summary
Figure 108 shows the summary results from antivirus scans of 500, 300, 200, and 100 desktops for the scenario with FAST Cache and a dedicated replica LUN.
Figure 108. Antivirus scan summary with FAST Cache and a dedicated replica LUN
The graphs in this section show the antivirus scan response times for each of the above desktop configurations.
[Chart data for Figure 108: time taken to scan (h:mm:ss) for the 500, 300, 200, and 100 desktop antivirus scans with FAST Cache and a dedicated replica LUN]
Use Case 2: With FAST Cache and a dedicated replica LUN
Summary
Figure 153 shows the summary results from antivirus scans of 500, 300, 200, and 100 desktops for the scenario with a dedicated replica LUN and no FAST Cache.
Figure 153. Antivirus scan summary with a dedicated replica LUN and no FAST Cache
The graphs in this section show the antivirus scan response times for each of the above desktop configurations.
[Chart data for Figure 153: time taken to scan (h:mm:ss) for the 500, 300, 200, and 100 desktop antivirus scans with a dedicated replica LUN and no FAST Cache]
Use Case 3: A dedicated replica LUN with no FAST Cache
To simulate a real-world user workload, we used the Login Virtual Session Indexer (Login VSI) tool, version 2.1. Login VSI workloads are categorized as light, medium, heavy, or custom. We selected the medium workload for this testing; it has the following characteristics:
• Simulates normal user behavior and typing speeds for a medium workload
• Uses Microsoft Office applications, Internet Explorer, Adobe Acrobat Reader, and zip files
• Includes tasks such as launching applications, typing, minimizing and maximizing windows, printing, reading PDFs, and browsing Flash-based websites
This section describes the Login VSI tests for each of the three use cases.
We tested this use case in the following conditions:
• With the Auto Tiering option enabled
• With the Performance Tiering option enabled
The results of both conditions are shown below.
With the Auto Tiering option enabled
Figure 195 shows the Login VSI test results with the Auto Tiering option.
Figure 195. Auto Tiering Login VSI results
Overview
Use Case 1: FAST Cache with no dedicated replica LUN
Figure 200 shows the FAST Cache write hit ratio during the Login VSI test with FAST Cache enabled and no dedicated replica LUN.
Figure 200. FAST Cache write hit ratio
Figure 201 shows the FAST Cache hit ratio for both read and write activity during the Login VSI test with FAST Cache enabled and no dedicated replica LUN.
7 Conclusion
This chapter summarizes the solution test results and includes the following sections:
• Summary
• Findings
• References
Summary
EMC's VNX platform, along with VMware View, enables customers to host virtual desktops economically while minimizing the risk of data exposure. The solution presented here highlights the design guidelines for hosting 2,000 users on an EMC VNX5700 and uses advanced technologies such as EMC FAST VP and EMC FAST Cache to optimize the performance of the virtual desktop environment.
Findings
The EMC solution team confirmed the following key results during the testing of this solution:
• By using FAST Cache and VAAI, the time to concurrently boot all 2,000 desktops to a usable state is reduced by 25 percent.
• Having the replica datastore use FAST Cache reduces the virus scan time per desktop by almost 50 percent.
• With no dedicated replica LUN and using FAST Cache, the maximum response time of the simulated workload is lower compared to the other two use cases.
• By using a VAAI-enabled storage platform, we can store up to 512 virtual machines per LUN, compared to 64 virtual machines per LUN without VAAI-enabled storage.
• Using Flash as FAST Cache for the read and write I/O operations reduces the number of spindles needed to support the required IOPS.
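The VAAI finding above translates directly into LUN counts. The following sketch illustrates the arithmetic for this solution's 2,000-desktop scale; the per-LUN limits are those stated in the findings:

```python
# Illustration of the VAAI finding: with hardware-assisted locking (ATS),
# far more virtual machines fit per VMFS datastore, so far fewer LUNs
# are needed to host 2,000 desktops.
import math

DESKTOPS = 2000
VMS_PER_LUN_VAAI = 512      # per-LUN limit with VAAI (from the findings)
VMS_PER_LUN_NO_VAAI = 64    # per-LUN limit without VAAI

luns_with_vaai = math.ceil(DESKTOPS / VMS_PER_LUN_VAAI)        # 4
luns_without_vaai = math.ceil(DESKTOPS / VMS_PER_LUN_NO_VAAI)  # 32
print(luns_with_vaai, luns_without_vaai)
```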
References Refer to the following white papers, available on Powerlink, for information about solutions similar to the one described in this paper:
• EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series, VMware vSphere 4.1, VMware View 4.5, and VMware View Composer 2.5—Reference Architecture
• EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series, VMware vSphere 4.1, VMware View 4.5, and VMware View Composer 2.5—Proven Solution Guide
• Deploying Microsoft Windows 7 Virtual Desktops with VMware View—Applied Best Practices Guide
• EMC Performance Optimization for Microsoft Windows XP for the Virtual Desktop Infrastructure—Applied Best Practices
If you do not have access to the above content, contact your EMC representative.
The following Cisco documents, located on the Cisco website, also provide useful information:
• Cisco Desktop Virtualization Solution Whitepaper (http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns836/ns978/solution_overview_c22-632364.pdf)
• Cisco Validated Design for Desktop Virtualization with VMware View and EMC Storage (http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/ns743/ns993/landing_dcVirt-VM_EMC.html)
• Cisco Desktop Virtualization Solutions website (www.cisco.com/go/vdi)
• Cisco Virtualization Experience Infrastructure website (www.cisco.com/go/vxi)
The following documents are available on the VMware website: