Proven Infrastructure
EMC VSPEX
Abstract
This document describes the EMC VSPEX Proven Infrastructure with
Brocade VDX networking for private cloud deployments with
Microsoft Hyper-V and EMC VNXe for up to 100 virtual machines using
iSCSI Storage.
October, 2013
EMC® VSPEX™ with Brocade Networking
Solutions for PRIVATE CLOUD Microsoft® Windows® Server 2012 with Hyper-V™ for up to
100 Virtual Machines
Enabled by Brocade VDX with VCS Fabrics, EMC VNXe™ and EMC Next-
Generation Backup
Tables
Table 1. VNXe customer benefits
Table 2. Solution hardware
Table 3. Solution software
Table 4. Network hardware
Table 5. Storage hardware
Table 6. Backup profile characteristics
Table 7. Virtual machine characteristics
Table 8. Blank worksheet row
Table 9. Reference virtual machine resources
Table 10. Example worksheet row
Table 11. Example applications
Table 12. Server resource component totals
Table 13. Blank customer worksheet
Table 14. Deployment process overview
Table 15. Tasks for pre-deployment
Table 16. Deployment prerequisites checklist
Table 17. Brocade VDX 6710 and VDX 6720 configuration steps
Table 18. Tasks for storage configuration
Table 19. Tasks for server installation
Table 20. Tasks for SQL Server database setup
Table 21. Tasks for SCVMM configuration
Table 22. Tasks for testing the installation
Table 23. List of components used in the VSPEX solution for 50 virtual machines
Table 24. List of components used in the VSPEX solution for 100 virtual machines
Table 25. Common server information
Table 26. Hyper-V server information
Table 27. Array information
Table 28. Network infrastructure information
Table 29. VLAN information
Table 30. Service accounts
Table 31. Hyper-V Fast Track component classification
Chapter 1 Executive Summary
This chapter presents the following topics:
Introduction
Target audience
Document purpose
Business needs
Introduction
EMC VSPEX with Brocade networking solutions are validated and modular
architectures built with proven best-of-breed technologies to create
complete virtualization solutions on compute, networking, and storage
layers. VSPEX helps to reduce virtualization planning and configuration
burdens. When embarking on server virtualization, virtual desktop
deployment, or IT consolidation, VSPEX accelerates your IT transformation
by enabling faster deployments, more choice, greater efficiency, and lower risk.
This document is a comprehensive guide to the technical aspects of this
solution. Server capacity is provided in generic terms for required
minimums of CPU, memory, and network interfaces; the customer can
select any server hardware that meets or exceeds the stated minimums.
Target audience
The reader of this document is expected to have the necessary training
and background to install and configure Microsoft Hyper-V, Brocade VDX
series switches, EMC VNXe series storage systems, and associated
infrastructure as required by this implementation. The document provides
external references where applicable. The reader should be familiar with
these documents.
Readers should also be familiar with the infrastructure and database
security policies of the customer installation.
Users focusing on selling and sizing a Microsoft Hyper-V private cloud
infrastructure should pay particular attention to the first four chapters of this
document. After purchase, implementers of the solution can focus on the
configuration guidelines in Chapter 5, the solution validation in Chapter 6,
and the appropriate references and appendices.
Document purpose
This document serves as an initial introduction to the VSPEX architecture,
an explanation of how to modify the architecture for specific
engagements, and instructions on how to deploy the system effectively.
The VSPEX with Brocade VDX private cloud architecture provides the
customer with a modern system capable of hosting a large number of
virtual machines at a consistent performance level. This solution runs on
the Microsoft Hyper-V virtualization layer backed by the highly available
VNX™ family storage. The compute and network components are
customer-definable, and should be redundant and sufficiently powerful to
handle the processing and data needs of the virtual machine
environment.
The 50 and 100 virtual machine environments are based on a defined
reference workload. Because not every virtual machine has the same
requirements, this document contains methods and guidance to adjust
your system to be cost-effective when deployed.
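The adjustment method is worksheet-based: each application virtual machine is expressed as a number of reference virtual machines, sized on each resource dimension independently, with the largest result winning. As a hedged illustration (the resource figures below are placeholders for this sketch, not the values defined in the VSPEX sizing worksheets), the calculation might look like:

```python
from math import ceil

# Illustrative reference virtual machine definition. These figures are
# placeholders; the actual values come from the VSPEX sizing worksheets.
REF_VM = {"vcpus": 1, "ram_gb": 2, "storage_gb": 100, "iops": 25}

def equivalent_reference_vms(workload):
    """Return how many reference VMs a single application VM consumes.

    The workload is sized on each resource dimension independently,
    and the largest requirement determines the total.
    """
    return max(ceil(workload[k] / REF_VM[k]) for k in REF_VM)

# Example: a database VM needing 4 vCPUs, 16 GB RAM, 200 GB disk, 100 IOPS
db_vm = {"vcpus": 4, "ram_gb": 16, "storage_gb": 200, "iops": 100}
print(equivalent_reference_vms(db_vm))  # -> 8 (driven by the RAM requirement)
```

Summing these equivalents across all planned application VMs shows whether the workload fits within the 50 or 100 reference virtual machine envelope.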
A private cloud architecture is a complex system offering. This document
facilitates the setup by providing upfront software and hardware material
lists, step-by-step sizing guidance and worksheets, and verified
deployment steps. When the last component is installed, there are
validation tests to ensure that your system is up and running properly.
Following the procedures defined in this document ensures an efficient
and painless journey to the cloud.
Business needs
Customers require a scalable, tiered, and highly available infrastructure on
which to deploy their business and mission-critical applications. Several
new technologies are available to assist customers in consolidating and
virtualizing their server infrastructure, but customers need to know how to
use these technologies to maximize the investment, support service-level
agreements, and reduce the total cost of ownership (TCO).
This solution addresses the following challenges:
Availability: Stand-alone servers incur downtime for maintenance or
unexpected failures. Clusters of redundant stand-alone nodes are
inefficient in the use of CPU, disk, and memory resources.
Server management and maintenance: Individually maintained servers
require significant repetitive activities for monitoring, problem resolution,
patching, and other common activities. Therefore, the maintenance is
labor intensive, costly, error-prone, and inefficient. Security, downtime,
and outage risks are elevated.
Ease of solution deployment: While small and medium businesses (SMB)
must address the same IT challenges as larger enterprises, the staffing
levels, experience, and training are generally more limited. IT generalists
are often responsible for managing the entire IT infrastructure, and reliance
is placed on third-party sources for maintenance or other tasks. The
perceived complexity of the IT function raises fear of risk and may block
the adoption of new technology. Therefore, simplicity of deployment
and management is highly valued.
Network performance and resiliency: Networking is added locally to
provide connectivity between physical servers, storage, and the existing
infrastructure. The network is sized to meet 1 GbE and 10 GbE performance
requirements and is deployed as a highly available dual fabric for resiliency.
Storage efficiency: Storage that is added locally to physical servers or
provisioned directly from a shared resource or array often leads to over-
provisioning and waste.
Backup: Traditional backup approaches are slow and frequently
unreliable. There tends to be inflection points (or plateaus) in the
virtualization adoption curve when the number of virtual machines
increases from a few to 100 or more. With a few virtual machines, the
situation can be manageable and most organizations can get by with
existing tools and processes. However, when the virtual environment
grows, the backup and recovery processes often become the limiting
factors in the deployment.
Chapter 2 Solution Overview
This chapter presents the following topics:
Introduction
Virtualization
Compute
Network
Storage
Introduction
The EMC VSPEX private cloud for Microsoft Hyper-V with Brocade VDX
solution provides a complete system architecture capable of supporting
up to 100 virtual machines with a redundant server/network topology and
highly available storage. The core components that make up this
particular solution are virtualization, compute, networking, and
storage.
Virtualization
Microsoft Hyper-V is a leading virtualization platform in the industry. For
years, Hyper-V has provided flexibility and cost savings to end users by
consolidating large, inefficient server farms into nimble, reliable cloud
infrastructures.
Features like Live Migration, which enables a virtual machine to move
between different servers with no disruption to the guest operating system,
and Dynamic Optimization, which performs Live Migration automatically to
balance loads, make Hyper-V a solid business choice.
With the release of Windows Server 2012, a Microsoft virtualized
environment can host virtual machines with up to 64 virtual CPUs and 1 TB
of virtual RAM.
Compute
VSPEX provides the flexibility to design and implement your choice of
server components. The infrastructure must conform to the following
attributes:
Sufficient processor cores and memory to support the required
number and types of virtual machines
Sufficient network connections to enable redundant connectivity to
the system switches
Excess capacity to withstand a server failure and failover in the
environment
Network
Brocade VDX switches with VCS Fabric Technology enable the
implementation of a high-performance, efficient, and resilient network with
this VSPEX solution. The Brocade VDX switching infrastructure provides the
following attributes:
Redundant network links for the hosts, switches, and storage
Traffic isolation based on industry-accepted best practices
Support for link aggregation
High-utilization, high-availability networking
Virtualization automation
Storage
The EMC VNX storage family is the leading shared storage platform in the
industry. VNX provides both file and block access with a broad feature set
which makes it an ideal choice for any private cloud implementation.
The following VNXe storage components are sized for the stated reference
architecture workload:
Host adapter ports – Provide host connectivity via fabric into the
array.
Storage Processors – The compute components of the storage
array, which are used for all aspects of data moving into, out of,
and between arrays along with protocol support.
Disk drives – Disk spindles that contain the host/application data
and their enclosures.
The 50 and 100 virtual machine Hyper-V private cloud solutions discussed in
this document are based on the VNXe3150™ and VNXe3300™ storage
arrays respectively. VNXe3150 can support a maximum of 100 drives and
VNXe3300 can host up to 150 drives.
The EMC VNXe series supports a wide range of business class features ideal
for the private cloud environment, including:
Thin Provisioning
Replication
Snapshots
File Deduplication and Compression
Quota Management
Chapter 3 Solution Technology Overview
This chapter presents the following topics:
Overview
Summary of key components
Virtualization
Compute
Network
Storage
Backup and recovery
Other technologies
Overview
This solution uses the EMC VNXe series, Brocade VDX switches with VCS
Fabric technology, and Microsoft Hyper-V to provide storage, network,
and server hardware consolidation in a private cloud. The new virtualized
infrastructure is centrally managed to provide efficient deployment and
management of a scalable number of virtual machines and associated
shared storage.
Figure 1 depicts the general solution components.
Figure 1. VSPEX private cloud components
These components are described in more detail in the following sections.
Summary of key components
This section briefly describes the key components of this solution.
Virtualization
The virtualization layer enables the physical implementation of
resources to be decoupled from the applications that use them. In
other words, the application view of the available resources is no
longer directly tied to the hardware. This enables many key
features in the private cloud concept.
Compute
The compute layer provides memory and processing resources for
the virtualization layer software, and for the needs of the
applications running within the private cloud. The VSPEX program
defines the minimum amount of compute layer resources required,
and enables the customer to implement the requirements using any
server hardware that meets these requirements.
Network
Brocade VDX switches with VCS Fabric technology connect the
users of the private cloud to the resources in the cloud and the
storage layer to the compute layer. EMC VSPEX solutions with
Brocade VDX switches provide the required connectivity for the
solution and general guidance on network architecture. The EMC
VSPEX solutions also enable the customer to implement a solution
that provides a cost effective, resilient, and operationally efficient
virtualization platform.
Storage
The storage layer is critical for the implementation of the private
cloud. With multiple hosts to access shared data, many of the use
cases defined in the private cloud concept can be implemented.
The EMC VNXe storage family used in this solution provides high-
performance data storage while maintaining high availability.
Backup and recovery
The optional backup and recovery components of the solution
provide data protection when the data in the primary system is
deleted, damaged, or otherwise unusable.
The Solution architecture section provides details on all the components
that make up the reference architecture.
Virtualization
Virtualization enables greater flexibility in the application layer by
potentially eliminating hardware downtime for maintenance, and
enabling the physical capability of the system to change without affecting
the hosted applications. In a server virtualization or private cloud use
case, it enables multiple independent virtual machines to share the same
physical hardware, rather than being directly implemented on dedicated
hardware.
Microsoft Hyper-V
Microsoft Hyper-V, a Windows Server role that was introduced in Windows
Server 2008, transforms or virtualizes computer hardware resources,
including CPU, memory, storage, and network. This transformation creates
fully functional virtual machines that run their own operating systems and
applications just like physical computers.
Hyper-V and Failover Clustering provide a high-availability virtualized
infrastructure along with Cluster Shared Volumes (CSVs). Live Migration
and Live Storage Migration enable seamless migration of virtual machines
from one Hyper-V server to another and stored files from one storage
system to another, with minimal performance impact.
Microsoft System Center Virtual Machine Manager (SCVMM)
SCVMM is a centralized management platform for the virtualized
datacenter. With SCVMM, administrators can configure and manage the
virtualization host, networking, and storage resources in order to create
and deploy virtual machines and services to private clouds. When
deployed, SCVMM greatly simplifies provisioning, management and
monitoring of the Hyper-V environment.
High Availability with Hyper-V Failover Clustering
Hyper-V achieves high availability by using the Windows Server 2012
Failover Clustering feature. High availability is impacted by both planned
and unplanned downtime, and Failover Clustering can significantly
increase the availability of virtual machines in both situations. Windows
Server 2012 Failover Clustering is configured on the Hyper-V host so that
virtual machines can be monitored for health and moved between nodes
of the cluster. This configuration has the following key advantages:
If the physical host server that Hyper-V and the virtual machines are
running on must be updated, changed, or rebooted, the virtual
machines can be moved to other nodes of the cluster. You can
move the virtual machines back after the original physical host
server is back to service.
If the physical host server that Hyper-V and the virtual machines are
running on fails or is significantly degraded, the other members of
the Windows Failover Cluster take over the ownership of the virtual
machines and bring them online automatically.
If the virtual machine fails, it can be restarted on the same host
server or moved to another host server. Since the Windows Server 2012
Failover Cluster detects this failure, it automatically takes recovery
steps based on the settings in the resource properties of the virtual
machine. Downtime is minimized because of the detection and
recovery automation.
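The recovery behavior described above can be sketched in Python; the classes and the placement policy here are illustrative assumptions for this sketch, not the actual Failover Clustering implementation or a Microsoft API:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Minimal stand-in for a cluster node (illustrative only)."""
    name: str
    healthy: bool = True
    vm_count: int = 0

def recover_vm(vm_name, current_host, cluster_nodes):
    """Mirror the recovery logic described above: restart the virtual
    machine in place if its host is healthy, otherwise have another
    cluster member take ownership and bring the VM online."""
    if current_host.healthy:
        return ("restart", current_host)
    # Failed or degraded host: another cluster member takes ownership.
    survivors = [n for n in cluster_nodes
                 if n is not current_host and n.healthy]
    if not survivors:
        raise RuntimeError("no surviving node to own " + vm_name)
    # Illustrative placement policy: pick the least-loaded survivor.
    target = min(survivors, key=lambda n: n.vm_count)
    return ("failover", target)

nodes = [Node("host1", healthy=False, vm_count=5),
         Node("host2", vm_count=3),
         Node("host3", vm_count=1)]
action, target = recover_vm("sql-vm", nodes[0], nodes)
print(action, target.name)  # -> failover host3
```

In the real cluster, the action taken depends on the settings in the resource properties of the virtual machine, as noted above.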
EMC Storage Integrator (ESI) is an agent-less, no-charge plug-in that
enables application-aware storage provisioning for Microsoft Windows
server applications, Hyper-V, VMware, and XenServer environments.
Administrators can easily provision block and file storage for Microsoft
Windows or for Microsoft SharePoint sites by using wizards in ESI. ESI
supports the following functions:
Provisioning, formatting, and presenting drives to Windows servers
Provisioning new cluster disks and adding them to the cluster
automatically
Provisioning shared CIFS storage and mounting it to Windows servers
Provisioning SharePoint storage, sites, and databases in a single
wizard
Compute
The choice of a server platform for an EMC VSPEX infrastructure is not only
based on the technical requirements of the environment, but on the
supportability of the platform, existing relationships with the server provider,
advanced performance and management features, and many other
factors. For this reason, EMC VSPEX solutions are designed to run on a wide
variety of server platforms. Instead of requiring a given number of servers
with a specific set of requirements, VSPEX documents a number of
processor cores and an amount of RAM that must be achieved. This can
be implemented with 2 or 20 servers and still be considered the same
VSPEX solution.
In the example shown in Figure 2, assume that the compute layer
requirements for a given implementation are 25 processor cores, and 200
GB of RAM. One customer might want to implement this solution using
white-box servers containing 16 processor cores and 64 GB of RAM, while a
second customer chooses a higher-end server with 20 processor cores and
144 GB of RAM.
The first customer needs four of the servers they chose, while the second
customer needs two.
Figure 2. Compute layer flexibility
Note To enable high availability at the compute layer, each customer
needs one additional server to ensure that the system can maintain
business operations if a server fails.
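The arithmetic behind Figure 2 and the note above can be sketched as follows; the server specifications are the ones from the example, and the optional spare implements the N+1 high-availability recommendation:

```python
from math import ceil

def servers_needed(req_cores, req_ram_gb, per_server_cores,
                   per_server_ram_gb, ha_spare=True):
    """Servers required to meet the VSPEX compute-layer minimums.

    Sizes on cores and RAM independently, takes the larger count,
    then optionally adds one spare server (N+1) so that business
    operations can continue if a single server fails.
    """
    n = max(ceil(req_cores / per_server_cores),
            ceil(req_ram_gb / per_server_ram_gb))
    return n + 1 if ha_spare else n

# The two customers from the example (25 cores, 200 GB RAM required):
print(servers_needed(25, 200, 16, 64, ha_spare=False))   # -> 4
print(servers_needed(25, 200, 20, 144, ha_spare=False))  # -> 2
print(servers_needed(25, 200, 16, 64))                   # -> 5 with the N+1 spare
```

Note that RAM, not cores, drives the first customer's count: 25 cores fit in two 16-core servers, but 200 GB of RAM needs four 64 GB servers.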
The following best practices apply to the compute layer:
Use a number of identical or at least compatible servers. VSPEX
implements hypervisor level high-availability technologies that may
require similar instruction sets on the underlying physical hardware.
By implementing VSPEX on identical server units, you can minimize
compatibility problems in this area.
When implementing high availability on the hypervisor layer, the
largest virtual machine you can create is constrained by the
smallest physical server in the environment.
Implement the available high availability features in the
virtualization layer, and ensure that the compute layer has sufficient
resources to accommodate at least single-server failures. This
enables the implementation of minimal-downtime upgrades and
tolerance for single-unit failures.
Within the boundaries of these recommendations and best practices, the
compute layer for EMC VSPEX can be flexible to meet your specific needs.
The key constraint is that you provide sufficient processor cores and RAM
to meet the needs of the target environment.
Network
The VSPEX with Brocade VDX networking validated solution uses virtual
local area networks (VLANs) to segregate the network traffic of the VSPEX
reference architecture, including iSCSI storage traffic, to improve
throughput, manageability, application separation, high availability, and
security. The Brocade VDX networking solution provides redundant
network links for each Microsoft Hyper-V server, the VNXe storage array,
the switch interconnect ports, and the customer infrastructure uplink
ports. If a link is lost on any of the Brocade VDX network infrastructure
ports, the traffic fails over to another port. All network traffic is distributed
across the active links.
Brocade® VDX with VCS Fabric technology helps simplify networking
infrastructures through innovative technologies and the VSPEX
infrastructure topology design. Brocade VDX 6710/6720 switches support
this strategy by simplifying network architecture and deployment while
increasing network performance and resiliency with Ethernet fabrics.
Brocade VDX with VCS Fabric technology supports active-active links for
all traffic from the virtualized compute servers to the EMC VNXe storage
arrays, and provides a network with high availability and redundancy by
using link aggregation for the EMC VNXe storage array.
The Brocade network switch infrastructure provides redundant network
links for each Hyper-V host, the storage array, the switch interconnect
ports, and the switch uplink ports. This configuration provides both
redundancy and additional network bandwidth. Automatic and
transparent failover is provided by the Brocade VDX networking
infrastructure, whether deployed on its own or alongside other
components of the solution.
Figure 3 shows an example of the highly available network topology.
Figure 3. Example of a highly available network design
Brocade VDX with VCS Fabric technology supports active-active links for
all traffic from the virtualized compute servers to the EMC VNXe storage
arrays. EMC unified storage platforms provide network high availability or
redundancy by using link aggregation. Link aggregation enables multiple
active Ethernet connections to appear as a single link with a single MAC
address, and potentially multiple IP addresses. In this solution, Link
Aggregation Control Protocol (LACP) is configured on VNXe, combining
multiple Ethernet ports into a single virtual device. If a link is lost in the
Ethernet port, the link fails over to another port. All network traffic is
distributed across the active links.
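Link aggregation keeps the frames of a single flow in order by hashing each flow onto one member link of the aggregate. The hash fields and algorithm are vendor-specific, so the sketch below (CRC32 over a MAC-address pair) is only an illustration of the behavior, not the Brocade or VNXe implementation:

```python
import zlib

def pick_member(active_ports, src_mac, dst_mac):
    """Deterministically map a flow onto one active member link.

    A given flow always hashes to the same member, preserving frame
    order, while different flows spread across all active links.
    CRC32 over the MAC pair is an illustrative stand-in for the
    vendor-specific hash used by real switches.
    """
    if not active_ports:
        raise RuntimeError("no active links in the aggregate")
    flow_hash = zlib.crc32(f"{src_mac}-{dst_mac}".encode())
    return active_ports[flow_hash % len(active_ports)]

ports = ["te0/1", "te0/2", "te0/3", "te0/4"]
member = pick_member(ports, "00:50:56:aa:bb:01", "00:60:16:cc:dd:02")

# If that member link fails, re-hashing over the survivors moves only
# the affected flows; all traffic keeps flowing on the remaining links.
survivors = [p for p in ports if p != member]
failover_member = pick_member(survivors, "00:50:56:aa:bb:01",
                              "00:60:16:cc:dd:02")
```

This is why losing a link in the Ethernet port group is transparent to hosts: the aggregate presents one logical link, and only the flows pinned to the failed member are redistributed.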
Brocade VCS Fabric technology offers unique features to support
virtualized server and storage environments. Brocade network hypervisor
automation, for example, provides secure connectivity and full visibility into
virtualized server resources with dynamic learning and activation of port
profiles. With configuration of port profiles, the VDX switches support
Hyper-V mobility between Microsoft Windows servers.
Storage
The storage layer is also a key component of any cloud infrastructure
solution, storing and serving the data generated by applications and
operating systems within the datacenter. A centralized storage platform
often increases storage efficiency and management flexibility, and
reduces total cost of ownership. In this VSPEX solution, the EMC VNXe
series provides virtualization at the storage layer.
The EMC VNX family is optimized for virtual applications, delivering industry-
leading innovation and enterprise capabilities for file and block storage in
a scalable, easy-to-use solution. This next-generation storage platform
combines powerful and flexible hardware with advanced efficiency,
management, and protection software to meet the demanding needs of
today’s enterprises.
The VNXe series is powered by Intel Xeon processors for intelligent
storage that automatically and efficiently scales in performance, while
ensuring data integrity and security.
The VNXe series is purpose-built for IT managers in smaller environments
and the VNX series is designed to meet the high-performance, high-
scalability requirements of midsize and large enterprises. Table 1 shows the
customer benefits.
Table 1. VNXe customer benefits
Feature
Next-generation unified storage, optimized for virtualized
applications
Capacity optimization features including compression,
deduplication, thin provisioning, and application-centric
copies
High availability, designed to deliver five 9s availability
Simplified management with EMC Unisphere™ for a
single management interface for all network-attached
storage (NAS), storage area network (SAN), and
replication needs
Overview
EMC VNXe series
Software Suites
Local Protection Suite—Increases productivity with snapshots of
production data.
Remote Protection Suite—Protects data against localized failures,
outages, and disasters.
Application Protection Suite—Automates application copies and
provides replica management.
Security and Compliance Suite—Keeps data safe from changes,
deletions, and malicious activity.
Software Packs
VNXe Total Value Pack—Includes the Remote Protection,
Application Protection and Security and Compliance Suite.
Backup and recovery
EMC Avamar
EMC backup and recovery solutions – EMC Avamar Business Edition and
EMC Data Domain – deliver the protection confidence and efficiency
needed to accelerate deployment of VSPEX Private Clouds.
Our solutions are proven to reduce backup times by 90% and speed
recoveries with single-step restore for worry-free protection. Our
protection storage systems add another layer of assurance, with end-to-
end verification and self-healing for ensured recovery.
Our solutions also deliver big savings. With industry-leading deduplication,
you can reduce backup storage by 10-30x, backup management time by
81%, and WAN bandwidth by 99% for efficient DR, delivering a 7-month
payback on average. You can scale simply and efficiently as your
environment grows.
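As a quick sanity check on what these deduplication ratios mean for capacity planning, the arithmetic can be sketched as follows (the 50 TB logical backup figure is an arbitrary illustration, not a number from this solution):

```python
def dedup_storage_tb(logical_tb, dedup_ratio):
    """Physical protection storage needed after deduplication."""
    return logical_tb / dedup_ratio

# 50 TB of logical backup data at the quoted 10x-30x range:
worst_case = dedup_storage_tb(50, 10)  # 5.0 TB of protection storage
best_case = dedup_storage_tb(50, 30)   # ~1.7 TB of protection storage
```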
Other technologies
In addition to the required technical components for EMC VSPEX solutions,
other technologies may provide additional value depending on the
specific use case. These include, but are not limited to, the technologies
listed below.
EMC XtremSW Cache (Optional)
EMC XtremSW Cache™ is a server Flash caching solution that reduces
latency and increases throughput to improve application performance by
using intelligent caching software and PCIe Flash technology.
Server-side Flash caching for maximum speed
XtremSW Cache software caches the most frequently referenced data on
the server-based PCIe card, thereby putting the data closer to the
application.
XtremSW Cache caching optimization automatically adapts to changing
workloads by determining which data is most frequently referenced and
promoting it to the server Flash card. This means that the “hottest” or most
active data automatically resides on the PCIe card in the server for faster
access.
XtremSW Cache offloads the read traffic from the storage array, which
allows it to allocate greater processing power to other workloads. While
one workload is accelerated with XtremSW Cache, the array’s
performance for other workloads is maintained or even slightly enhanced.
Write-through caching to the array for total protection
XtremSW Cache accelerates reads and protects data by using a write-
through cache to the storage array, delivering persistent high availability,
integrity, and disaster recovery.
Application agnostic
XtremSW Cache is transparent to applications, so no rewriting, retesting, or
recertification is required to deploy XtremSW Cache in the environment.
Minimum impact on system resources
XtremSW Cache does not require a significant amount of memory or CPU
cycles, because all flash and wear-leveling management is done on the
PCIe card rather than consuming server resources. Unlike some other PCIe
solutions, XtremSW Cache imposes no significant overhead on server
resources.
XtremSW Cache creates the most efficient and intelligent I/O path from the
application to the datastore, which results in an infrastructure that is
dynamically optimized for performance, intelligence, and protection for
both physical and virtual environments.
XtremSW Cache active/passive clustering support
The XtremSW Cache clustering scripts ensure that stale data is never
retrieved. The scripts use cluster management events to trigger a
mechanism that purges the cache. An XtremSW Cache-enabled
active/passive cluster ensures data integrity and accelerates application
performance.
XtremSW Cache performance considerations
The following are the XtremSW Cache performance considerations:
On a write request, XtremSW Cache first writes to the array, then to
the cache, and then completes the application I/O.
On a read request, XtremSW Cache satisfies the request with
cached data, or, when the data is not present, retrieves the data
from the array, writes it to the cache, and then returns it to the
application. The trip to the array can be on the order of
milliseconds; therefore, the array limits how fast the cache can work.
As the number of writes increases, XtremSW Cache performance
decreases.
XtremSW Cache is most effective for workloads in which reads
make up 70 percent or more of the I/O mix, with small, random I/O
(8 KB is ideal). I/O larger than 128 KB is not cached in XtremSW
Cache v1.5.
Note For more information, refer to the XtremSW Cache Installation and
Administration Guide v1.5.
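The read and write paths described above can be modeled as a minimal write-through cache sketch (the class and structure here are illustrative, not EMC's implementation; a dict stands in for the backing array):

```python
class WriteThroughCache:
    """On writes, persist to the array first, then cache; on reads,
    serve hits from cache and populate the cache on misses."""

    def __init__(self, array):
        self.array = array        # backing store (dict standing in for the array)
        self.cache = {}           # contents of the server-side PCIe flash card

    def write(self, block, data):
        self.array[block] = data  # array write completes before the I/O is acked
        self.cache[block] = data  # then the cache copy is updated

    def read(self, block):
        if block in self.cache:   # hit: served at flash latency
            return self.cache[block]
        data = self.array[block]  # miss: millisecond-class array round trip
        self.cache[block] = data  # promote for subsequent reads
        return data


array = {}
cache = WriteThroughCache(array)
cache.write("blk0", b"data")      # array is updated first, then the cache
```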
Chapter 4 Solution
Architecture
Overview
This chapter presents the following topics:
Solution Overview 34
Solution architecture 34
Server configuration guidelines 40
Brocade network configuration guidelines 43
Storage configuration guidelines 47
High availability and failover 51
Backup and recovery configuration guidelines 54
Sizing guidelines 55
Reference workload 56
Applying the reference workload 57
Implementing the reference architectures 59
Quick assessment 62
Solution Overview
VSPEX Proven Infrastructure solutions are built with proven best-of-breed
technologies to create a complete virtualization solution that enables you
to make an informed decision when choosing and sizing the hypervisor,
compute, networking, and storage layers. VSPEX eliminates virtualization
planning and configuration burdens by leveraging extensive
interoperability, functional, and performance testing by EMC. VSPEX
accelerates your IT Transformation to cloud-based computing by enabling
faster deployment, more choice, higher efficiency, and lower risk.
This section is intended to be a comprehensive guide to the major aspects
of this solution. Server capacity is specified in generic terms for required
minimums of CPU, memory, and network interfaces; the customer is free to
select the server and networking hardware that meet or exceed the
stated minimums. The specified storage architecture, along with a system
meeting the server and network requirements outlined, is validated by
EMC to provide high levels of performance while delivering a highly
available architecture for your private cloud deployment.
Each VSPEX Proven Infrastructure balances the storage, network, and
compute resources needed for a set number of virtual machines, which
have been validated by EMC. In practice, each virtual machine has its
own set of requirements that rarely fit a predefined idea of what a virtual
machine should be. In any discussion about virtual infrastructures, it is
important to first define a reference workload. Not all servers perform the
same tasks, and it is impractical to build a reference that takes into
account every possible combination of workload characteristics.
Solution architecture
Overview
The VSPEX Proven Infrastructure for Microsoft Hyper-V private clouds with
EMC VNXe is validated at two different points of scale, one with up to 50
virtual machines, and the other with up to 100 virtual machines. The
defined configurations form the basis of creating a custom solution.
Note VSPEX uses the concept of a Reference Workload to describe and
define a virtual machine. Therefore, one physical or virtual server in
an existing environment may not be equal to one virtual machine in
a VSPEX solution. Evaluate your workload in terms of the reference
to achieve an appropriate point of scale.
Architecture for up to 50 virtual machines
The architecture diagram shown in Figure 4 characterizes the validated
infrastructure with a Brocade VDX solution for up to 50 virtual machines.
Figure 4. Logical architecture for 50 virtual machines
Architecture for up to 100 virtual machines
The architecture diagram shown in Figure 5 characterizes the infrastructure
with a Brocade VDX solution validated for up to 100 virtual machines.
Figure 5. Logical architecture for 100 virtual machines
Note The networking components of either solution can be implemented
using 1 GbE or 10 GbE IP networks, provided sufficient bandwidth
and redundancy meet the listed requirements.
Key components
The architecture includes the following key components:
Microsoft Hyper-V—Provides a common virtualization layer to host a server
environment. The specifics of the validated environment are listed in Table
2. Hyper-V provides a highly available infrastructure through features such
as:
Live Migration — Provides live migration of virtual machines within a
virtual infrastructure cluster, with no virtual machine downtime or
service disruption.
Live Storage Migration — Provides live migration of virtual machine
disk files within and across storage arrays with no virtual machine
downtime or service disruption.
Failover Clustering High Availability (HA) – Detects and provides
rapid recovery for a failed virtual machine in a cluster.
Dynamic Optimization (DO) – Provides load balancing of
computing capacity in a cluster with support of SCVMM.
Microsoft System Center Virtual Machine Manager (SCVMM)—SCVMM is
not required for this solution. However, if deployed, it (or its corresponding
function in Microsoft System Center Essentials) simplifies provisioning,
management, and monitoring of the Hyper-V environment.
Microsoft SQL Server 2012—SCVMM, if used, requires a SQL Server
database instance to store configuration and monitoring details.
DNS Server — DNS services are required for the various solution
components to perform name resolution. The Microsoft DNS service
running on a Windows Server 2012 is used.
Active Directory Server — Active Directory services are required for the
various solution components to function properly. The Microsoft Active
Directory Service running on a Windows Server 2012 is used.
Brocade VDX 6710/6720 Ethernet Fabric Network — All network traffic is
carried by the Brocade Ethernet Fabric network with redundant cabling
and switches. User and management traffic is carried over a shared
network while iSCSI storage traffic is carried over a private, non-routable
subnet.
EMC VNXe 3150 array—Provides storage by presenting Internet Small
Computer System Interface (iSCSI) datastores to Hyper-V hosts for up to 50
virtual machines.
EMC VNXe 3300 array—Provides storage by presenting Internet Small
Computer System Interface (iSCSI) datastores to Hyper-V hosts for up to
100 virtual machines.
These datastores for both deployment sizes are created by using
application-aware wizards included in the EMC Unisphere interface.
VNXe series storage arrays include the following components:
Storage Processors (SPs) support block and file data with UltraFlex™
I/O technology that supports the iSCSI, CIFS, and NFS protocols. The
SPs provide access for all external hosts and for the file side of the
VNXe array.
Battery backup units within each storage processor provide enough
power to ensure that any data in flight is destaged to the vault area
in the event of a power failure, so that no writes are lost. Upon
restart of the array, the pending writes are reconciled and persisted.
Disk-array Enclosures (DAE) house the drives used in the array.
Hardware resources
Table 2 lists the hardware used in this solution.
Table 2. Solution hardware
Hardware Configuration Notes
Hyper-V
servers
Memory:
2 GB RAM per virtual machine
100 GB RAM across all servers for the
50-virtual-machine configuration
200 GB RAM across all servers for the
100-virtual-machine configuration
2 GB RAM reservation per host for
hypervisor
CPU:
One vCPU per virtual machine
One to four vCPUs per physical core
Network:
Two 10 GbE NIC ports per server
Note To implement Microsoft Hyper-V High
Availability (HA) functionality and to meet
the listed minimums, the infrastructure
should have one additional server.
Configured as a
single Hyper-V
cluster.
Brocade
Network
infrastructure
Minimum switching capacity:
Two physical VDX 6710/6720 switches*
One 1 GbE port per storage processor
for management Two 10 GbE ports
per storage processor for data
Redundant
Brocade VDX
Ethernet Fabric
configuration
For 50 & 100 Virtual Machines
Brocade Ethernet Fabric Switch*
Two VDX 6710 – 48 port
o 6 x 1 GbE ports per Hyper-V
server
1 GbE iSCSI
Server option
Brocade Ethernet Fabric Switch
Two VDX 6720 – 24 port
o Two 10 GbE ports per Hyper-V
server
10 GbE iSCSI
Server option
Hardware Configuration Notes
Storage Common:
Two Storage Processors
(active/active)
Two 10GbE interfaces per storage
processor for data
For 50 Virtual Machines
EMC VNXe 3150
Forty-five 300 GB 15k RPM 3.5-inch SAS
disks (9 * 300 GB 4+1 R5 Performance
Drive Packs)
Two 300 GB 15k RPM 3.5-inch SAS disks
as hot spares
For 100 Virtual Machines
EMC VNXe 3300
Seventy-seven 300 GB 15k RPM 3.5-
inch SAS disks (11 * 300 GB 6+1 R5
Performance Drive Packs)
Three 300 GB 15k RPM 3.5-inch SAS
disks as hot spares
Include the
initial disk pack
on the VNXe.
Shared
infrastructure
In most cases, a customer environment will
already have configured the infrastructure
services such as Active Directory, DNS, and
other services. The setup of these services is
beyond the scope of this document.
If this configuration is being implemented
with non-existing infrastructure, a minimum
number of additional servers is required:
Two physical servers
16 GB RAM per server
Four processor cores per server
Two 10 GbE ports per server
These servers
and the roles
they fulfill may
already exist in
the customer
environment;
however, they
must exist
before VSPEX is
deployed.
EMC Next-
Generation
Backup
For 50 virtual machines
Avamar Business Edition ½ Capacity
For 100 virtual machines
Avamar Business Edition Full Capacity
Software resources
Table 3 lists the software used in this solution.
Table 3. Solution software
Software Configuration
Microsoft Hyper-V
Operating system for Hyper-V
hosts
Windows 2012 Datacenter Edition
(Datacenter Edition is necessary to support
the number of virtual machines in this
solution)
System Center Virtual Machine
Manager
Version 2012 SP1
Microsoft SQL Server Version 2012 Enterprise Edition
VNXe
Software version 2.2.0.16150
Next-Generation Backup
Avamar Business Edition 7.0 SP1 – for up to 100 virtual machines
Server configuration guidelines
Overview
When designing and ordering the compute/server layer of the VSPEX
solution, several factors may alter the final purchase. From a virtualization
perspective, if a system’s workload is well estimated, features such as
Dynamic Memory and Smart Paging can reduce the aggregate memory
requirement.
If the virtual machine pool does not have a high level of peak or
concurrent usage, the number of vCPUs may be reduced. Conversely, if
the applications being deployed are highly computational in nature, the
number of CPUs and memory to be purchased may need to increase.
Hyper-V memory virtualization
Microsoft Hyper-V has a number of advanced features that help to
maximize performance and overall resource utilization. The most
important of these are in the area of memory management. This section
describes some of these features and the items to consider in the
environment.
In general, you can consider the virtual machines on a single hypervisor as
consuming memory from a pool of resources. Figure 6 shows an example.
Figure 6. Hypervisor memory consumption
This basic concept is enhanced by understanding the technologies
presented in this section.
Dynamic Memory
Dynamic Memory, introduced in Windows Server 2008 R2 SP1, increases
physical memory efficiency by treating memory as a shared resource and
allocating it to virtual machines dynamically. The memory actually used
by each virtual machine is adjusted on demand. Dynamic Memory
enables more virtual machines to run by reclaiming unused memory from
idle virtual machines. In Windows Server 2012, Dynamic Memory also
enables a dynamic increase of the maximum memory available to virtual
machines.
Smart Paging
Even with Dynamic Memory, Hyper-V allows more virtual machine memory
to be provisioned than is physically available, because there is usually a
gap between a virtual machine’s minimum memory and its startup
memory. Smart Paging is a memory management technique that uses disk
resources as a temporary memory replacement: it swaps less-used memory
out to disk storage and swaps it back in when needed, which may
degrade performance as a drawback. Hyper-V continues to leverage
guest paging when host memory is oversubscribed, because it is more
efficient than Smart Paging.
Non-Uniform Memory Access
Non-Uniform Memory Access (NUMA) is a multi-node computer
technology that enables a CPU to access remote-node memory. This type
of memory access is costly in terms of performance, so Windows Server
2012 employs a process known as processor affinity, which strives to keep
threads pinned to a particular CPU to avoid remote-node memory access.
In previous versions of Windows, this feature was available only to the host.
Windows Server 2012 extends this functionality to virtual machines,
which can now realize improved performance in SMP environments.
Memory configuration guidelines
This section provides guidelines for configuring server memory for this
solution. The guidelines take into account Hyper-V memory overhead and
the virtual machine memory settings.
Hyper-V memory overhead
Virtualized memory has some associated overhead, which includes the
memory consumed by Hyper-V, the parent partition, and additional
overhead for each virtual machine. Leave at least 2 GB of memory for the
Hyper-V parent partition in this solution.
Virtual machine memory
In this solution, each virtual machine is assigned 2 GB of memory in fixed
mode.
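Under these guidelines, per-host memory is simple arithmetic: fixed VM memory plus the parent partition reservation. A small sketch (the 25-VMs-per-host density is an illustrative assumption, not a requirement of this document):

```python
def host_memory_gb(vms_per_host, vm_memory_gb=2, parent_partition_gb=2):
    """Fixed-mode VM memory plus the 2 GB Hyper-V parent partition reservation."""
    return vms_per_host * vm_memory_gb + parent_partition_gb

# 100-VM configuration spread over four hosts (illustrative host count):
per_host = host_memory_gb(vms_per_host=25)  # 25 * 2 GB + 2 GB = 52 GB per host
total_vm_ram = 100 * 2                      # 200 GB across all servers (Table 2)
```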
Brocade network configuration guidelines
Overview
This section provides guidelines for setting up a redundant, highly available
network configuration for this VSPEX solution. The guidelines take into
account jumbo frames, VLANs, and Multiple Connections per Session
(MC/S). For detailed network resource requirements, refer to Table 4.
Table 4. Network hardware
Hardware Configuration Notes
Network
infrastructure
Minimum switching capacity:
Two physical switches
Two 10 GbE ports per Hyper-V server
o Optionally Six 1 GbE ports per
Hyper-V server
One 1GbE port per storage processor
for management
Two 10-GbE ports per storage
processor for data
Redundant
Brocade VDX
Ethernet Fabric
switch
configuration
VLAN
It is a best practice to isolate network traffic so that the traffic between
hosts and storage, the traffic between hosts and clients, and management
traffic all move over isolated networks. In some cases physical isolation
may be required for regulatory or policy compliance reasons, but in many
cases logical isolation using VLANs is sufficient. This solution calls for a
minimum of three VLANs, used as follows:
Client access
Storage
Management/Live Migration
Figure 7 depicts these VLANs.
Figure 7. Required networks
Note Figure 7 demonstrates the network connectivity requirements for a
VNXe 3300 using 10 GbE network connections (1 GbE for the
Management Network). A similar topology should be created
when using the VNXe 3150 array.
The client access network is for users of the system, or clients, to
communicate with the infrastructure. The Storage Network is used for
communication between the compute layer and the storage layer. The
Management network is used for administrators to have a dedicated way
to access the management connections on the storage array, network
switches, and hosts.
Note Some best practices call for additional network isolation for cluster
traffic, virtualization layer communication, and other features.
These additional networks can be implemented if necessary, but
they are not required.
Enable jumbo frames
Brocade VDX Series switches support the transport of jumbo frames. This
EMC VSPEX private cloud solution recommends an MTU of 9216 (jumbo
frames) for efficient storage and migration traffic. Jumbo frames are
enabled by default on the Brocade ISL trunks. However, to accommodate
end-to-end jumbo frame support on the network for the edge hosts, this
feature can be enabled on the interfaces connected to the Microsoft
Hyper-V hosts and the VNXe. The default Maximum Transmission Unit (MTU)
on these interfaces is 1500; it is set to 9216 to optimize the network for
jumbo frame support.
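The benefit of jumbo frames can be approximated by comparing per-frame protocol overhead at the two MTU values (the 40-byte IP+TCP header figure is standard-protocol arithmetic, not a number from this document):

```python
def payload_fraction(mtu, l3_l4_headers=40):
    """Fraction of each frame's MTU left for data after the
    20-byte IP and 20-byte TCP headers (Ethernet framing ignored)."""
    return (mtu - l3_l4_headers) / mtu

standard = payload_fraction(1500)  # ~0.973 of each frame carries data
jumbo = payload_fraction(9000)     # ~0.996 of each frame carries data

# A 1 GB transfer also needs far fewer frames (and host interrupts) at 9000 MTU:
frames_std = (10**9) // (1500 - 40)
frames_jumbo = (10**9) // (9000 - 40)
```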
MC/S
Multiple Connections per Session (MC/S) is configured on each Hyper-V
host so that each host network interface has one iSCSI session to each
VNXe storage processor (SP) interface. In this solution, four iSCSI sessions
are configured between each host and each VNXe SP (each VNXe iSCSI
server).
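The resulting iSCSI session fan-out follows from multiplying host NICs by SP data interfaces; a small sketch under the two-NIC, two-interface-per-SP topology described here:

```python
def mcs_sessions_per_host(nics_per_host=2, interfaces_per_sp=2, sps=2):
    """One MC/S iSCSI session per host NIC per SP data interface."""
    per_sp = nics_per_host * interfaces_per_sp  # 4 sessions to each SP, as validated
    return per_sp * sps                         # sessions per host across both SPs

sessions = mcs_sessions_per_host()              # 8 sessions per host in total
```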
Link Aggregation
A link aggregation resembles an Ethernet channel but uses the IEEE
802.3ad Link Aggregation Control Protocol (LACP) standard, which
supports link aggregations of two or more ports. All ports in the
aggregation must have the same speed and be full duplex. In this solution,
LACP can be configured on the customer infrastructure network,
combining multiple Ethernet ports into a single virtual device. If a link is lost
on an Ethernet port, the link fails over to another port, and all network
traffic is distributed across the active links.
Brocade Virtual Link Aggregation Group (vLAG)
Brocade Virtual Link Aggregation Groups (vLAGs) are used for the
Microsoft Hyper-V hosts and customer infrastructure. In the case of the
VNXe, a dynamic LACP vLAG is not used with MC/S and iSCSI. While
Brocade ISLs are used as interconnects between Brocade VDX switches
within a Brocade VCS fabric, industry-standard LACP LAGs are supported
for connecting to other network devices outside the Brocade VCS fabric.
Typically, LACP LAGs can only be created using ports from a single
physical switch to a second physical switch. In a Brocade VCS fabric, a
vLAG can be created using ports from two Brocade VDX switches to a
device to which both VDX switches are connected. This provides an
additional degree of device-level redundancy, while providing
active-active link-level load balancing.
Brocade Inter-Switch Link (ISL) Trunks
In the VSPEX stack, Brocade Inter-Switch Link (ISL) Trunking is used within
the Brocade VCS fabric to provide additional redundancy and load
balancing between the iSCSI clients and iSCSI storage. Typically, multiple
links between two switches are bundled together in a Link Aggregation
Group (LAG) to provide redundancy and load balancing. Setting up a
LAG requires lines of configuration on the switches and the selection of a
hash-based load-balancing algorithm keyed on source-destination IP or
MAC addresses.
Solution Architecture Overview
EMC® VSPEX™ with Brocade Networking Solutions for Private Cloud
Microsoft Windows Server 2012 with Hyper-V for up to 100 Virtual
Machines Enabled by Brocade VDX with VCS Fabric Technology,
EMC VNXe and EMC Next-Generation Backup
46
All flows with the same hash traverse the same link, regardless of the total
number of links in a LAG. This might result in some links within a LAG, such
as those carrying flows to a storage target, being over utilized and packets
being dropped, while other links in the LAG remain underutilized. Instead
of LAG-based switch interconnects, Brocade VCS Ethernet fabrics
automatically form ISL trunks when multiple connections are added
between two Brocade VDX® switches. Simply adding another cable
increases bandwidth, providing linear scalability of switch-to-switch traffic,
and this does not require any configuration on the switch. In addition, ISL
trunks use a frame-by-frame load balancing technique, which evenly
balances traffic across all members of the ISL trunk group.
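The hash polarization that motivates frame-by-frame ISL trunking can be demonstrated with a toy hash-based link selector (an illustrative model, not Brocade's actual hash algorithm):

```python
import hashlib

def lag_member(src_ip, dst_ip, num_links):
    """Conventional LAG: a flow's hash pins every frame to one member link."""
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return digest[0] % num_links

# Every frame of one host-to-target iSCSI flow lands on the same link,
# however many members the LAG has -- the other links carry none of it:
choices = {lag_member("10.0.0.1", "10.0.1.9", 4) for _ in range(1000)}
assert len(choices) == 1
```

Frame-by-frame spraying across an ISL trunk avoids this by not tying a flow to a single member link.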
Equal-Cost Multipath (ECMP)
A standard link-state routing protocol that runs at Layer 2 determines
whether there are Equal-Cost Multipaths (ECMPs) between RBridges in an
Ethernet fabric and load balances the traffic to make use of all available
ECMPs. If a neighbor switch is reachable via several interfaces with
different bandwidths, all of them are treated as “equal-cost” paths. While it
is possible to set the link cost based on the link speed, such an algorithm
complicates the operation of the fabric. Simplicity is a key value of
Brocade VCS Fabric technology, so the implementation chosen in the test
case does not consider the bandwidth of the interface when selecting
equal-cost paths. This is a key feature for expanding network capacity to
keep ahead of customer bandwidth requirements.
Pause Flow Control
Brocade VDX Series switches support the Pause Flow Control feature. IEEE
802.3x Ethernet pause and Ethernet Priority-based Flow Control (PFC) are
used to prevent dropped frames by slowing traffic at the source end of a
link. When a port on a switch or host is not ready to receive more traffic
from the source, perhaps due to congestion, it sends pause frames to the
source to pause the traffic flow. When the congestion clears, the port stops
asking the source to pause, and traffic resumes without any frame drop.
When Ethernet pause is enabled, pause frames are sent to the traffic
source. Similarly, when PFC is enabled, there is no frame drop; pause
frames are sent to the source switch.
Storage configuration guidelines
Overview
Hyper-V allows more than one method of utilizing storage when hosting
virtual machines. The solutions were tested using iSCSI, and the storage
layout described here adheres to all current best practices. A customer or
architect with the required knowledge can modify the layout based on
system usage and load if necessary.
Table 5 lists the required hardware for the storage configuration.
Table 5. Storage hardware
Hardware Configuration Notes
Storage Common:
Two storage processors
(active/active)
Two 10 GbE interfaces per storage
processor
For 50 virtual machines
EMC VNXe 3150
Forty-five 300 GB 15k RPM 3.5-inch SAS
disks (9 * 300 GB 4+1 R5 Performance
Drive Packs)
Two 300 GB 15k RPM 3.5-inch SAS disks
as hot spares
For 100 virtual machines
EMC VNXe 3300
Seventy-seven 300 GB 15k RPM 3.5-
inch SAS disks (11 * 300 GB 6+1 R5
Performance Drive Packs)
Three 300 GB 15k RPM 3.5-inch SAS
disks as hot spares
Include the
initial disk pack
on the VNXe.
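The raw usable capacity implied by these disk packs follows from RAID 5 geometry, where each N+1 pack contributes N data disks (raw arithmetic only; actual usable space is lower after formatting, hot spares, and system overhead):

```python
def raid5_usable_gb(packs, data_disks_per_pack, disk_gb=300):
    """Each N+1 RAID 5 pack contributes N data disks of raw capacity."""
    return packs * data_disks_per_pack * disk_gb

vnxe3150_gb = raid5_usable_gb(9, 4)   # 9 x (4+1) packs  -> 10,800 GB raw usable
vnxe3300_gb = raid5_usable_gb(11, 6)  # 11 x (6+1) packs -> 19,800 GB raw usable
```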
This section provides guidelines for setting up the storage layer of the
solution to provide high availability and the expected level of
performance.
Hyper-V storage virtualization for VSPEX
Windows Server 2012 Hyper-V and Failover Clustering leverage Cluster
Shared Volumes (CSV) v2 and the new Virtual Hard Disk format (VHDX) to
virtualize storage presented from an external shared storage system to
host virtual machines.
Figure 8. Hyper-V virtual disk types
Cluster Shared Volumes v2
Cluster Shared Volumes (CSV) were introduced in Windows Server 2008 R2.
They enable all cluster nodes to access the shared storage
simultaneously for hosting virtual machines. Windows Server 2012
introduces a number of new capabilities with CSV v2, including flexible
application and file storage, integration with other Windows Server 2012
features, a single namespace, and improved backup and restore.
New Virtual Hard Disk format
Hyper-V in Windows Server 2012 contains an update to the VHD format
called VHDX, which has much larger capacity and built-in resiliency. The
main new features of VHDX format are:
Support for virtual hard disk storage with the capacity of up to 64 TB
Additional protection against data corruption during power failures
by logging updates to the VHDX metadata structures
Optimal structure alignment of the virtual hard disk format to suit
large sector disks
The VHDX format also has the following features:
Larger block sizes for dynamic and differential disks, which enables
the disks to meet the needs of the workload
4 KB logical sector virtual disks, which enable increased
performance when used by applications and workloads that are
designed for 4 KB sectors
The ability to store custom metadata about the files that the user
might want to record, such as the operating system version or
applied updates
Space reclamation features that can result in smaller file sizes and
enable the underlying physical storage device to reclaim unused
space (Trim, for example, requires direct-attached storage or SCSI
disks and Trim-compatible hardware)
Storage layout overview
Figure 9 shows the overall storage layout of the 50 virtual machine solution.
Figure 9. Storage layout for 50 virtual machines
The architecture for up to 50 virtual machines uses the following
configuration:
Forty-five 300 GB SAS disks allocated to a single storage pool as nine
4+1 RAID 5 groups (sold as nine packs of five disks).
At least one hot spare allocated for every 30 disks of a given type.
At least four iSCSI LUNs allocated to the Hyper-V cluster from the
single storage pool to serve as datastores for the virtual servers.
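The hot-spare rule above is simple ceiling arithmetic. As a minimal sketch (the function name is illustrative, not from the document), this reproduces the spare counts of both validated layouts:

```python
import math

def hot_spares_required(disk_count, disks_per_spare=30):
    """At least one hot spare for every 30 disks of a given type,
    rounded up -- the rule used in this solution's layouts."""
    return math.ceil(disk_count / disks_per_spare)

# 50-VM layout: 45 data disks -> 2 hot spares
# 100-VM layout: 77 data disks -> 3 hot spares
print(hot_spares_required(45), hot_spares_required(77))  # 2 3
```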
Storage layout overview
Figure 10 shows the overall storage layout of the 100 virtual machine
solution.
Figure 10. Storage layout for 100 virtual machines
The architecture for up to 100 virtual machines uses the following
configuration:
Seventy-seven 300 GB SAS disks allocated to a single storage pool
as eleven 6+1 RAID 5 groups (sold as 11 packs of seven disks).
At least one hot spare disk allocated for every 30 disks of a given
type.
At least 10 iSCSI LUNs allocated to the Hyper-V cluster from the
single storage pool to serve as datastores for the virtual servers.
Note If more capacity is required in either configuration, larger drives
may be substituted. To meet the load recommendations, all drives
must be 15k RPM and of the same size. If different sizes are used,
storage layout algorithms may produce suboptimal results.
High availability and failover
This VSPEX solution provides a highly available virtualized server,
network, and storage infrastructure. When implemented according to this
guide, the environment can survive single-unit failures with minimal or
no impact on business operations.
Virtualization layer
Configure high availability in the virtualization layer, and configure the
hypervisor to automatically restart failed virtual machines. Figure 11
illustrates the hypervisor layer responding to a failure in the compute layer.
Figure 11. High Availability at the virtualization layer
By implementing high availability at the virtualization layer, even in the
event of a hardware failure, the infrastructure attempts to keep as many
services running as possible.
Compute layer
Use enterprise-class servers designed for the datacenter to implement the
compute layer when possible. This type of server has redundant power
supplies, which should be connected to separate power distribution units
(PDUs) in accordance with your server vendor's best practices.
Figure 12. Redundant power supplies
Configure high availability in the virtualization layer. The compute layer
must be configured with enough resources so that the total number of
available resources meets the needs of the environment, even with a
server failure, as demonstrated in Figure 11.
Brocade VDX Network layer
The advanced networking features of the VNX family and Brocade VDX
with VCS Ethernet Fabric provide protection against network connection
failures at the array. Each Hyper-V host has multiple connections to user
and storage Ethernet networks to guard against link failures. These
connections should be spread across multiple Brocade Ethernet Fabric
switches to guard against component failure in the network.
Figure 13. Network layer High Availability
Note Figure 13 demonstrates a highly available network topology based
on VNXe 3300. A similar topology should be constructed if using the
VNXe 3150.
By ensuring that there are no single points of failure in the network layer,
the compute layer is able to access storage, and communicate with users
even if a component fails.
Storage layer
The VNX family is designed for five 9s availability by using redundant
components throughout the array. All of the array components are
capable of continued operation in case of hardware failure. The RAID disk
configuration on the array provides protection against data loss caused by
individual disk failures, and the available hot spare drives can be
dynamically allocated to replace a failing disk, as shown in Figure 14.
Figure 14. VNXe series High Availability
EMC Storage arrays are designed to be highly available by default.
Configure the storage arrays according to the installation guides to ensure
that no single unit failures cause data loss or unavailability.
Backup and recovery configuration guidelines
This section provides guidelines for setting up a backup and recovery
environment for this VSPEX solution, and describes how to characterize
and design the backup environment.
Backup characteristics
This VSPEX solution was sized with the application environment profile
shown in Table 6.
Table 6. Backup profile characteristics
Number of users: 500 for 50 virtual machines; 1,000 for 100 virtual machines
Number of virtual machines: 50 or 100 (Note: 20% DB, 80% unstructured)
Exchange data: 0.5 TB for 50 virtual machines; 1 TB for 100 virtual
machines (Note: 1 GB mailbox per user)
SharePoint data: 0.25 TB for 50 virtual machines; 0.5 TB for 100 virtual
machines
SQL Server data: 0.25 TB for 50 virtual machines; 0.5 TB for 100 virtual
machines
User data: 2.5 TB for 50 virtual machines; 5 TB for 100 virtual machines
(5.0 GB per user)

Daily change rate for the applications:
Exchange data: 10%
SharePoint data: 2%
SQL Server data: 5%
User data: 2%

Retention per data type:
All DB data: 14 dailies
User data: 30 dailies, 4 weeklies, 1 monthly
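The capacities and daily change rates in Table 6 determine how much changed data the backup environment must ingest each day. As an illustrative check only (not an official sizing tool), the 100-virtual-machine profile works out as follows:

```python
# Application data sizes (TB) and daily change rates from Table 6,
# 100-virtual-machine profile.
profile = {
    "exchange":   (1.0, 0.10),
    "sharepoint": (0.5, 0.02),
    "sql":        (0.5, 0.05),
    "user_data":  (5.0, 0.02),
}

# Daily changed data that the deduplicating backup must process.
daily_change_tb = sum(size * rate for size, rate in profile.values())
print(f"{daily_change_tb:.3f} TB changed per day")  # 0.235 TB
```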
Backup layout for up to 100 virtual machines
Avamar Business Edition is a purpose-built backup appliance that
provides a conveniently sized, turnkey, affordable, deduplicated backup
solution. Designed for mid-market companies, it features simplified
management, making it ideal for organizations with limited IT resources.
With built-in storage resiliency, it eliminates the requirement and
expense of a second replicated system. Powered by industry-leading EMC
Avamar software, Avamar Business Edition delivers fast daily full
backups along with one-step recovery for VSPEX Proven Infrastructures.
Sizing guidelines
The following sections define the reference workload used to size and
implement the VSPEX architectures, provide guidance on correlating that
reference workload to actual customer workloads, and describe how the
correlation may change the final delivery from the server and network
perspective.
You can modify the storage definition by adding drives for greater
capacity and performance. The disk layouts are created to provide
support for the appropriate number of virtual machines at the defined
performance level along with typical operations such as snapshots.
Decreasing the number of recommended drives or stepping down to a
lower performing array type can result in lower IOPS per virtual machine
and a reduced user experience due to higher response times.
Reference workload
When considering an existing server to move into a virtual infrastructure,
you have the opportunity to gain efficiency by right-sizing the virtual
hardware resources assigned to that system.
Each VSPEX Proven Infrastructure balances the storage, network, and
compute resources needed for a set number of virtual machines that have
been validated by EMC. In practice, each virtual machine has its own set
of requirements that rarely fit a predefined idea of what a virtual machine
should be. In any discussion about virtual infrastructures, it is important to
first define a reference workload. Not all servers perform the same tasks,
and it is impractical to build a reference model that takes into account
every possible combination of workload characteristics.
Defining the reference workload
To simplify the discussion, we have defined a representative customer
reference workload. By comparing your actual customer usage to this
reference workload, you can extrapolate which reference architecture to
choose.
For the VSPEX solutions, the reference workload is defined as a single virtual
machine. Table 7 lists the characteristics of this virtual machine:
Table 7. Virtual machine characteristics
Virtual machine operating system: Microsoft Windows Server 2012
Datacenter Edition
Virtual processors per virtual machine: 1
RAM per virtual machine: 2 GB
Available storage capacity per virtual machine: 100 GB
I/O operations per second (IOPS) per virtual machine: 25
I/O pattern: Random
I/O read/write ratio: 2:1
This specification for a virtual machine is not intended to represent any
specific application. Rather, it represents a single common point of
reference against which other virtual machines can be measured.
Applying the reference workload
The reference architectures create a pool of resources that are sufficient
to host a target number of Reference virtual machines with the
characteristics shown in Table 7. The customer virtual machines may not
exactly match these specifications. In that case, define each specific
customer virtual machine as the equivalent of some number of Reference
virtual machines, and consider those Reference virtual machines to be in
use in the pool.
Continue to provision virtual machines from the resource pool until no
resources remain.
Example 1: Custom-built application
A small custom-built application server needs to move into this
infrastructure. The physical hardware that supports the application is not
fully utilized. A careful analysis of the existing application reveals that the
application can use one processor, and needs 3 GB of memory to run
normally. The I/O workload ranges from four IOPS at idle time to a peak of
15 IOPS when busy. The entire application consumes about 30 GB of local
hard drive storage.
Based on the numbers, the following resources are required from the
resource pool:
CPU resources for one virtual machine
Memory resources for two virtual machines
Storage capacity for one virtual machine
I/Os for one virtual machine
In this example, a single virtual machine uses the resources for two of the
Reference virtual machines. If the original pool has the resources to
provide 100 Reference virtual machines, the resources for 98 Reference
virtual machines remain.
Example 2: Point of sale system
The database server for a customer's point-of-sale system needs to move
into this virtual infrastructure. It is currently running on a physical system
with four CPUs and 16 GB of memory. It uses 200 GB of storage and
generates 200 IOPS during an average busy cycle.
The following resources are required to virtualize this application:
CPUs of four Reference virtual machines
Memory of eight Reference virtual machines
Storage of two Reference virtual machines
I/Os of eight Reference virtual machines
In this case, the one virtual machine uses the resources of eight Reference
virtual machines. To implement this one machine on a pool for 100
Reference virtual machines, the resources of eight Reference virtual
machines are consumed and resources for 92 Reference virtual machines
remain.
Example 3: Web server
The customer's web server needs to move into this virtual
infrastructure. It is currently running on a physical system with two CPUs
and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS
during an average busy cycle.
The following resources are required to virtualize this application:
CPUs of two Reference virtual machines
Memory of four Reference virtual machines
Storage of one Reference virtual machine
I/Os of two Reference virtual machines
In this case, the one virtual machine would use the resources of four
Reference virtual machines. If the configuration is implemented on a
resource pool for 100 Reference virtual machines, resources for 96
Reference virtual machines remain.
Example 4: Decision-support database
The database server for a customer's decision-support system needs to
move into this virtual infrastructure. It is currently running on a physical
system with 10 CPUs and 64 GB of memory. It uses 5 TB of storage and
generates 700 IOPS during an average busy cycle.
The following resources are required to virtualize this application:
CPUs of 10 Reference virtual machines
Memory of 32 Reference virtual machines
Storage of 52 Reference virtual machines
I/Os of 28 Reference virtual machines
In this case, the one virtual machine uses the resources of 52 Reference
virtual machines. If this configuration is implemented on a resource pool
for 100 Reference virtual machines, resources for 48 Reference virtual
machines remain.
Summary of examples
The four examples illustrate the flexibility of the resource pool model. In all
four cases, the workloads simply reduce the amount of available resources
in the pool. All four examples can be implemented on the same virtual
infrastructure with an initial capacity for 100 Reference virtual machines,
and resources for 34 Reference virtual machines remain in the resource
pool, as shown in Figure 15.
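The mapping used in these examples can be captured in a few lines. The sketch below is illustrative (the function and table names are not from the document); it reproduces all four results against the Table 7 reference virtual machine:

```python
import math

# Reference virtual machine characteristics from Table 7.
REF = {"vcpus": 1, "ram_gb": 2, "storage_gb": 100, "iops": 25}

def equivalent_reference_vms(vcpus, ram_gb, storage_gb, iops):
    """A workload consumes Reference VMs on its most demanding
    dimension: take the ceiling of each ratio, then the maximum."""
    return max(
        math.ceil(vcpus / REF["vcpus"]),
        math.ceil(ram_gb / REF["ram_gb"]),
        math.ceil(storage_gb / REF["storage_gb"]),
        math.ceil(iops / REF["iops"]),
    )

# The four examples: (vCPUs, RAM GB, storage GB, IOPS).
examples = {
    "custom-built application": (1, 3, 30, 15),       # -> 2
    "point-of-sale database":   (4, 16, 200, 200),    # -> 8
    "web server":               (2, 8, 25, 50),       # -> 4
    "decision-support DB":      (10, 64, 5120, 700),  # -> 52
}
used = sum(equivalent_reference_vms(*spec) for spec in examples.values())
print(used, 100 - used)  # 66 consumed, 34 remaining in a 100-VM pool
```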
Figure 15. Resource pool flexibility
In more advanced cases, there may be tradeoffs between memory and
I/O or other relationships where increasing the amount of one resource
decreases the need for another. In these cases, the interactions between
resource allocations become highly complex, and are outside the scope
of the document. Once the change in resource balance has been
examined and the new level of requirements is known, these virtual
machines can be added to the infrastructure using the method described
in the examples.
Implementing the reference architectures
The reference architectures require a set of hardware to be available for
the CPU, memory, network, and storage needs of the system. In this VSPEX
solution, these are presented as general requirements that are
independent of any particular implementation. This section describes
some considerations for implementing the requirements.
Resource types
The reference architectures define the hardware requirements for this
VSPEX solution in terms of the following basic types of resources:
CPU resources
Memory resources
Brocade network resources
Storage resources
This section describes the resource types, how to use them in the reference
architectures, and key considerations for implementing them in a
customer environment.
CPU resources
The architectures define the number of required CPU cores, not a
specific type or configuration. New deployments are expected to use
recent revisions of common processor technologies, which are assumed to
perform as well as, or better than, the systems used to validate the
solution.
In any running system, it is important to monitor the utilization of resources
and adapt as needed. The Reference virtual machine and required
hardware resources in the reference architectures assume that there are
no more than four virtual CPUs for each physical processor core (4:1 ratio).
In most cases, this provides an appropriate level of resources for the
hosted virtual machines; however, this ratio may not be appropriate in all
use cases. Monitor the CPU utilization at the hypervisor layer to determine
if more resources are required.
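At the stated 4:1 vCPU-to-core ratio, the minimum physical core count is a ceiling division. A small sketch (the helper name is an assumption):

```python
import math

def physical_cores_needed(total_vcpus, vcpus_per_core=4):
    """Minimum physical cores under the reference 4:1 ratio."""
    return math.ceil(total_vcpus / vcpus_per_core)

# 100 single-vCPU reference virtual machines need at least 25 cores.
print(physical_cores_needed(100))  # 25
```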
Memory resources
Each virtual server in the reference architectures is defined to have 2 GB of
memory. In a virtual environment, it is common, often due to budget
constraints, to provision virtual machines with more memory than the
hypervisor physically has. The memory overcommitment technique takes
advantage of the fact that each virtual machine may not fully use the
memory allocated to it, so it can make business sense to oversubscribe
memory to some degree. The administrator is responsible for monitoring
the oversubscription rate so that swapping does not shift the bottleneck
from the servers to the storage subsystem.
This solution is validated with statically assigned memory and no
overcommitment of memory resources. If memory overcommitment is used
in a real-world environment, regularly monitor the system memory
utilization and associated page file I/O activity to ensure that a memory
shortfall does not cause unexpected results.
Brocade network resources
The reference architecture outlines the minimum needs of the system. If
additional bandwidth is needed, add capability at both the storage array
and the hypervisor host to meet the requirements. The options for
Brocade network connectivity on the server depend on the type of server,
with either 1 GbE or 10 GbE connectivity. The storage arrays include a
number of network ports, with the option to add 10 GbE ports using EMC
FLEX I/O modules.
For reference purposes in the validated environment, EMC assumes that
each virtual machine generates 25 I/Os per second with an average size of
8 KB. This means that each virtual machine generates at least 200 KB/s
of traffic on the storage network. For an environment rated for 100 virtual
machines, that amounts to a minimum of approximately 20 MB/s. This
is well within the bounds of modern networks. However, this does not
consider other operations. For example, additional bandwidth is needed
for the following operations:
User network traffic
Virtual machine migration
Administrative and management operations
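The roughly 20 MB/s figure follows directly from the per-VM assumptions. A quick sketch of that arithmetic (the helper name is illustrative):

```python
def storage_traffic_mb_per_s(vm_count, iops_per_vm=25, io_size_kb=8):
    """Steady-state storage traffic: IOPS x I/O size, summed over VMs."""
    return vm_count * iops_per_vm * io_size_kb / 1024  # KB/s -> MB/s

print(storage_traffic_mb_per_s(100))  # ~19.5 MB/s, i.e. roughly 20 MB/s
```

This counts only the steady-state storage I/O; user traffic, migration, and management operations add to it, as noted above.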
The requirements for each of these vary depending on how the
environment is being used. It is not practical to provide concrete numbers
in this context. However, the network described in the reference
architecture for each solution should be sufficient to handle average
workloads for the preceding use cases. The specific network layer
connectivity for the Brocade VDX Fabric solution is defined in Chapter 5.
Regardless of the network traffic requirements, always have at least two
physical network connections serving each logical network, so that a
single link failure does not affect the availability of the system. Design
the network so that the aggregate bandwidth available after a failure is
sufficient to accommodate the full workload.
Storage resources
The reference architectures contain layouts for the disks used in the
validation of the system. Each layout balances the available storage
capacity with the performance capability of the drives. There are a few
layers to consider when examining storage sizing. Specifically, the array
has a collection of disks that are assigned to a storage pool. From that
storage pool, you can provision datastores to the Microsoft Hyper-V
cluster. Each layer has a specific configuration that is defined for the
solution and documented in the deployment guide.
It is generally acceptable to replace drive types with a type that has more
capacity with the same performance characteristics or with ones that
have higher performance characteristics and the same capacity.
Similarly, it is acceptable to change the placement of drives in the drive
shelves in order to comply with updated or new drive shelf arrangements.
In other cases where there is a need to deviate from the proposed number
and type of drives specified, or the specified pool and datastore layouts,
ensure that the target layout delivers the same or greater resources to the
system.
Implementation summary
The requirements that are stated in the reference architectures are what
EMC considers the minimum set of resources to handle the workloads
required based on the stated definition of a reference virtual server. In any
customer implementation, the load of a system varies over time as users
interact with the system. However, if the customer virtual machines differ
significantly from the reference definition, the system may require
additional resources.
Quick assessment
An assessment of the customer environment helps ensure that you
implement the correct VSPEX solution. This section provides an easy-to-use
worksheet to simplify the sizing calculations, and help assess the customer
environment.
Summarize the applications that are planned for migration into the VSPEX
private cloud. For each application, determine the number of virtual
CPUs, the amount of memory, the required storage performance, the
required storage capacity, and the number of Reference virtual machines
required from the resource pool. Applying the reference workload
provides examples of this process.
Fill out a row in the worksheet for each application, as shown in Table 8.
Table 8. Blank worksheet row
Application: Example application
Resource requirements: CPU (virtual CPUs) ___ | Memory (GB) ___ |
IOPS ___ | Capacity (GB) ___
Equivalent Reference virtual machines: ___
Fill out the resource requirements for the application. The row requires
inputs on four different resources: CPU, Memory, IOPS, and Capacity.
CPU requirements
Optimizing CPU utilization is a significant goal for almost any virtualization
project. A simple view of the virtualization operation suggests a
one-to-one mapping between physical CPU cores and virtual CPU cores
regardless of the physical CPU utilization. In reality, consider whether the
target application can effectively use all of the presented CPUs. Use a
performance-monitoring tool, such as Microsoft perfmon to examine the
CPU Utilization counter for each CPU. If they are equivalent, implement
that number of virtual CPUs when moving into the virtual infrastructure.
However, if some CPUs are used and some are not, consider decreasing
the number of virtual CPUs that are required.
In any operation involving performance monitoring, it is a best practice to
collect data samples for a period of time that includes all of the
operational use cases of the system. Use either the maximum or 95th
percentile value of the resource requirements for planning purposes.
Memory requirements
Server memory plays a key role in ensuring application functionality and
performance. Therefore, each server process has different targets for the
acceptable amount of available memory. When moving an application
into a virtual environment, consider the current memory available to the
system, and monitor the free memory by using a performance-monitoring
tool like perfmon, to determine if it is being used efficiently.
Storage performance requirements
The storage performance requirements for an application are usually the
least understood aspect of performance. Three components become
important when discussing the I/O performance of a system:
The number of requests coming in, or IOPS
The size of the request, or I/O size -- a request for 4 KB of data is
significantly easier and faster to process than a request for 4 MB of
data
The average I/O response time or latency
I/O operations per second (IOPS)
The Reference virtual machine calls for 25 I/O operations per second. To
monitor this on an existing system, use a performance-monitoring tool
such as perfmon, which provides several counters that can help here:
Logical Disk\Disk Transfer/sec
Logical Disk\Disk Reads/sec
Logical Disk\Disk Writes/sec
The Reference virtual machine assumes a 2:1 read/write ratio. Use these
counters to determine the total number of IOPS and the approximate
ratio of reads to writes for the customer application.
I/O size
The I/O size is important because smaller I/O requests are faster and easier
to process than large I/O requests. The Reference virtual machine
assumes an average I/O request size of 8 KB, which is appropriate for a
large range of applications. Use perfmon or another appropriate tool to
monitor the “Logical Disk\Avg. Disk Bytes/Transfer” counter to see the
average I/O size. I/O sizes that are even powers of 2 KB (4 KB, 8 KB,
16 KB, 32 KB, and so on) are common, but because the performance
counter reports a simple average, it is common to see values such as
11 KB or 15 KB instead.
The Reference virtual machine assumes an 8 KB I/O size. If the average
customer I/O size is less than 8 KB, use the observed IOPS number.
However, if the average I/O size is significantly higher, apply a scaling
factor to account for the large I/O size. A safe estimate is to divide the I/O
size by 8 KB and use that factor. For example, if the application is using
mostly 32 KB I/O requests, use a factor of four (32 KB / 8 KB = 4). If that
application is doing 100 IOPS at 32 KB, the factor indicates to plan for 400
IOPS since the Reference virtual machine assumed 8 KB I/O sizes.
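The scaling rule can be expressed directly. A minimal sketch (the function name is illustrative) that leaves IOPS unscaled when the average I/O is at or below the 8 KB reference size:

```python
def scaled_iops(observed_iops, avg_io_kb, ref_io_kb=8):
    """Scale observed IOPS up when the average I/O size exceeds the
    8 KB reference size; smaller I/Os keep the observed number."""
    factor = max(avg_io_kb / ref_io_kb, 1.0)
    return observed_iops * factor

print(scaled_iops(100, 32))  # plan for 400.0 reference IOPS
```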
The average I/O response time, or I/O latency, is a measurement of how
quickly I/O requests are processed by the storage system. The VSPEX
solutions are designed to meet a target average I/O latency of 20 ms. The
recommendations in the Sizing guidelines section should allow the system
to continue to meet that target; however, it is worthwhile to monitor the
system and re-evaluate the resource pool utilization if needed. To monitor
I/O latency, use the “Logical Disk\Avg. Disk sec/Transfer” counter in
perfmon. If the I/O latency is continuously over the target, re-evaluate the
virtual machines in the environment to ensure that they are not using more
resources than intended.
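perfmon reports this counter in seconds, so a threshold check needs a unit conversion. A small sketch against the 20 ms design target (the function name is an assumption):

```python
def latency_exceeds_target(avg_disk_sec_per_transfer, target_ms=20.0):
    """Compare perfmon's Logical Disk\\Avg. Disk sec/Transfer value
    (reported in seconds) against the 20 ms design target."""
    return avg_disk_sec_per_transfer * 1000.0 > target_ms

print(latency_exceeds_target(0.012))  # False: 12 ms is within target
print(latency_exceeds_target(0.035))  # True: 35 ms warrants re-evaluation
```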
The storage capacity requirement for a running application is usually the
easiest resource to quantify. Determine how much space on disk the
system is using, and add an appropriate factor to accommodate growth.
For example, to virtualize a server that is currently using 40 GB of a 200 GB
internal drive with anticipated growth of approximately 20% over the next
year, 48 GB are required. EMC also recommends reserving space for
regular maintenance, patches, and swap files. In addition, some file
systems, such as Microsoft NTFS, degrade in performance if they become too
full.
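The worked example is simply current usage plus a growth allowance. As a sketch (the function name is illustrative; 20% annual growth as in the example):

```python
def capacity_required_gb(in_use_gb, growth_rate=0.20):
    """Current on-disk usage plus an anticipated growth allowance."""
    return in_use_gb * (1 + growth_rate)

print(capacity_required_gb(40))  # ~48 GB, matching the worked example
```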
With all of the resources defined, determine an appropriate value for the
equivalent Reference virtual machines line by using the relationships in
Table 9. Round all values up to the closest whole number.