DELL EMC VMAX ALL FLASH STORAGE FOR MICROSOFT HYPER-V DEPLOYMENT July 2017
VMAX Engineering White Paper
ABSTRACT
This white paper examines deployment of the Microsoft Windows Server Hyper-V
virtualization solution on Dell EMC VMAX All Flash arrays, with focus on storage
efficiency, availability, scalability, and best practices.
H16434R
This document is not intended for audiences in China, Hong Kong, Taiwan,
and Macao.
WHITE PAPER
Copyright
The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Dell EMC VMAX All Flash storage array product overview
You can manage VMAX arrays by using Dell EMC Unisphere™ for VMAX, the Solutions
Enabler command-line interface (CLI), or REST APIs.
Scale-up and scale-out flexibility—The VMAX All Flash arrays use V-Bricks to
scale out.
The VMAX 250 starter V-Brick consists of one VMAX engine and 11 TBu of
capacity. The system scales out to two V-Bricks and scales up with Flash
Capacity Packs in 11 TBu increments.
VMAX 450, VMAX 850, and VMAX 950 starter V-Bricks consist of a VMAX
engine and 53 TBu of capacity. These V-Bricks scale up with Flash Capacity
Packs in increments of 13 TBu.
You can order the following VMAX All Flash storage systems:
F package—An entry package with pre-packaged software bundles
FX package—A more comprehensive package with a broader set of software titles
The packages also include embedded Unisphere for VMAX management and
monitoring.
High performance—VMAX All Flash storage is designed for high performance and
low latency. It scales from one engine up to eight engines (V-Bricks). Each engine
consists of dual directors. Each director includes two-socket Intel CPUs, front-end
and back-end connectivity, a hardware compression module, InfiniBand internal
fabric, and a large mirrored and persistent cache. All writes are acknowledged to
the host as soon as they are registered in the VMAX cache1 and are destaged to
flash later, possibly after multiple updates to the same data. Reads also benefit from the
large VMAX cache. When a read is requested for data that is not already in cache,
FlashBoost technology delivers the I/O directly from the back-end (flash) to the
front-end (host). Reads are only later staged in the cache for possible future
access. VMAX All Flash storage also excels in servicing high bandwidth sequential
workloads that leverage pre-fetch algorithms, optimized writes, and fast front-end
and back-end interfaces.
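The read path described above can be illustrated with a minimal conceptual sketch (plain Python, not VMAX internals; all names and values are illustrative): a read is first checked against the cache, and on a miss the data is delivered directly from flash, with cache staging kept off the host I/O path.

```python
# Conceptual sketch only (not VMAX internals): a read checks the cache
# first; on a miss, data is delivered straight from flash and staged into
# cache afterward, off the host I/O path.

def service_read(block, cache, flash):
    """Return (data, source) for `block`, preferring the cache."""
    if block in cache:            # cache hit: served from mirrored DRAM
        return cache[block], "cache"
    data = flash[block]           # cache miss: read directly from flash
    # Staging happens after the host I/O completes; here it is immediate
    # for simplicity, a real array would defer it asynchronously.
    cache[block] = data
    return data, "flash"

cache, flash = {}, {0: "D1"}
print(service_read(0, cache, flash))  # first access served from flash
print(service_read(0, cache, flash))  # subsequent access served from cache
```

The point of the design is that a miss costs only flash latency, while repeated reads of the same data are served at cache speed.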
Copy Data Management, Disaster Recovery (DR), and HA—VMAX All Flash
storage offers a strong set of data services. It natively protects all data with T10-DIF
from the moment data enters the array until it leaves (including replications). With
Dell EMC SnapVX™ and Dell EMC SRDF™, VMAX All Flash storage provides
many topologies for consistent local and remote replications. VMAX All Flash
storage provides optional Data at Rest Encryption (D@RE), integration with Dell
EMC Data Domain™ through Dell EMC ProtectPoint™ software, and cloud gateways
through Dell EMC CloudArray™ software. Other VMAX data services include Quality of Service
(QoS)2, compression, the “Call-Home” support feature, non-disruptive upgrades
(NDU), non-disruptive migrations (NDM), and so on. In virtual environments, VMAX
1 VMAX All Flash cache is large (from 512 GB to 16 TB, based on the configuration), mirrored, and
persistent due to the vault module that protects the cache content in case of a power failure, and
then restores it when the system comes back up.
2 Two separate features support VMAX QoS. The first is Host I/O limits, which place IOPS
and bandwidth limits on "noisy neighbor" applications (sets of devices) such as test/dev
environments. The second slows down the copy rate for local or remote replications.
All Flash storage also supports vStorage APIs for Array Integration (VAAI)
primitives such as write-same and xcopy.
VMAX All Flash storage also provides automatic, scheduled, and application-
consistent snapshots for Microsoft SQL Server and other applications for creating
point-in-time copies for backup, reporting, or test/dev natively using SnapVX
software. AppSync software provides policy-driven, automated, and self-service
snapshots for applications with tighter integration between SnapVX and Microsoft
VSS and SQL Server Virtual Device Interface (VDI).
With the introduction of the HYPERMAX OS Q3 2016 microcode release, VMAX All Flash systems can perform data compression to significantly increase the effective capacity of the array. With this release, HYPERMAX OS uses the Adaptive Compression Engine (ACE) to compress data and to optimize system resources efficiently, balancing overall system performance.
Figure 2. VMAX All Flash benefits for Microsoft environments
Microsoft Windows Server and Hyper-V on VMAX
Hyper-V is Microsoft's hardware virtualization product that enables you to create and run a
software version of a computer, called a virtual machine (VM). Each VM acts like a
complete computer, running an operating system and programs. When you need
computing resources, VMs give you more flexibility, help save time and money, and are a
more efficient way to use hardware than running just one operating system on physical
hardware.
Hyper-V runs each VM in its own isolated space, which means that you can run more than
one VM on the same hardware at the same time. You might want to do this to prevent
a failure in one workload from affecting other workloads, or to give different people,
groups, or services access to different systems.
Hyper-V provides customers with an ideal platform for key virtualization scenarios, such
as production server consolidation, business continuity management, software test and
development, and building an agile data center. Scalability and high performance
are achieved through features such as guest multiprocessing and 64-bit
guest and host support. Features such as quick migration of virtual machines from one
physical host to another and integration with System Center Virtual Machine Manager
provide users with flexibility and ease of use.
Hyper-V allows customers to achieve significant space, power, and cooling savings while
maintaining availability and performance targets. VMAX storage systems can provide
additional value to customers by providing the ability to consolidate storage resources,
implement advanced high-availability solutions, and provide seamless multisite protection
of customer data assets.
As customers seek to consolidate data center operations, Microsoft’s Hyper-V hypervisor
provides a scalable solution for virtualization on the Windows Server platform. To further
facilitate cost savings, large-scale consolidation efforts can benefit by optimizing and
consolidating storage resources to a single storage repository. Additionally, many of the
advanced features of the Hyper-V environment are either facilitated by, or enhanced with,
the implementation of scalable storage on VMAX arrays.
Hyper-V can enable you to:
Establish or expand a private cloud environment. Provide more flexible, on-demand
IT services by moving to or expanding your use of shared resources and adjusting
utilization as demand changes.
Use your hardware more effectively. Consolidate servers and workloads onto fewer,
more powerful physical computers to use less power and physical space.
Improve business continuity. Minimize the impact of both scheduled and
unscheduled downtime of your workloads.
Establish or expand a virtual desktop infrastructure (VDI). Using a centralized
desktop strategy with VDI can help you increase business agility and data security,
as well as simplify regulatory compliance and the management of desktop operating
systems and applications. Deploy Hyper-V and Remote Desktop Virtualization Host (RD
Virtualization Host) on the same server to make personal virtual desktops or virtual
desktop pools available to your users.
Make development and test more efficient. Reproduce different computing
environments without having to buy or maintain all the hardware you'd need if you
only used physical systems.
Windows Server 2016 has a new Nano Server, a stripped-down edition of Windows
Server that is optimized for hosting Hyper-V, running in a VM or running a single
application. There is no desktop or local login because it is designed to be automated with
PowerShell. Nano Server benefits include faster restarts, a smaller attack surface, and the
ability to run more VMs on the same physical hardware. Fewer features also mean fewer
patches and fewer forced reboots. In Windows Server 2016, Microsoft recommends Nano
Server as the default host for Hyper-V.
Containers are another feature of Windows Server 2016. With containers, the application
and its resources and dependencies are packaged together so that deployment can be automated.
Containers go hand in hand with microservices, the concept of decomposing applications
into small units, each of which runs separately. Windows Server 2016
supports both Windows Server containers, which share operating system files and
memory, and Hyper-V containers, which have their own operating system kernel, files,
and memory. Hyper-V containers have greater isolation and security, at the expense of
efficiency.
Windows Server and VMAX All Flash design considerations
VMAX storage provisioning has become simpler than in previous releases. The following
sections discuss the principles and considerations for storage connectivity and
provisioning for Microsoft Windows Server. When using Windows Server with VMAX All Flash
storage, consider the overall system design and configuration to gain the most benefits
and avoid artificial bottlenecks.
The flexible VMAX architecture offers three connectivity options for providing storage to
Windows Server 2016 and Hyper-V VMs: block storage access over Fibre Channel or
iSCSI, and file-based access using the SMB 3.0 and NFS protocols over Ethernet interfaces.
over Ethernet interfaces. For block access, host HBA ports (initiators) and VMAX storage
ports (targets) are connected to an FC or Ethernet switch, depending on the connectivity
requirements. FC connectivity requires that you create zones on the switch and define
which initiator has access to which target. The zones create an I/O path between the host
and storage. iSCSI connectivity requires that you set up IP network connectivity between
the host and VMAX array. File-based connectivity also requires that you set up the IP
network between the host and VMAX-embedded file server eNAS. Figure 3 shows all
connectivity options available between a VMAX array and a Windows Server.
Figure 3. VMAX to Windows Server connectivity options
FC Connectivity
Use at least two HBAs for each server to enable better availability and scale. Use
multipathing software such as Dell EMC PowerPath® or Microsoft Windows Multipath I/O
(MPIO) to load balance and to fail over and recover paths automatically. To check that all
paths are visible and active, use the PowerPath command:
powermt display paths
With native Windows MPIO, you can list MPIO disks and their paths with:
mpclaim -s -d
When zoning host initiators to storage target ports, ensure that each pair is on the same
switch. Performance bottlenecks are often created when I/Os travel through ISL (paths
between switches) because they are shared and limited.
Consider the ports’ speed and count when planning bandwidth requirements. Each 8 Gb
FC port can deliver up to approximately 800 MB/s. Therefore, a server with four ports
cannot deliver more than approximately 3.2 GB/s. Also, consider that between the host
initiator, storage port, and switch, the lowest speed that is supported by any of these
components is negotiated and used for the I/O paths that are serviced by those
components.
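The bandwidth arithmetic above can be sketched as a small planning helper, assuming each path negotiates down to its slowest component (the speed table and port counts are illustrative rules of thumb, not Dell EMC figures):

```python
# Illustrative FC bandwidth planning sketch. Speeds are the approximate
# usable figures cited above (8 Gb FC ~ 800 MB/s); they are rules of
# thumb, not measured values.

PORT_MBPS = {4: 400, 8: 800, 16: 1600, 32: 3200}  # FC speed (Gb) -> ~MB/s

def path_speed_gb(hba_gb, switch_gb, target_gb):
    """Each I/O path negotiates down to its slowest component."""
    return min(hba_gb, switch_gb, target_gb)

def host_max_mbps(paths):
    """Aggregate ceiling across all paths (ignores protocol overhead)."""
    return sum(PORT_MBPS[path_speed_gb(*p)] for p in paths)

# Four 8 Gb HBA ports through an 8 Gb switch to 16 Gb storage ports:
print(host_max_mbps([(8, 8, 16)] * 4))  # 3200 -> ~3.2 GB/s
```

Note that the 16 Gb storage ports add nothing here: the 8 Gb links cap every path, which is the negotiation behavior the paragraph above describes.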
iSCSI connectivity
Windows Server and Hyper-V Server (2012 and later) include native support for a
software iSCSI initiator as well as Multipath IO for resiliency and load balancing of storage
I/O over multiple network paths. Storage connectivity using iSCSI can be more cost-effective
and flexible than FC because it uses existing Ethernet infrastructure. VMAX All Flash
storage provides iSCSI connectivity over multiple 10 GbE
interfaces. Support for VLANs offers network partitioning and traffic isolation in multitenant
environments and Challenge-Handshake Authentication Protocol (CHAP) addresses
iSCSI security concerns by enabling access to clients that supply valid authentication.
Consider the following guidelines:
For clustered environments, disable the cluster network communication for any
network interfaces that you plan to use for iSCSI.
Use VLANs dedicated to the iSCSI setup. VLANs allow logical grouping of network
endpoints, minimizing network bandwidth contention for iSCSI traffic and eliminating
the impact of noisy neighbors on iSCSI traffic.
If all network devices in the iSCSI communication paths support jumbo frames, use
jumbo frames on Ethernet to improve iSCSI performance.
To minimize host CPU impact due to network traffic, ensure that Transmission
Control Protocol (TCP) offloading is enabled on a host network interface card (NIC),
which offloads processing of the TCP stack to the NIC and eases impact on the
CPU.
As with FC connectivity, use of PowerPath software or native multipathing for
Windows helps load balance and ease queueing issues for iSCSI traffic through the
host NICs.
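The jumbo-frame guideline above can be sketched as a simple check, assuming the effective MTU of a path is the smallest MTU among its devices (the device MTU values are illustrative examples):

```python
# Illustrative check for the jumbo-frame guideline: jumbo frames help only
# if every device on the iSCSI path supports them, because the effective
# MTU is capped by the smallest MTU along the path. Values are examples.

JUMBO_MTU = 9000

def effective_mtu(path_mtus):
    """The usable MTU of a path is the smallest device MTU on it."""
    return min(path_mtus)

def can_use_jumbo(path_mtus, jumbo=JUMBO_MTU):
    return effective_mtu(path_mtus) >= jumbo

# Host NIC and storage port at 9000, but one switch still at 1500:
print(can_use_jumbo([9000, 1500, 9000]))  # False -> leave jumbo frames off
```

A single device left at the standard 1500-byte MTU silently negates the benefit, which is why the guideline requires checking every device in the path before enabling jumbo frames.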
SMB 3.0 connectivity using eNAS
Windows Server 2016 can use SMB 3.0 file shares from the embedded eNAS file
server to store VMs or their copies. With this capability, Hyper-V can store VM files, which
include configuration, virtual hard disk (VHD) files, and snapshots, on SMB file shares.
Using the file share for Hyper-V provides increased flexibility because the existing
converged network can be used for storage connectivity and you can dynamically migrate
VMs or databases in the data center.
Storage Choice for VMs
12 Dell EMC VMAX All Flash Storage For Microsoft Hyper-V Deployment White Paper
The advanced features of VMAX3 eNAS for Microsoft environments include offloaded
data transfer (ODX), MPIO, and jumbo frame support, which allow users to make optimal
use of resources for best performance. VMAX3 eNAS supports data protection for files
using easy-to-schedule periodic snapshots as well as local and remote file system
replication. eNAS also supports the Continuous Availability (CA) feature that allows
Windows-based clients to access SMB shares persistently without the loss of the session
state in case of a failover.
Storage Choice for VMs
VMs can be given access to VMAX storage in two basic ways: through the
Hyper-V server or directly from the VM. In the first case, VHDs are created on the
Hyper-V server and then made available to the VMs. In the second, a virtual Fibre
Channel adapter or an iSCSI initiator in the VM connects it directly to the VMAX
array.
You can create and manage VM storage by using the Hyper-V Manager tool
or PowerShell commands. VHD is a legacy storage format. Starting with Windows Server
2012, Microsoft introduced VHDX. Both VHD and VHDX formats are available; however,
VHDX has distinct advantages over the legacy VHD format. VHDX has a larger storage
capacity compared with the older VHD format. It also protects against data corruption
during power failures and optimizes structural alignments of dynamic and differencing
disks to prevent performance degradation on new, large-sector physical disks. A 4 KB
logical sector virtual disk allows for increased performance when used by applications and
workloads that are designed for 4 KB sectors. When you create a VHDX file, you can
configure it to be a fixed size or dynamic. A fixed-size VHDX file has all space allocated
when the file is created. A dynamic VHDX file grows as data is written to it, which provides
space efficiency. There might be slight overhead when a dynamic VHDX file grows,
but wide striping and virtually provisioned storage on the VMAX All Flash back end
minimize any performance degradation.
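The difference between fixed and dynamic VHDX allocation described above can be sketched as follows (a simplified model; real VHDX files also carry metadata overhead and allocate space in blocks, which this sketch ignores):

```python
# Simplified model of fixed vs dynamic VHDX allocation: a fixed disk
# consumes its full size at creation; a dynamic disk grows as data is
# written. Real VHDX files also carry metadata and allocate in blocks,
# which this sketch ignores.

class SketchVhdx:
    def __init__(self, size_gb, fixed):
        self.size_gb = size_gb          # virtual (provisioned) size
        self.fixed = fixed
        self.allocated_gb = size_gb if fixed else 0

    def write(self, gb):
        if not self.fixed:
            # Dynamic: the file grows with written data, capped at the
            # virtual size.
            self.allocated_gb = min(self.size_gb, self.allocated_gb + gb)

fixed = SketchVhdx(100, fixed=True)
dynamic = SketchVhdx(100, fixed=False)
dynamic.write(10)
print(fixed.allocated_gb, dynamic.allocated_gb)  # 100 10
```

On a virtually provisioned back end such as VMAX All Flash, even a "fixed" VHDX consumes physical flash only as data is actually written, which is part of why the paper treats the dynamic-VHDX growth overhead as minor.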
A VHD Set, a new type of virtual disk model for a guest cluster in Windows Server 2016,
is a VHD created with a .vhds extension. The .vhds file is only 260 KB. A file with a
.vhdx extension is also created to store the actual data and can be either a dynamic or
fixed-size file. The main purpose of a VHD Set is to share the virtual disk between multiple
VMs. The VHD Set is useful for deploying guest clusters for SQL Server, file servers, and
other services that require shared storage.
The Hyper-V virtual Fibre Channel connectivity feature can provide VMs with direct