Get Better I/O Performance in VMware vSphere 5.1 Environments with Emulex 16Gb Fibre Channel HBAs Joey Dieckhans, VMware Yea-Cheng Wang, VMware Alex Amaya, Emulex
Jan 17, 2015
© 2011 Emulex Corporation
Agenda
Introduction
What’s New With VMware vSphere 5.1 for Storage
Performance Study
Emulex LPe16000 16Gb Fibre Channel (16GFC) PCIe 3.0 HBAs
Strategic Management
Conclusion
Q&A
What’s New With VMware vSphere 5.1 for Storage
Copyright © 2009 VMware Inc. All rights reserved. Confidential and proprietary.
Space Efficient Sparse Virtual Disks – Joseph Dieckhans
A new Space Efficient Sparse Virtual Disk format that:
1. Reclaims wasted/stranded space inside a guest OS
2. Uses a variable block size to better suit applications and use cases
[Diagram: a traditional VMDK accumulates wasted blocks; a Space Efficient Sparse VMDK has no wasted blocks]
Increasing VMFS File Sharing Limits – Joseph Dieckhans
vSphere 5.1 supports sharing a file on a VMFS datastore with up to 32 concurrent ESXi hosts (the previous limit was 8).
Storage DRS & vCloud Director – Joseph Dieckhans
vCloud Director interoperability and support for linked clones:
• vCloud Director will use Storage DRS for the initial placement of linked clones during Fast Provisioning.
• vCloud Director will use Storage DRS for managing space utilization and I/O load balancing.
Storage vMotion – Parallel Migration Enhancement – Joseph Dieckhans
In vSphere 5.1, Storage vMotion performs up to 4 parallel disk migrations per Storage vMotion operation.
16GFC Performance Study by VMware
New 16GFC Support in vSphere 5.1
• Provides new support for 16GFC on vSphere 5.1 for better storage I/O performance
• Performance results:
– The newly added 16GFC driver has twice the throughput of an 8GFC HBA, at a better CPU cost per I/O
– Reached 16GFC wire speed for random I/Os at an 8KB block size
• Whitepaper: Storage I/O Performance on VMware vSphere 5.1 over 16 Gigabit Fibre Channel
Comparison of Throughput and CPU Efficiency
16GFC Driver delivers double the throughput at better CPU efficiency per I/O
Sequential read I/Os over a 16GFC or an 8GFC port (single Iometer worker in a single VM)
Throughput and CPU cost per I/O comparison between two adapters. (see note on server configuration)
[Charts: sequential read throughput (MBps) and CPU cost per I/O (lower is better) across block sizes from 1KB to 256KB, 8Gb vs. 16Gb]
More Bandwidth and Better IOPS
The 16GFC adapter can attain much better IOPS than the 8Gbps wire-speed limit of an 8GFC port allows.
Random read I/Os from 1 VM to 8 VMs over a 16GFC port (single Iometer worker per VM)
[Charts: random read throughput (MBps) and IOPS across block sizes from 1KB to 16KB for 1, 2, 4, 6, and 8 VMs; the 8Gbps wire speed caps the throughput of an 8Gb FC HBA]
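The wire-speed caps behind these charts can be sanity-checked with simple arithmetic. A minimal sketch, assuming effective single-direction data rates of roughly 800 MB/s for 8GFC and 1600 MB/s for 16GFC (the exact effective rates are assumptions here):

```shell
# Upper bound on IOPS when the FC link itself is the bottleneck:
#   IOPS <= wire speed (bytes/s) / block size (bytes)
for kb in 1 4 8 16; do
  iops_8g=$(( 800000000 / (kb * 1024) ))    # ~800 MB/s for 8GFC
  iops_16g=$(( 1600000000 / (kb * 1024) ))  # ~1600 MB/s for 16GFC
  echo "${kb}KB blocks: 8GFC <= ${iops_8g} IOPS, 16GFC <= ${iops_16g} IOPS"
done
```

At small block sizes the adapter and host CPU, not the wire, become the bottleneck, which is why measured IOPS fall below these theoretical bounds.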
Server and Workload Configuration
ESX Host
HP ProLiant DL370, dual quad-core Intel Xeon W5580 processors
Emulex LPe16002 16GFC HBA initiator
Emulex LPe12000 8GFC HBA initiator
EMC VNX7500 Storage Array
8GFC target ports connected to 16GFC SAN switch for LPe16002 initiator
8GFC target ports connected to 8GFC SAN switch for LPe12000 initiator
32 SSD-cached LUNs of size 256MB, with mirrored write cache enabled at the VNX array
Virtual Machine and Workload
Windows 2008 R2 64-bit guest OS; single vCPU and a single PVSCSI virtual controller
Single Iometer worker and 4 target LUNs in each VM, at 32 outstanding I/Os per target LUN
Emulex 16GFC PCIe 3.0 HBAs
Single Port Max IOPS
[Chart: LPe16002 vs. LPe12002]
Single Port Max MB/s
[Chart: LPe16002 vs. LPe12002]
Half the I/O Response Time
Average I/O response during a single SSD LUN read I/O
[Chart: LPe16002 vs. LPe12002]
Best Practices for 16GFC HBAs
Stay up to date with firmware and drivers that are tested and supported per the VMware HCL
Update the firmware preferably during planned downtime
OEM adapters – visit partner website for latest firmware and drivers
Update inbox drivers
Always check with the storage vendor for the recommended queue depth settings
Always check with the storage vendor for the recommended Multipathing policy
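The driver and queue-depth checks above can be scripted on an ESXi 5.x host with esxcli. A minimal sketch, assuming the Emulex lpfc driver; the queue-depth value (64) and the naa.xxxx device identifier are illustrative placeholders, so confirm both with your storage vendor first:

```shell
# Show the currently installed lpfc driver package and version
esxcli software vib list | grep -i lpfc

# Inspect current lpfc module parameters, including queue depths
esxcli system module parameters list -m lpfc

# Set the per-LUN queue depth (example value; takes effect after a host reboot)
esxcli system module parameters set -m lpfc -p "lpfc_lun_queue_depth=64"

# Set the Round Robin path selection policy on a device, if the array vendor recommends it
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR
```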
HBA Management in Virtual Environments
OneCommand Manager for VMware vCenter Server
OneCommand Manager software plug-in for the VMware vCenter Server console:
– Real-time lifecycle management for Emulex adapters from vCenter Server
– Builds on Emulex CIM providers and OCM features – no new agents
– Extends the vCenter Server console with an Emulex OneCommand tab
Display / manage adapters with multiple views and filters:
– View per VMware host, per VMware cluster, per network fabric
– Firmware version, hardware type, and many other display filters
Batch update adapter firmware across VMware clusters:
– Deploy firmware across hosts in a cluster
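On a single host, the same firmware operations can also be driven from the OneCommand Manager CLI. A minimal sketch; the WWPN and firmware file name below are hypothetical placeholders, and exact hbacmd verbs can differ by OCM version, so check the OneCommand Manager manual for your release:

```shell
# List Emulex adapters and their WWPNs visible to OneCommand Manager
hbacmd ListHBAs

# Download a firmware image to one adapter port, addressed by WWPN
# (10:00:00:00:c9:xx:xx:xx and the .grp file name are placeholders)
hbacmd Download 10:00:00:00:c9:xx:xx:xx lpe16000_fw.grp
```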
OneCommand Manager for VMware vCenter Server – Cluster View (Hosts in a VMware Cluster)
[Screenshot: Emulex OneCommand tab showing VMware hosts, VMs and clusters; OCM cluster-based management tasks; data window for selected items]
Resources
Implementers Lab
One-stop site for IT administrators and system architects (implementers)
Technically accurate and straightforward resources
Fibre Channel, OEM Ethernet, and ESXi 5.0 deployments
How-to guides for solutions from:
– HP
– IBM
– Dell
Please wander around our website – Implementerslab.com
Additional Resources
VMware
– Storage I/O Performance on VMware vSphere 5.1 over 16GFC
– Blog: Storage Protocol Comparison – A vSphere Perspective
– Technical Resources
Emulex
– www.ImplementersLab.com
– Demartek LPe16000B Evaluation report
– OneCommand Manager for VMware vCenter
– OneCommand Manager
– OneCommand Vision
Final Thoughts…
Virtualization adoption is spreading
– More virtualization spreading to cloud, VDI, and mission-critical applications
Virtualization density is increasing
– Enabled by bigger servers, more memory, faster networks, and vSphere
Fibre Channel is the most popular network for SANs
– Networking is the #2 factor (after memory) for bigger VM deployments
16GFC from Emulex is here:
– Lower latency, better throughput, and more IOPS for bigger VM deployments
– Best management for vSphere
Q & A