Technical Report
Configuring and Installing the EqualLogic Multipathing Extension Module for VMware vSphere and PS Series SANs

Abstract: This Technical Report explains the benefits of the EqualLogic Multipathing Extension Module (MEM) for VMware vSphere 5.x and 4.1, which provides multipath I/O for highly available access to the Dell EqualLogic PS Series SAN.

TR1074
V1.3
Copyright 2013 Dell Inc. All Rights Reserved.
EqualLogic is a registered trademark of Dell Inc.
Dell is a trademark of Dell Inc.
All trademarks and registered trademarks mentioned herein are
the property of their respective owners.
Information in this document is subject to change without
notice.
Dell Inc. will not be held liable for technical or editorial
errors or omissions contained herein.
Reproduction in any manner whatsoever without the written
permission of Dell is strictly prohibited.
Authored by: David Glynn
[September 2013]
WWW.DELL.COM/PSseries
Preface
PS Series arrays optimize resources by automating performance
and network load balancing. Additionally, PS Series arrays offer
all-inclusive array management software, host software, and free
firmware updates.
Audience
The information in this guide is intended for administrators of VMware vSphere environments utilizing EqualLogic iSCSI arrays.
Related Documentation
For detailed information about PS Series arrays, groups,
volumes, array software, and host software, log in to the
Documentation page at the customer support site.
Dell Online Services
You can learn about Dell products and services using this
procedure:
1. Visit http://www.dell.com or the URL specified in any Dell
product information.
2. Use the locale menu or click on the link that specifies your
country or region.
Dell EqualLogic Storage Solutions
To learn more about Dell EqualLogic products and new releases
being planned, visit the Dell EqualLogic TechCenter site:
http://delltechcenter.com/page/EqualLogic. Here you can also find
articles, demos, online discussions, technical documentation, and
more details about the benefits of our product family.
For an updated Dell EqualLogic compatibility list please visit
the following URL:
https://eqlsupport.dell.com/support/download.aspx?id=6442454231&langtype=1033
Table of Contents
Revision Information ......... iii
Executive Summary ......... 1
Introduction ......... 1
Deploying the EqualLogic MEM ......... 2
  Deployment Considerations: Requirements ......... 2
  Deployment Considerations: Storage Heartbeat on vSphere 5.0 and 4.1 ......... 2
  Deployment Considerations: iSCSI Connection Count ......... 3
  Deployment Considerations: iSCSI Login Timeout on vSphere 5.x ......... 4
  Deployment Considerations: Best Practices parameter in MEM 1.2 ......... 5
Configuring an iSCSI vSwitch for Multipathing ......... 6
  Interactive Mode configuration ......... 6
  Unattended Mode configuration ......... 8
Installing the EqualLogic MEM ......... 9
Verification of MEM iSCSI Session Creation ......... 10
Advanced iSCSI connection configuration parameters ......... 11
  Increasing the default values ......... 11
  Decreasing the default values ......... 12
  Setting the EHCM configuration values under vSphere 5.x ......... 13
  Setting the EHCM configuration values under vSphere 4.1 ......... 13
Summary ......... 14
Technical Support and Customer Service ......... 14
Appendix A: Example iSCSI vSwitch configurations ......... 15
  Overriding hardware iSCSI offload default utilization ......... 15
  Utilizing a vNetwork Distributed Switch ......... 15
  Enabling Jumbo Frames ......... 15
  Setting the iSCSI discovery address ......... 15
Appendix B: Installing the MEM with VMware Update Manager ......... 16
Revision Information
The following table describes the release history of this Technical Report.

Revision  Date            Description
1.0       November 2011   Initial Release
1.1       February 2012   General Availability updates
1.2       September 2012  Updated Storage Heartbeat recommendation for vSphere 5.1
1.3       September 2013  Updated to reflect firmware 6.0 and vSphere 5.5
The following table shows the software and firmware used for the preparation of this Technical Report.

Vendor  Model                                                            Software Revision
VMware  vSphere 4.1 or 5.x with Enterprise licensing                     4.1 or 5.x
Dell    Dell EqualLogic PS Series SAN                                    4.3 or above
Dell    EqualLogic Multipathing Extension Module for VMware vSphere 5.x  1.2
Dell    EqualLogic Multipathing Extension Module for VMware vSphere 4.1  1.1.1
The following table lists the documents referred to in this Technical Report. All PS Series Technical Reports are available on the Customer Support site at: support.dell.com

Vendor  Document Title
Dell    Dell EqualLogic PS Series Array Administration Guide
Dell    EqualLogic Multipathing Extension Module: Installation and User Guide
VMware  vSphere Storage Guide
VMware  vSphere Installation and Setup Guide
VMware  Installing and Administering VMware vSphere Update Manager
VMware  vSphere Command-Line Interface Concepts and Examples
VMware  vSphere Management Assistant Guide
Conventions
Throughout this document the term ESXi is used to refer to both ESX and ESXi hypervisors. The EqualLogic MEM is supported on vSphere 4.1 ESX and ESXi, and vSphere 5.x ESXi hypervisor platforms with Enterprise or Enterprise Plus licensing.
Executive Summary
High availability is an important requirement of any system in the datacenter. This availability is even more critical if that system is a component in the virtual infrastructure upon which a virtualized datacenter is built. Redundant hardware and RAID technologies form a critical foundation. When using shared storage, the paths from the servers to the storage must also be redundant and highly available.
This Technical Report details the benefits of Dell's EqualLogic Multipathing Extension Module (MEM) for VMware vSphere, as well as the installation and configuration process to provide multipath I/O for highly available access to the Dell EqualLogic PS Series SAN. Also covered are a number of overall virtual environment iSCSI design considerations and best practices.
Introduction
VMware vSphere offers many enhancements to the software iSCSI initiator beyond basic iSCSI SAN connectivity. The most significant of these enhancements is the API support for third party multipathing plugins. This provides a framework that enables the EqualLogic MEM to intelligently route and efficiently load balance iSCSI traffic across multiple NICs.
The EqualLogic MEM provides the following benefits:
Ease of install Increased bandwidth Reduced network latency
Automatic failure detection and failover Automatic load balancing
across multiple active paths Automatic connection management
Multiple connections to a single iSCSI target
The EqualLogic MEM utilizes the same multipathing iSCSI vSwitch as VMware's Round Robin multipathing. As part of Dell's continuous efforts to help customers simplify the configuration of their IT environments, the iSCSI MPIO vSwitch configuration process for the EqualLogic MEM has been reduced to either a single command with a few parameters or a guided question and answer process. The MEM installation process is equally straightforward, requiring that only a single command be executed. The MEM can also be installed using VMware's vSphere Update Manager.
Once installed, the EqualLogic MEM will automatically create
iSCSI sessions to each member that a volume spans. As the storage
environment changes, the MEM will respond by automatically adding
or removing iSCSI sessions as needed.
As storage I/O requests are generated on the ESXi hosts, the MEM
will intelligently route these requests to the array member best
suited to handle the request. This results in efficient load
balancing of the iSCSI storage traffic, reduced network latency and
increased bandwidth.
Deploying the EqualLogic MEM
Deploying the EqualLogic MEM consists of two steps:
- Configuring a vSwitch for iSCSI multipathing
- Installing the EqualLogic MEM
The entire process of creating the multipathing compatible
vSwitch and installing the EqualLogic multipathing extension module
can be quickly and efficiently completed. Once deployed, it can
provide both new and existing ESXi hosts with increased performance
to EqualLogic storage resources.
Deployment Considerations: Requirements
The EqualLogic MEM has the following requirements:
- VMware vSphere ESXi 4.1 or 5.x with Enterprise or Enterprise Plus licensing
- VMware CLI (Command-Line Interface) or VMware vMA (vSphere Management Assistant) compatible with the above version of ESXi
- Compatible EqualLogic array firmware; refer to the Release Notes.
Prior to deploying the MEM, the following steps should be completed:
1. The ESXi host(s) must be placed in maintenance mode.
2. Download and install either the VMware vMA or the vSphere CLI. Refer to VMware's documentation for installation and configuration details of these tools.
3. Download the Dell EqualLogic Multipathing Extension Module for VMware vSphere (MEM) from the EqualLogic Support website. There are currently two versions of the MEM: MEM 1.2, which supports versions 5.0, 5.1 and 5.5 of ESXi, and MEM 1.1.2, which supports versions 4.1, 5.0 and 5.1 of ESXi. Customers are encouraged to use the most current version applicable to the version of ESXi they are using. The download will be a ZIP archive named EqualLogic-ESX-Multipathing-Module.zip.
4. Unpack the ZIP archive. It will contain the files:
   dell-eql-mem-esx-.zip
   Note: This is a VIB offline bundle and should not be unzipped.
   MEM-Release_Notes.pdf
   MEM-User_Guide.pdf
   README.txt
   setup.pl
5. If using the vMA, upload setup.pl and the VIB offline bundle to the vMA.
6. Retain the MEM-Release_Notes.pdf, MEM-User_Guide.pdf, and README.txt files for reference.
Note: MEM 1.2 is only compatible with vSphere 5.x and certified
on EqualLogic firmware version 5.2. For prior versions of vSphere
or firmware use MEM 1.1.x.
Deployment Considerations: Storage Heartbeat on vSphere 5.0 and 4.1
In the VMware virtual networking model, certain types of
vmkernel network traffic are sent out a default vmkernel port for
each subnet. The iSCSI multipathing network
configuration requires that the iSCSI vmkernel ports use a
single physical NIC as an uplink. As a result, if the physical NIC
that is being used as the uplink for the default vmkernel port goes
down, network traffic that is using the default vmkernel port will
fail. This includes vMotion traffic, SSH access, and ICMP ping
replies.
Though iSCSI traffic isn't directly affected by this condition, a
side effect of the suppressed ping replies is that the EqualLogic
PS Series group will not be able to accurately determine
connectivity during the login process, and therefore a suboptimal
placement of iSCSI sessions will occur. In some scenarios,
depending upon array, server and network load, logins may not be
completed in a timely manner. To prevent this from occurring, Dell
recommends that a highly available vmkernel port be created on the
iSCSI subnet serving as the default vmkernel port for such outgoing
traffic.
Note: This recommendation for using Storage Heartbeat applies
only to vSphere 4.1 and 5.0. It is not necessary with the
improvements in vSphere 5.1 and later.
Deployment Considerations: iSCSI Connection Count
The Dell EqualLogic MEM provides for increased bandwidth and
lower latency access to EqualLogic storage. This performance
benefit is achieved through leveraging several behaviors that are
unique to the EqualLogic peer storage architecture. It is important
to understand how the MEM achieves these performance gains, and the
potential impact it can have if these considerations are not taken
into account.
The MEM achieves its performance gains by creating multiple iSCSI connections to each PS Series group member on which a datastore volume resides. Assuming the MEM's default settings are in use, and depending on the configuration of the environment, there will be up to six iSCSI connections to a datastore volume. This is a significant increase when compared with VMware's Fixed Path policy, which utilizes one iSCSI connection per volume. This increase in connections must be planned for at deployment time, and also as the vSphere environment and its supporting EqualLogic storage environment scale.
To calculate the number of iSCSI connections that the MEM will consume in a particular environment, utilize the following formula:

(Number of ESXi hosts) * 2 (default iSCSI sessions per volume portion) * (Number of EqualLogic members in the pool, with a maximum value of 3) * (Number of volumes)

For example, in a vSphere environment with eight ESXi hosts, using a two member EqualLogic pool for hosting the ten volumes needed for virtual machines, this would be:

8 * 2 * 2 * 10 = 320 iSCSI connections
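The arithmetic above can be checked with a few lines of shell; the values below come from the worked example and should be replaced with those of the actual environment:

```shell
# Estimate of MEM iSCSI connections (values from the example above)
hosts=8        # number of ESXi hosts
sessions=2     # default iSCSI sessions per volume portion
members=2      # EqualLogic members in the pool (cap this value at 3)
volumes=10     # number of volumes

echo $(( hosts * sessions * members * volumes ))   # prints 320
```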
While this is within the 1024 iSCSI connection limit per pool of
the firmware, it does not take into account other iSCSI connections
to the EqualLogic storage pool from other servers in the
datacenter. Consideration should also be given to how this virtual
environment may grow in the future with respect to additional ESXi
hosts, datastore volumes and additional EqualLogic members.
The release of EqualLogic 6.0 firmware provided for enhanced communication with the MEM. This enables the array to notify the MEM of the total number of iSCSI connections to the pool from all sources. Should the number of iSCSI connections to the pool begin to approach the limit, the MEM will reduce the number of iSCSI sessions it creates to the array, while still maintaining redundancy. Depending on the overall environment and future requirements, changes may need to be made to one or more of the following if there are concerns about the iSCSI connection count growing too large:
- The MEM parameters membersessions, volumesessions or totalsessions: See the Installation and User Guide for details.
- The number of volumes: By increasing the size of a given volume, more virtual machines can reside on it, thereby reducing the total number of volumes needed. With vSphere 5.0, datastore size is no longer restricted to 2TB.
- Reconfigure larger clusters into multiple smaller clusters: By splitting large clusters into smaller clusters, the number of hosts accessing particular volumes is reduced, thereby reducing the number of iSCSI connections consumed.
- The configuration of EqualLogic members: Reducing the number of EqualLogic members per pool, and increasing the number of pools, will increase the number of available iSCSI connections.
For more information on this topic, read TR1072, Dell EqualLogic PS Arrays Scalability and Growth in Virtual Environments:
http://en.community.dell.com/dell-groups/dtcmedia/m/mediagallery/19992296.aspx
As with all rules there are exceptions. The MEM iSCSI connection
algorithm works to avoid creating unnecessary sessions while still
maintaining redundancy. With the PS6110 and PS-M4110, both of which
have only one 10Gb Ethernet port, this results in fewer iSCSI
connections being created when compared to arrays with multiple
Ethernet ports per controller. With the PS6110 and PS-M4110, if
they are in a single member pool, MEM will create two iSCSI
connections per volume, for redundancy. In a two or three member
pool MEM will create a single iSCSI connection per member,
resulting in a total of two or three iSCSI connections being
created per volume. This balances the bandwidth and throughput from
an individual ESXi host with that of an individual EqualLogic
member.
Deployment Considerations: iSCSI Login Timeout on vSphere 5.x
The default value of 5 seconds for iSCSI logins on vSphere 5.x is too short in some circumstances, for example, in a large configuration where the number of iSCSI sessions to the array is close to the limit of 1024 per pool. If a severe network disruption occurs, such as the loss of a network switch, a large number of iSCSI sessions will need to be reestablished. With such a large number of logins occurring, some logins will not be completely processed within the 5 second default timeout period.
Dell therefore recommends applying patch ESXi500-201112001 and
increasing the ESXi 5.0 iSCSI Login Timeout to 60 seconds to
provide the maximum amount of time for such large numbers of logins
to occur.
If the patch is installed prior to installing the EqualLogic
MEM, the MEM installer will automatically set the iSCSI Login
Timeout to the Dell recommended value of 60 seconds.
The iSCSI Login Timeout value can also be set using esxcli with the following syntax:

esxcli iscsi adapter param set --adapter=vmhba --key=LoginTimeout --value=60

See VMware KB 2009330 for additional information.
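To confirm the change took effect, the adapter's parameters can be read back with esxcli. This is a sketch to be run on the ESXi host or through the vCLI/vMA; the adapter name vmhba38 is a placeholder and will vary per host:

```shell
# Read back the software iSCSI adapter parameters and show LoginTimeout.
# vmhba38 is a hypothetical adapter name; list adapters with "esxcli iscsi adapter list".
esxcli iscsi adapter param get --adapter=vmhba38 | grep -i LoginTimeout
```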
Deployment Considerations: Best Practices parameter in MEM 1.2
One of the new options in version 1.2 of the MEM's setup.pl script for configuring the ESXi host's iSCSI vSwitch is the --bestpractice parameter. Using this parameter will change some of the ESXi host's settings from the defaults.
With the MEM 1.2 release the settings that are changed are:
Disabling Delayed ACK: Delayed ACK is a TCP/IP method of allowing segment acknowledgements to piggyback on each other or on other data passed over a connection, with the goal of reducing I/O overhead. One side effect of Delayed ACK is that if the pipeline isn't filled, acknowledgement of data will be delayed. In SAN HQ this can be seen as higher latency during lower I/O periods. Latency is measured from the time the data is sent to when the acknowledgement is received. Since we are talking about disk I/O, any increase in latency can result in poorer performance. Additional information can be found in VMware KB 1002598.
Note: While the iSCSI Login Timeout setting is considered a best practice, it is also considered a requirement, and therefore will always be set to 60 seconds during installation.
Configuring an iSCSI vSwitch for Multipathing
As previously mentioned, the EqualLogic MEM utilizes the same multipathing vSwitch as VMware's Round Robin multipathing. Therefore, if the ESXi hosts are already configured for Round Robin, this step can be omitted. However, if the ESXi hosts are configured for Fixed Path, the iSCSI initiator and associated vSwitch should be removed before continuing. Those who have previously configured Round Robin multipathing will appreciate the powerful functionality and the ease of use the MEM's setup.pl script provides when compared to the manual configuration process.
Note: The instructions provided here refer to using the vMA; however, the syntax is the same whether the vMA or CLI is used. The optional parameters --username and --password have been excluded from the examples below for clarity. If they are not included and vi-fastpass is not configured, the user will be prompted to provide the username and password.
1. Connect to the vMA and change directory to where setup.pl was uploaded.
2. The simplest method of creating a multipathing iSCSI vSwitch is to invoke setup.pl in Interactive Mode:
   setup.pl --configure --server=hostname
   In this configuration mode a series of questions are posed to the user, and in many instances default options are presented. To accept a default, press Enter, as shown in the example below:
Interactive Mode configuration

setup.pl --configure --server=10.124.6.223
Do you wish to use a standard vSwitch or a vNetwork Distributed Switch (vSwitch/vDS) [vSwitch]:
Found existing switches vSwitch0.
vSwitch Name [vSwitchISCSI]:
Which nics do you wish to use for iSCSI traffic? [vmnic1 vmnic2 vmnic3]: vmnic2 vmnic3
IP address for vmknic using nic vmnic2: 192.168.0.215
IP address for vmknic using nic vmnic3: 192.168.0.216
What IP address would you like to use for the highly available heartbeat vmknic (optional)?: 192.168.0.214
Netmask for all vmknics [255.255.255.0]:
What MTU do you wish to use for iSCSI vSwitches and vmknics? Before increasing the MTU, verify the setting is supported by your NICs and network switches. [1500]: 9000
What prefix should be used when creating VMKernel Portgroups? [iSCSI]:
The SW iSCSI initiator is not enabled, do you wish to enable it? [yes]:
What PS Group IP address would you like to add as a Send Target discovery address (optional)?: 192.168.0.200
What CHAP user would you like to use to connect to volumes on this group (optional)?: BLUEcluster
What CHAP secret would you like to use to connect to volumes on this group (optional)?:
Configuring iSCSI networking with following settings:
Using a standard vSwitch 'vSwitchISCSI'
Using NICs 'vmnic2,vmnic3'
Using IP addresses '192.168.0.215,192.168.0.216'
Creating a highly available vmkernel port with IP '192.168.0.214'
Using netmask '255.255.255.0'
Using MTU '9000'
Using prefix 'iSCSI' for VMKernel Portgroups
Using SW iSCSI initiator
Enabling SW iSCSI initiator
Adding PS Series Group IP '192.168.0.200' with CHAP user 'CHAPuser' to Send Targets discovery list.
The following command line can be used to perform this configuration:
setup.pl --configure --server=10.124.6.223 --vswitch=vSwitchISCSI --mtu=9000 --nics=vmnic2,vmnic3 --ips=192.168.0.215,192.168.0.216 --heartbeat=192.168.0.214 --netmask=255.255.255.0 --vmkernel=iSCSI --nohwiscsi --enableswiscsi --groupip=192.168.0.200 --chapuser=BLUEcluster
Do you wish to proceed with configuration? [yes]:
Note that the CHAP secret is not displayed as part of the
command line that Interactive Mode will generate.
3. As the script executes, it will provide status notifications
as it proceeds through several steps to create the multipathing
iSCSI vSwitch.
4. Once complete, a multipathing iSCSI vSwitch, like the example shown below, will be created on the ESXi host.
A detailed list of all the parameter options and their usage can be found in the EqualLogic MEM Installation and User Guide. Additional configuration examples can be found in Appendix A.
Note: The example above was generated using MEM 1.1.1 and vSphere 5.1; the output may differ slightly with other combinations.
Unattended Mode configuration
While the setup.pl script's Interactive Mode is extremely helpful, using it to configure all the hosts in a cluster is not the most efficient means of performing this task. Instead, it is possible to pass the values for the various parameters to the setup.pl script.
At the end of the Interactive Mode configuration shown above, the script presents the resulting command line built from the questions and answers. It is shown again here:

setup.pl --configure --server=10.124.6.223 --vswitch=vSwitchISCSI --mtu=9000 --nics=vmnic2,vmnic3 --ips=192.168.0.215,192.168.0.216 --heartbeat=192.168.0.214 --netmask=255.255.255.0 --vmkernel=iSCSI --nohwiscsi --enableswiscsi --groupip=192.168.0.200 --chapuser=BLUEcluster

Using this as an example, and by modifying the hostname and IP information, additional ESXi hosts can be easily provisioned with the same vSwitch configuration. For example:

setup.pl --configure --server=10.124.6.224 --vswitch=vSwitchISCSI --mtu=9000 --nics=vmnic2,vmnic3 --ips=192.168.0.225,192.168.0.226 --heartbeat=192.168.0.224 --netmask=255.255.255.0 --vmkernel=iSCSI --nohwiscsi --enableswiscsi --groupip=192.168.0.200 --chapuser=BLUEcluster --chapsecret=BLUEsecret

Note that the CHAP secret is not displayed as part of the command line that Interactive Mode generates, but has been added to the example above. If not included, setup.pl will prompt for the secret at run time. Depending upon the required configuration of the multipathing iSCSI vSwitch, additional parameters may be required or removed. For a full listing of the parameters and their defaults, see "Configuring Your Network for the MEM Plugin" in the Dell EqualLogic MEM Installation and User Guide.
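Since the per-host command lines differ only in the --server and IP parameters, configuring several hosts can be scripted. The loop below is a sketch: the hostnames are those of the two examples above, and the assumption that each host's iSCSI addresses step by ten in the last octet is illustrative only.

```shell
# Sketch: unattended vSwitch configuration for several hosts.
# Hostnames and the +10 per-host IP offset are illustrative assumptions.
offset=0
for host in 10.124.6.223 10.124.6.224; do
  setup.pl --configure --server=$host \
    --vswitch=vSwitchISCSI --mtu=9000 --nics=vmnic2,vmnic3 \
    --ips=192.168.0.$((215 + offset)),192.168.0.$((216 + offset)) \
    --heartbeat=192.168.0.$((214 + offset)) \
    --netmask=255.255.255.0 --vmkernel=iSCSI --nohwiscsi \
    --enableswiscsi --groupip=192.168.0.200 \
    --chapuser=BLUEcluster --chapsecret=BLUEsecret
  offset=$((offset + 10))
done
```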
Installing the EqualLogic MEM
The EqualLogic MEM can be installed using the command line tools (CLI or vMA) or with VMware Update Manager (VUM). The command line tools are used in this example. See Appendix B for instructions on using VUM for installation.
Setup.pl is a Perl script wrapper around a number of VMware CLI commands, and hence provides administrators with a similar installation process regardless of vSphere version. However, there are some differences, which are listed here:

On vSphere 5.x:
- The multipathing functionality is available immediately after a fresh install; however, hostd needs to be restarted before the new esxcli control and reporting commands become available.
- Prior to installation, setup.pl will copy the VIB to the first datastore it finds; optionally, a particular datastore can be specified by using the --datastore parameter. Due to API limitations the VIB is not deleted by setup.pl.

On vSphere 4.1:
- A reboot is required before the plugin's multipathing functionality is available.
Note: When performing an upgrade or uninstall of the MEM, a reboot is required on all versions of vSphere.
1. Using the vMA or vSphere CLI, execute the following command for each host the MEM is to be installed on:
   setup.pl --install --server=hostname
   It may take a few minutes to install the module to the ESXi host.
2. Depending on the version of vSphere this next step will differ:
   a. On a vSphere 5.x host, the plugin's multipathing functionality is available for use. To enable the new esxcli commands, restart the hostd service by executing the following command on the host, as documented in VMware KB 2004078:
      /etc/init.d/hostd restart
   b. On a vSphere 4.1 host, a reboot is required before the plugin's multipathing functionality is available. To automatically reboot the ESXi host once the install is complete, include the --reboot parameter.
3. To verify the availability of the EqualLogic MEM, execute:
   vSphere 5.x: esxcli storage nmp psp list
   vSphere 4.1: esxcli nmp psp list
This will display a list of the PSPs installed on the host, as shown below:

Name                 Description
DELL_PSP_EQL_ROUTED  Dell EqualLogic Path Selection
VMW_PSP_MRU          Most Recently Used Path Selection
VMW_PSP_RR           Round Robin Path Selection
VMW_PSP_FIXED        Fixed Path Selection
Verification of MEM iSCSI Session Creation
By default the EqualLogic MEM will claim any existing and new EqualLogic volumes and create the additional iSCSI sessions used by the MEM to route the iSCSI data. To verify these additional sessions, perform the following steps:
1. In the vSphere client, click on the Configuration tab. In the Hardware pane, select Storage.
2. Right click on the datastore, select Properties, and then click the Manage Paths button.
3. In the example above the Path Selection Policy has been set to use the MEM, as the path selection shows DELL_PSP_EQL_ROUTED. Four sessions to the volume have been created, as the volume resides in a two member storage pool.
4. This can also be verified from the array web GUI by clicking on the volume's Connections tab. As shown in the example below, two iSCSI sessions have been established from each of the VMkernel ports on the ESXi host to the volume.
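On a vSphere 5.x host, the same checks can be made from the command line; this sketch assumes shell access to the host (or the vCLI equivalent):

```shell
# List the active iSCSI sessions the MEM has created (vSphere 5.x)
esxcli iscsi session list

# Show which Path Selection Policy each device is using;
# MEM-claimed volumes should report DELL_PSP_EQL_ROUTED
esxcli storage nmp device list
```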
Advanced iSCSI connection configuration parameters
Few configuration parameters are needed with the EqualLogic MEM. The few that do exist are used to alter the runtime behavior of the EqualLogic Host Connection Manager (EHCM), which manages the iSCSI sessions to the EqualLogic volumes. For information on all parameters, refer to the Dell EqualLogic MEM Installation and User Guide.
Three configuration parameters affect the number of iSCSI sessions that will be created:
- totalsessions: The maximum total number of sessions that may be created to all EqualLogic volumes that the host can access. Default value: 512
- volumesessions: The maximum number of sessions that may be created to any individual EqualLogic volume. Default value: 6
- membersessions: The maximum number of sessions that may be created to the portion of a volume that resides on a single member. Default value: 2
When determining how many sessions to create, EHCM chooses a
value that meets all these constraints. The default values were
chosen to provide good performance without unnecessarily consuming
excess resources. In the majority of installations, these settings
do not need to be changed from the defaults.
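How these limits interact can be sketched as a min-of-constraints calculation. The sketch below only illustrates the description above using the default values; it is not EHCM's actual implementation:

```shell
# Illustrative only: per-volume session count implied by the default limits
membersessions=2   # max sessions per member portion of a volume (default)
volumesessions=6   # max sessions per volume (default)
members=2          # members the volume spans (example value)

per_volume=$(( membersessions * members ))
if [ "$per_volume" -gt "$volumesessions" ]; then
  per_volume=$volumesessions
fi
echo "$per_volume"   # 4 sessions for a volume on a two member pool
```

With the defaults, a volume spanning three or more members hits the volumesessions cap of 6, matching the "up to six iSCSI connections" figure quoted earlier.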
Prior to altering these values it is strongly advised that an
administrator fully understand the impact of such a change on the
total number of iSCSI connections. For information on the number of
iSCSI connections that particular firmware versions can handle at a
per pool and group level, refer to the release notes for that
firmware. Refer to VMwares documentation for details on vSphere
limits that may apply.
The "Deployment Considerations: iSCSI Connection Count" section of this technical report includes a formula for calculating the iSCSI connection count based on the number of hosts, sessions per volume portion, EqualLogic members in the pool, and the number of volumes the hosts are connected to. The total iSCSI connection count should also take into account connections from other systems, such as other physical servers, virtual machines using iSCSI within the guest, and backup servers, as well as short-lived connections used by XCOPY (used by the vSphere VAAI Full Copy primitive) and by replication.
Increasing the default values
In some instances there may be a preference to increase the number of iSCSI connections. Typically these preferences only exist in lab settings, where the desire is to benchmark I/O to an individual volume or in other ways test the limits.
In such a configuration it is possible that the default membersessions value is limiting the number of iSCSI connections to fewer than the number of VMkernel ports bound to the iSCSI initiator. To utilize all the VMkernel ports bound to the iSCSI initiator, membersessions should be increased to a value high enough to enable EHCM to create iSCSI connections through all bound VMkernel ports.
Generally this increase would not be beneficial in a production environment. In a production environment multiple volumes are used, and EHCM would, in aggregate, create and balance an optimal number of iSCSI sessions, effectively utilizing all of the network ports.
Decreasing the default values
In large configurations it is possible to exceed the iSCSI
connection limits of the array firmware. In such cases it is
necessary to reduce the number of iSCSI connections to be within
limits. This can be achieved in several ways including:
Reduce the number of volumes: By increasing the size of a given
volume, more virtual machines can reside on it, thereby reducing
the total number of volumes needed. With vSphere 5.0, datastores
are no longer restricted to 2TB.
Reconfigure larger clusters into multiple smaller clusters. By
splitting a large cluster into smaller clusters, the number of
hosts accessing particular volumes is reduced, thereby reducing the
number of iSCSI connections consumed.
Alter the configuration of the EqualLogic group: Reducing the
number of EqualLogic members per pool, and increasing the number of
pools, will increase the number of available iSCSI connections.
However, these changes typically take some time to implement.
Altering the EHCM parameters provides a means of quickly reducing
the iSCSI connection count to within limits. However, depending on
the overall environment configuration and I/O workload, a drop in
overall performance may be observed.
totalsessions: This parameter defaults to a maximum of 512 and
can be lowered to 64. It is the maximum number of iSCSI
connections that EHCM will create from this individual ESXi host to
all EqualLogic arrays. If an additional volume is added to the host
EHCM will automatically rebalance the sessions to include the new
and existing volumes, yet not consume more than the totalsessions
limit.
volumesessions: This parameter defaults to a maximum of 6 and
can be lowered to 1. It is the maximum number of iSCSI
connections that EHCM will make from this individual ESXi host to
each individual EqualLogic volume, regardless of the number of
array members the volume may span as the group expands. This value
applies to all volumes that the host accesses, and cannot be
specified to affect only particular volumes. As additional members
are added to the pool, and volumes are distributed to include this
new member, EHCM will create additional sessions to the volume
within this limit.
membersessions: This parameter defaults to a maximum of 2 and
can be lowered to 1. It is the number of iSCSI connections that
EHCM will make from this individual ESXi host to the portion of a
volume that resides on an individual EqualLogic member. Setting
membersessions to 1 will halve the iSCSI connection count from the
ESXi host to the array. However, this effectively disables the
Least Queue Depth part of the MPIO algorithm, as there is only one
queue to each portion of the volume to choose from. The Intelligent
Routing part of the algorithm remains in use, so I/O requests will
still be efficiently directed to the appropriate member.
Setting the EHCM configuration values under vSphere 5.x
For vSphere 5.0 EqualLogic has added to the existing esxcli
command set a number of commands that can be used to control and
report on the status of the MEM. It is through these commands that
the EHCM parameters are changed.
esxcli can be executed remotely from the vMA or the vSphere CLI,
or directly on the ESXi host.
The syntax of this command is:
esxcli equallogic param set --name=parameter_name
--value=parameter_value
For example:
esxcli equallogic param set --name=membersessions --value=1
To query the current EHCM values, use the following syntax:
esxcli equallogic param list
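As an illustration of reducing the limits discussed earlier, the parameters can be set individually on a vSphere 5.x host and then verified in one pass. The values shown are examples chosen within the documented ranges, not recommendations:

```shell
# Illustrative values only -- choose limits appropriate to the environment.
esxcli equallogic param set --name=totalsessions --value=256
esxcli equallogic param set --name=volumesessions --value=4
esxcli equallogic param set --name=membersessions --value=1
# Confirm the new values:
esxcli equallogic param list
```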
Setting the EHCM configuration values under vSphere 4.1
For vSphere 4.1 the setup.pl script is used to change and query
the EHCM configuration values. It can be executed from the vMA or
CLI, but not directly on the ESXi host.
The syntax of this command is:
setup.pl --setparam --name=parameter_name
--value=parameter_value --server=hostname
For example:
setup.pl --setparam --name=membersessions --value=1
--server=hostname
To query the current EHCM values use the following syntax:
setup.pl --listparam --server=hostname
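When the same change must be applied to several vSphere 4.1 hosts, the setup.pl call can be wrapped in a simple loop on the vMA. This is a sketch; the hostnames below are placeholders for the ESXi hosts in the cluster:

```shell
# Hypothetical hostnames -- substitute the actual ESXi hosts.
for HOST in esx01.example.com esx02.example.com esx03.example.com; do
  setup.pl --setparam --name=membersessions --value=1 --server="$HOST"
done
```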
Summary
The EqualLogic MEM, through its intelligent routing and
load balancing, can provide a reduction in network latency and an
increase in bandwidth to PS Series Storage arrays. Through
automated setup and host connection management, the MEM also
reduces the steps for deployment and ongoing management of advanced
vSphere iSCSI configurations.
Technical Support and Customer Service
Dell support service is available to answer your questions about
PS Series SAN arrays.
Contacting Dell
1. If you have an Express Service Code, have it ready. The code
helps the Dell automated support telephone system direct your call
more efficiently.
2. If you are a customer in the United States or Canada in need
of technical support, call 1-800-945-3355. If not, go to Step
3.
3. Visit support.dell.com/equallogic.
4. Log in, or click Create Account to request a new support
account.
5. At the top right, click Contact Us, and call the phone number
or select the link for the type of support you need.
Appendix A: Example iSCSI vSwitch configurations
This appendix provides some examples of the additional parameters
used with the setup.pl configuration script. A detailed list of all
the parameters and their usage can be found in the EqualLogic MEM
Installation and User Guide.
Overriding hardware iSCSI offload default utilization
With vSphere 4.1 and above, ESXi can utilize the iSCSI offload
capabilities of the Broadcom NetXtreme II network adapters,
resulting in significantly lower software iSCSI CPU utilization.
The setup.pl configuration script will, by default, utilize this
iSCSI offload capability if it is present. If there is a preference
not to use the iSCSI offload capability this must be specified when
configuring the iSCSI vSwitch. This is shown below:
setup.pl --configure --server=172.17.5.121 --nics=vmnic2,vmnic3
--ips=192.168.0.215,192.168.0.216 --nohwiscsi
Utilizing a vNetwork Distributed Switch
With Enterprise Plus licensing VMware provides a virtual switch
which spans many ESXi hosts. This abstracts the configuration of
individual vSwitches on the host level and enables centralized
management through vSphere vCenter Server. If utilizing a vNetwork
Distributed Switch for iSCSI traffic the --vds parameter must be
specified. Should the name of the vNetwork Distributed Switch
differ from the default utilized by the configuration script, it
can be specified using the --vswitch parameter. This is shown
below:
setup.pl --configure --server=172.17.5.121 --nics=vmnic2,vmnic3
--ips=192.168.0.215,192.168.0.216 --vswitch=vdsISCSI --vds
Enabling Jumbo Frames
With the vSphere release of ESXi, support for Jumbo Frames has
been extended to VMkernel traffic, which includes the iSCSI stack.
To utilize Jumbo Frames they must be enabled on all networking
components used. By default the setup.pl script uses an MTU of 1500
when creating the iSCSI vSwitch and VMkernel interfaces. To use a
larger value specify it as demonstrated in the example below.
setup.pl --configure --server=172.17.5.121 --nics=vmnic2,vmnic3
--ips=192.168.0.215,192.168.0.216 --mtu=9000
Setting the iSCSI discovery address
It is possible to specify the group IP of an array to be set as
the Send Targets discovery address for the iSCSI initiator.
setup.pl --configure --server=172.17.5.121 --nics=vmnic2,vmnic3
--ips=192.168.0.215,192.168.0.216 --groupip=192.168.0.200
Appendix B: Installing the MEM with VMware Update Manager
VMware Update Manager has the ability to install and upgrade third
party packages on ESXi hosts. This enables administrators not only
to manage the patching and updating of ESXi hosts, but also to
update third party packages installed on them. Installing the MEM
consists of four major steps. Steps 1-3 need to be done once per
version release of the MEM:
o Importing the MEM to the Patch Repository
o Creating an extension baseline that includes the MEM
o Attaching the baseline to the environment
Step 4 needs to be done for each host in the cluster or
datacenter:
o Installing the MEM to each host
Step 1: Importing the MEM to the Patch Repository
1. Download Dell's EqualLogic MEM from the EqualLogic Support
website.
2. Unpack the ZIP archive. Do not unpack the embedded zip file;
this is a VIB offline bundle and is expected to be in this format.
3. From the vSphere client Home section, select Update Manager
from under Solutions and Applications.
4. From the Admin View select the Patch Repository tab and then
click on Import Patches to start the Import Patches wizard.
5. Click on Browse, browse to where the Zip archive was unpacked,
select the embedded Zip file named dell-eql-mem-.zip, click Open
and then click Next.
6. The Import Patches wizard will upload and analyze the file;
this may take a few minutes, and will then present a Confirm Import
page. Verify the details are as expected and click Finish.
Step 2: Creating an Extension Baseline
7. From the vSphere client select Update Manager -> Admin
View and select the Baselines and Groups tab.
8. On the Baselines and Groups tab, in the Baseline section,
click Create to start the New Baseline wizard.
9. Provide the baseline with a suitable name and optional
description, select the baseline type Host Extension and then click
Next, as shown in the example below.
10. From the list of extensions, highlight the Dell EqualLogic
iSCSI MEM 1.1.0, click the selection button and then click Next. If
there are a large number of extensions in the repository, use
EqualLogic as a keyword for filtering.
11. On the Ready to Complete page of the wizard verify that the
information is correct, and click Finish.
Step 3: Attaching the Extension Baseline
12. From the vSphere client select Update Manager ->
Compliance View.
13. From the tree pane, on the left, select the Datacenter,
Cluster, or Host that the MEM Extension Baseline is to be attached
to, and then click on Attach to start the Attach Baseline wizard.
14. Select the MEM extension baseline created above and click on
Attach to attach the Extension Baseline to the vSphere Datacenter,
Cluster or Host.
Step 4: Installing the MEM
15. From the vSphere client select Update Manager ->
Compliance View.
16. In the Attached Baselines pane highlight the MEM extension
baseline that was attached earlier and click the Remediate button
to start the Remediation wizard.
17. On the Remediation Selection page of the wizard verify that
the correct baseline is listed, and deselect any host on which the
EqualLogic MEM is not to be installed at this time. Click Next
to continue.
18. The Patches and Extensions page lists the patches and
extensions to be applied. Click Next to continue.
19. On the Schedule page change the Task Name, if desired, and
optionally provide a Task Description. There is also the option to
schedule the deployment for a future time and date, rather than
Immediately, which is selected by default. Click Next to
continue.
20. On the Host Remediation Options and Cluster Remediation
Options pages there are options for altering the behaviors of the
virtual machines, hosts, and cluster during the install. Refer to
the vCenter Update Manager Installation and Administration Guide
for details. Click Next to continue.
21. On the Ready to Complete page, verify that the information
is correct, and click Finish. Unless the task was scheduled to run
at a later time, it will be executed immediately.
22. As the task is being executed it will display status updates
in the Recent Tasks pane of the vSphere client. If several hosts
were selected, the task will execute against one host at a time,
until the MEM has been installed on all the selected hosts.
23. Once the task has been completed the MEM will have been
installed or updated on all the hosts selected.
24. As shown in the screenshot below, the vCenter Update Manager
can provide a clear graphical view of which ESXi hosts the MEM is
installed or updated on, and which hosts it has yet to be installed
or updated on.
Note: vCenter Update Manager is only capable of installing the MEM.
vMA or vCLI will need to be utilized to configure the iSCSI vSwitch
for the MEM.